Mr. Chairman and Members of the Subcommittee: We are pleased to be here today to discuss our observations on the Office of Management and Budget's (OMB) efforts to carry out its responsibilities to set policy and oversee the management of the executive branch. As you know, last month we issued a major new series of reports, entitled Performance and Accountability Series: Major Management Challenges and Program Risks, and an update of our high risk series. Collectively, the reports show that long-standing performance and management challenges hinder the federal government's efforts to achieve results. The report series also highlighted numerous improvements that agencies need to make in their performance, management, and accountability. Making these improvements will require the sustained efforts of the leadership and staff within agencies. At the same time, the report series also underscored the pivotal role that the federal government's central management agencies—in particular, OMB—must play in guiding and overseeing agencies' efforts to address the shortcomings that we identified and to implement the changes necessary to improve performance.

Today, as requested by the Subcommittee, we will cover three major points. First, we will provide an outline of OMB's wide-ranging management responsibilities and note that the question of whether to integrate or separate management and budget functions has been long debated. Second, we will discuss the effectiveness of OMB's management leadership, which, in our view, has been uneven. Finally, we will discuss the factors that appear to contribute to progress in sustaining improvements in federal management. As agreed, our statement today is based on, and updates as appropriate, the testimony we provided on these three points when we appeared before this Subcommittee last May. Our observations are made on the basis of work we are currently doing and have done at federal agencies and at OMB.

OMB is the lead agency for overseeing a statutory framework of financial, information resources, and performance planning and measurement reforms designed to instill a performance-based approach to federal management, decisionmaking, and accountability. This framework contains as its core elements financial management improvement legislation, including the Chief Financial Officers (CFO) Act of 1990, the Government Management Reform Act of 1994, and the Federal Financial Management Improvement Act of 1996; information technology reforms, including the Paperwork Reduction Act (PRA) of 1995 and the Clinger-Cohen Act of 1996; and the Government Performance and Results Act of 1993 (the Results Act).

The CFO Act mandated significant financial management reforms and established the Deputy Director for Management (DDM) position within OMB. In addition to serving as the government's key official for financial management, the DDM is to coordinate and supervise a wide range of general management functions of OMB. These functions include those relating to managerial systems, such as the systematic measurement of performance; procurement policy; regulatory affairs; and other management functions, such as organizational studies, long-range planning, program evaluation, and productivity improvement. OMB is responsible for providing guidance and oversight for various other laws and executive orders as well. For example, the Federal Acquisition Streamlining Act (FASA) requires that executive agency heads set cost, performance, and schedule goals for major acquisition programs and that OMB report to Congress on agencies' progress in meeting these goals.
Executive Order 12866 directs OMB to coordinate the review of agencies' rules and regulations to ensure that they impose the least burden, are consistent between agencies, focus on results over process, and are based on sound cost/benefit analysis. OMB also has been responsible since 1967, through its Circular A-76, for carrying out executive branch policy to rely on competition between the federal workforce and the private sector for providing commercial goods and services.

OMB's perennial challenge is to carry out its central management leadership responsibilities in such a way that leverages opportunities of the budget process, while at the same time ensuring that management concerns receive appropriate attention in an environment driven by budget and policy decisions. Concern that OMB and its predecessor agency, the Bureau of the Budget, lacked the support and institutional capacity necessary to sustain management improvement efforts throughout the executive branch has prompted numerous calls for changes in the past. Over the years, a number of study commissions and advisory groups have examined how the executive branch's management capacity should be organized. In response to the recommendations of one of these groups, the Ash Council, the Bureau of the Budget was reorganized in 1970 and renamed OMB, thereby signaling the intent to heighten the management focus in the agency. However, the creation of OMB did not ensure that an institutionalized capacity for governmentwide management leadership would be sustained, nor did it establish how OMB should balance its budget and management responsibilities. As a result, observers have continued to debate how to best ensure that management issues can be effectively considered within the context of—yet without being overwhelmed by—the budget process. Some observers have advocated integrating the two functions, while others have proposed the creation of dedicated offices or a separate agency to provide governmentwide management leadership.

Prior OMB reorganizations, reflecting these different points of view, have alternated between seeking to more directly integrate management into the budget review process and creating separate management offices. Previous congressional and OMB attempts to elevate the status of management by creating separate management units within OMB sought to ensure that an adequate level of effort was focused on management issues. Underscoring its concern that management issues receive appropriate attention, Congress established the DDM position to provide top-level leadership to improve the management of the federal government.

In 1994, OMB reorganized to integrate its budget analysis, management review, and policy development roles, in an initiative called "OMB 2000." This reorganization was the most recent of a series of attempts to bolster OMB's management capacity and influence. Under this structure, OMB's Resource Management Offices (RMO) are responsible for examining agency budget, management, and policy issues. Linking management reforms to the budget has, at a minimum, provided the opportunity to include management issues as part of the president's yearly budget reviews—a regularly established framework for making decisions. The reorganization also reduced the statutory management offices' staffing levels and transferred their responsibilities for overseeing agencies' implementation of many governmentwide management initiatives to the RMOs. This increased OMB's reliance on RMO managers and staff to focus on management issues and coordinate their activities with the statutory offices. In fiscal year 1997, OMB obligated $56 million and employed over 500 staff.
In recent years, OMB has focused increased attention on management issues, but there is much more that needs to be done. In last year's budget, the Administration took an important first step in what can be seen as an evolving results-based planning and budgeting process. The first Governmentwide Performance Plan, as required by the Results Act, was prepared as an integrated component of the President's 1999 Budget; this year's Plan, released on Monday with the President's 2000 Budget, again describes three aspects of federal government performance: fiscal, management, and program. In OMB's view, the performance of government programs is inextricably linked to the fiscal and economic environment and the management framework in which they operate. In our assessment of the Fiscal Year 1999 Governmentwide Performance Plan, we noted that the separate management performance section within the plan was a useful approach that added essential context and depth to the Plan.

This year's Plan follows a structure similar to that developed last year, including (1) a discussion of the Administration's High Impact Agencies initiative, which focuses on defining service delivery commitments, developing customer and employee satisfaction measures, using interagency partnerships, and enhancing electronic access; and (2) 24 specific priority management objectives (PMO), many of which are also on GAO's high risk list. These PMOs, which are included in the Fiscal Year 2000 Governmentwide Performance Plan, were selected by OMB as areas in need of real change and are intended to create a clear set of priorities for the Administration's management improvement efforts. However, in our assessment of the Fiscal Year 1999 Plan, we noted that there needed to be a clearer and stronger linkage between these PMOs and the underlying agency annual performance plans. Specifically, by improving the discussion of the program performance consequences of the PMOs, OMB could better ensure that agencies develop relevant goals and strategies in their performance plans and clarify agency accountability for specific results. We recommended that OMB ensure that agencies incorporate appropriate goals and strategies in their annual performance plans and describe their relevance to achieving the priority management objectives described in the governmentwide performance plan.

Today, we will highlight some of the management issues that have been both of particular concern to this Subcommittee and the subject of our recent work. Like most organizations, federal agencies increasingly depend on information technology (IT) to improve their performance and meet mission goals. Federal agencies, however, face serious challenges in ensuring effective performance and management of the nearly $27 billion in planned obligations for computer technology and information systems each year. Agencies face the challenge of meeting recent legislative reform requirements to implement strong IT leadership and effective processes for improved management of information technology investments. Of primary concern are agencies' abilities to identify and correct date coding problems with mission-critical systems to meet the Year 2000 deadline. Safeguarding critical government systems and sensitive information from unauthorized access is also crucial.
As the policy and oversight arm of the executive branch, OMB is responsible for guiding and overseeing agency efforts to meet these challenges and enforcing accountability through the executive branch budget formulation and execution process. OMB, together with the President's Council on Year 2000 Conversion, is charged with ensuring that no system critical to the federal government's mission experiences disruption because of the Year 2000 problem. As the Council has concentrated its efforts on international, private-sector, and state and local government issues, OMB has played a key role in tightening requirements on agency reporting of Year 2000 progress. OMB now requires that, beyond the original 24 major departments and agencies, 9 additional agencies report quarterly on their progress, and that all agencies report on their status. Further, OMB places each of the 24 major agencies into one of three tiers after receiving quarterly progress reports, based on OMB's judgment as to whether evidence of the agency's reported progress is or is not sufficient. Additionally, OMB has clarified instructions on agencies preparing business continuity and contingency plans.

Many congressional committees have played a central role in addressing the Year 2000 challenge by holding agencies accountable for demonstrating progress and by heightening public appreciation of the problem. The Congress also passed important Year 2000 legislation. However, serious risks remain. Our reviews of federal Year 2000 programs have found uneven progress; some major agencies are significantly behind schedule and are at high risk that they will not correct all of their mission-critical systems in time. In summary, it is essential that OMB provide leadership in ensuring that priorities continue to be set, rigorous testing be completed, and thorough business continuity and contingency plans be prepared to successfully meet the Year 2000 challenge.

Continuing computer security weaknesses also put critical federal operations and assets at great risk. In September 1998, we reported that recent audits have identified significant information security weaknesses at virtually every major agency. These reports and related audits have helped convince agency managers and others of the importance of information security. This has led to significant actions, including a Presidential directive requiring each major department and agency to develop a plan for protecting critical infrastructures. A series of Senate hearings also highlighted these risks and the need for greater action. OMB, the Chief Information Officer (CIO) Council, and the National Security Council are working collaboratively on a plan to (1) assess agencies' security postures, (2) implement best practices, and (3) establish a process of continued maintenance. In addition, on January 22, President Clinton announced major new initiatives to strengthen our nation's defenses against attacks on our critical infrastructure, computer systems, and networks.

Implementing these initiatives effectively will require a more concerted effort at individual agencies and at the governmentwide level. Agencies need to do a better job of establishing comprehensive computer security programs that address systemic problems as well as individual audit findings in this area. Moreover, we found that most agencies have not addressed enhancing information security in their fiscal year 1999 performance plans.
In addition to individual agency actions, more effective governmentwide oversight is important to (1) ensure that agency executives understand the risks, (2) monitor agency performance, and (3) resolve issues affecting multiple agencies. As these efforts progress, it is important that OMB play a key role in ensuring that a comprehensive federal strategy emerges.

OMB has given increased attention to agencies' information technology management, including reviewing these issues as part of the fiscal year 1999 budget cycle review. In addition, working with the CIO Council, OMB recently revised its guidance to agencies on preparing and submitting their annual IT budget requests. The new format for agency budget exhibits provides greater clarity about types of IT spending and the mission area of the agency that these investments support. Finally, OMB has indicated its intention to revise governmentwide guidance dealing with strategic information management planning and security.

Nevertheless, broad IT management reforms are still in their early stages in most federal agencies. As our reviews demonstrate, agencies continue to be challenged by (1) weaknesses in IT investment selection and control processes; (2) slow progress in designing and implementing IT architectures; (3) inadequate software development, cost estimation, and acquisition practices; and (4) the demand for effective CIO leadership and organizations. Improvements in these areas will be difficult to achieve without effective agency leadership support, highly qualified and experienced CIOs, and effective OMB leadership and oversight. With the Deputy Director for Management serving as its co-chair, OMB must continue to work effectively with the federal CIO Council to focus management attention on putting in place disciplined information technology management processes that can lead to improvements in the delivery of high quality, cost-effective results. The development of the "Raines' Rules"—requiring agencies to satisfy a set of investment management criteria before funding major systems investments—can potentially serve to further underscore the link between information technology management and spending decisions. These criteria were incorporated into OMB guidance to agencies for the fiscal year 2000 budget process.

In the financial management area, agencies have made progress that stems in part from OMB's efforts. For instance, 11 agencies received unqualified audit opinions on their fiscal year 1997 financial statements—up from 6 in fiscal year 1996. At the same time, there are major obstacles to overcome. The most serious challenges are framed by the results of our first-ever audit of the government's consolidated financial statements, for fiscal year 1997; deficiencies in the statements prevented us from being able to form an opinion on their reliability. These deficiencies are the result of widespread material internal control and financial systems weaknesses that significantly impair the federal government's ability to adequately safeguard assets, ensure proper recording of transactions, and ensure compliance with laws and regulations. Financial management has been designated one of OMB's priority management objectives, with a goal of producing performance and cost information in a timely, informative, and accurate way, consistent with federal accounting standards. To help accomplish this goal, a May 26, 1998, presidential memorandum required agency heads to develop plans for resolving the problems that have been identified. Further, House Resolution 447, passed on June 9, 1998, underscored congressional expectations for timely resolutions of the problems.
Considerable effort is now being exerted, and several agencies have made good progress toward achieving financial management reform goals. With a concerted effort, the federal government as a whole can continue to make progress toward generating reliable financial information on a regular basis. While annual audited financial statements are essential to identifying any serious problems that might exist and providing an annual public scorecard on accountability, an unqualified audit opinion, while certainly important, is not an end in itself. The CFO Act is focused on providing, on a systematic basis, accurate, timely, and relevant financial information needed for management decisionmaking and accountability. For some agencies, the preparation of financial statements requires considerable reliance on ad hoc programming and analysis of data produced by inadequate financial management systems. Thus, the overarching challenge in generating timely, reliable data throughout the year is overhauling financial and related management information systems.

OMB is also one of the principals of the Joint Financial Management Improvement Program (JFMIP), which issues financial systems requirements to be followed by all CFO Act agencies. Together with the CFO Council, OMB has established eight priorities as discussed in OMB's Federal Financial Management Status Report and the Five-Year Plan (June 1998). They are: (1) obtaining unqualified opinions on financial statements and issuing accounting standards, (2) improving financial management systems, (3) implementing the Results Act, (4) developing human resources and CFO organizations, (5) improving management of receivables, (6) ensuring management accountability and control, (7) modernizing payments and business methods, and (8) improving administration of federal assistance programs. Finally, OMB is currently piloting accountability reports that provide a single overview of federal agencies' performance, as authorized by the 1994 Government Management Reform Act. By seeking to consolidate and integrate the separate reporting requirements of the Results Act, the CFO Act, and other specified acts, the accountability reports are to show the degree to which an agency met its goals, at what cost, and whether the agency was well-run. If effectively implemented, accountability reports that include information on the full cost and results of carrying out federal activities could greatly aid decisionmaking for our national government.

OMB has a vital role in leading and overseeing agencies' efforts to instill a more performance-based approach to decisionmaking, management, and accountability. OMB has shown a clear commitment, articulated in its fiscal year 1999 annual performance plan and the fiscal year 1999 governmentwide plan, to implement the Results Act. We have recommended that OMB work with Congress and the agencies to identify specific program areas that can be used as best practices. We believe that this would help to demonstrate the use and benefits of performance-based management and how concrete information about program results can contribute directly to congressional and executive branch decisionmaking.

OMB's efforts to improve capital decision-making are another example of where OMB's leadership efforts are yielding some results. OMB and GAO have worked together in this area, with OMB developing a Capital Programming Guide that provides agencies with the key elements for producing effective plans and investments.
OMB's Guide drew on GAO's work on best practices used by leading private sector and state and local governments, which was subsequently published. Consistent with these best practices, OMB has required agencies to submit 5-year capital spending plans and justifications—thus encouraging a longer-term consideration of agency capital needs and alternatives for addressing them. OMB's Guide provides a basic reference on principles and techniques, including appropriate strategies for analyzing benefits and costs, preparing budget justifications, and managing capital assets once they are in place. In addition, OMB has worked closely with the President's Commission to Study Capital Budgeting, which is expected to issue its report and recommendations soon.

As federal agencies implement the performance-based management agenda established by the Congress in the 1990s, the government's human capital policies and practices will increasingly become prominent issues. Leading performance-based organizations understand that effectively managing their human capital is essential to achieving results. Organizational success hinges on having the right employees on board and on providing them with the training, tools, structures, incentives, and accountability to work effectively. Thus, human capital planning must be an integral part of any organization's strategic and program planning and human capital itself should be thought of not as a cost to be minimized but as a strategic asset to be enhanced. The challenge—and opportunity—confronting federal agencies as they seek to become more performance-based is to ensure that their human capital policies and practices are aligned with their program goals and strategies.

An important opportunity exists for OMB to take a leadership role in impressing upon the agencies the importance of adopting a strategic approach to human capital planning—traditionally a weak link in federal agency management. Although the Office of Personnel Management's role in informing the agencies about effective strategic human capital planning is potentially significant, the Results Act provides the statutory impetus for OMB to bring its considerable influence to bear. The Act requires agencies to describe in their strategic plans and annual performance plans the human resources they will need to meet their performance goals and objectives. OMB Circular A-11 states that annual plans may include goals and indicators involving the workforce or the workplace environment, such as employee skills and training, workforce diversity, retention, downsizing, and streamlining. Nevertheless, in examining the first round of agency strategic plans and annual performance plans, we found that few of these documents emphasized human capital or the pivotal role it must play in helping agencies achieve results. Through active participation in the development of agency strategic and annual performance plans and by holding agencies accountable for their attention to human capital considerations, OMB could bring considerable energy and discipline to the federal government's efforts to build, maintain, and marshal the human capital needed to achieve results.

same resources in other areas that pose higher risks could yield significantly greater payoffs. OMB's OFPP has worked to implement FASA and the Clinger-Cohen Act. OFPP has also been working to streamline the procurement process, promote efficiency, and encourage a more results-oriented approach to planning and monitoring contracts.
OFPP is spearheading a multi-agency effort to revise parts of the Federal Acquisition Regulation (FAR). For example, a major revision to Part 15 of the FAR should contribute greatly to a more flexible, simplified, and efficient process for selecting contractors in competitively negotiated acquisitions. OFPP also developed best practices guides to help agencies draft statements of work, solicitations, and quality assurance plans, as well as to aid in awarding and administering performance-based service contracts. OFPP issued a best practices guide for multiple award task and delivery order contracting to encourage agencies to take advantage of new authorities under FASA. In addition, OMB has encouraged agencies to buy commercial products, conduct electronic commerce, and to consolidate their ordering to take advantage of the buying power of the federal government.

OMB's Circular A-76 sets forth federal policy for determining whether commercial activities associated with conducting the government's business will be performed by federal employees or private contractors. The A-76 process calls for agencies to contract for commercial services once they have determined on the basis of cost studies that it would be cost effective to contract out these services. Agencies' efforts to undertake cost studies—with the important exception of the Department of Defense—have declined significantly in recent years. In June 1998, we testified that OMB had undertaken only limited efforts to monitor or enforce compliance with its A-76 guidance or evaluate the success of this process. Since then, Congress passed the Federal Activities Inventory Reform (FAIR) Act that, among other things, provides a statutory basis for some requirements of Circular A-76. Like Circular A-76, FAIR requires federal agencies to develop a list of all commercial services that are possible candidates for performance by the private sector. OMB is reviewing agencies' efforts to develop commercial activities lists and is developing supplemental guidance to Circular A-76 to assist agencies in complying with FAIR.

Finally, OMB's oversight role across the government can provide the basis for analyzing crosscutting program design, implementation, and organizational issues. We have pointed to the need to integrate the consideration of the various governmental tools used to achieve federal goals, such as loans, grants, tax expenditures, and regulations. Specifically, we recommended that OMB review tax expenditures with related spending programs during their budget reviews. In addition, our work has provided numerous examples of mission fragmentation and program overlap within federal missions as shown in table 1.

The governmentwide performance plan organizes program performance into budget functions—a well-known and long-used budget classification structure that focuses on federal missions, or "areas of national need." We found in reviewing the Fiscal Year 1999 Plan that in several parts of the Plan, descriptions of program performance were presented in a sequential, agency-by-agency format that missed opportunities to address well-known areas of fragmentation and overlap. Organization-based presentations are appropriate to emphasize agency accountability but tend to "stovepipe" performance discussions and inadequately describe crosscutting governmentwide performance goals.
More broadly, we concluded that while the use of the budget functions offers a reasonable and logical approach, it does not always provide mutually exclusive descriptions of governmentwide missions and that a more cohesive picture of federal performance was needed. A more cohesive picture of federal missions would be presented if discussions were broadened beyond functional lines where necessary to capture the full range of government players and activities aimed at advancing broad federal goals. Beyond questions of how best to analyze and describe governmentwide missions and performance, OMB's efforts to ensure crosscutting programs are properly coordinated may be hampered if efforts to resolve problems of program overlap and fragmentation involve organizational changes. OMB lacks a centralized unit charged with raising and assessing government-organization issues. OMB has not had such a focal point since 1982 when it eliminated its Organization and Special Projects Division.

Mr. Chairman, the record of OMB's stewardship of management initiatives that we have highlighted today suggests that creating and sustaining attention to management improvement is a key to addressing the federal government's longstanding problems. In the past, management issues often remained subordinated to budget concerns and timeframes, and the leverage the budget could offer to advance management efforts was not directly used to address management issues. The experiences to date suggest that certain factors are associated with the successful implementation of management initiatives. Building and sustaining these factors appears to be pivotal regardless of the specific organizational arrangements used to implement the management initiatives.

First, top management support and commitment within both OMB and the White House is often critical to providing a focus on governmentwide management issues throughout both the budget process and the executive agencies themselves. As our study of OMB 2000 pointed out, management and performance measurement issues gained considerable attention in the budget formulation process initially because of the clear commitment of OMB's leadership. However, top leadership's focus can change over time, which can undermine the follow-through needed to move an initiative from policy development to successful implementation. Thus, institutional focal points can have important roles in sustaining these initiatives over time by serving as continuing "champions" to maintain attention to management initiatives and help ensure follow-through.

Second, a strong linkage with the budget formulation process can be a key factor in gaining serious attention for management initiatives throughout government. Regardless of the location of the leadership, management initiatives need to be reflected in and supported by the budget and, in fact, no single organizational arrangement by itself guarantees this will happen. Many management policies require budgetary resources for their effective implementation, whether it is financial management reform or information systems investment. Furthermore, initiatives such as the Results Act seek to improve decision-making by explicitly calling for performance plans to be integrated with budget requests. We have found that previous management reforms, such as the Planning-Programming-Budgeting System and Management By Objectives, suffered when they were not integrated with routine budget presentations and account structures.
Third, effective collaboration with the agencies—through such approaches as task forces and interagency councils—has emerged as an important central leadership strategy in both developing policies that are sensitive to implementation concerns and gaining consensus and consistent follow-through within the executive branch. In effect, agency collaboration serves to institutionalize many management policies initiated by either Congress or OMB. In our 1989 report on OMB, we found that OMB's work with interagency councils was successful in fostering communication across the executive branch, building commitment to reform efforts, tapping talents that exist within agencies, keeping management issues in the forefront, and initiating important improvement projects.

Finally, support from the Congress has proven to be critical in sustaining interest in management initiatives over time. Congress has, in effect, served as the institutional champion for many of these initiatives, providing a consistent focus for oversight and reinforcement of important policies. For example, Congress'—and in particular this Subcommittee's—attention to the Year 2000 problem, information management, and financial management has served to elevate these problems on the administration's management agenda.

Separate from the policy decisions concerning how best to organize and focus attention on governmentwide federal management issues, there are some intermediate steps that OMB could take to clarify its responsibilities and improve federal management. For example, OMB could more clearly describe the management results it is trying to achieve, and how it can be held accountable for these results, in its strategic and annual performance plans. Many of OMB's strategic and annual goals were not as results-oriented as they could be. Continued improvement in OMB's plans would provide congressional decisionmakers with better information to use in determining the extent to which OMB is addressing its statutory management and budgetary responsibilities, as well as in assessing OMB's contributions toward achieving desired results. In our 1995 review of OMB 2000, we recommended that OMB review the impact of its reorganization as part of its planned broader assessment of its role in formulating and implementing management policies for the government. OMB has not formally assessed the effectiveness, for example, of the different approaches taken by its statutory offices to promote the integration of management and budget issues. We believe it is important that OMB understand how its organization affects its capacity to provide sustained management leadership.

Mr. Chairman, this concludes our statement. We would be pleased to answer any questions that you or other Members of the Subcommittee have at this time. (410419/935297)

Pursuant to a congressional request, GAO discussed its observations on the Office of Management and Budget's (OMB) efforts to carry out its responsibilities to set policy and oversee the management of the executive branch, focusing on: (1) OMB's wide-ranging management responsibilities and the question of whether to integrate or separate management and budget functions; (2) the effectiveness of OMB's management leadership; and (3) the factors that appear to contribute to progress in sustaining improvements in federal management.
GAO noted that: (1) OMB is the lead agency for overseeing a statutory framework of financial, information resources, and performance planning and measurement reforms designed to instill a performance-based approach to federal management, decisionmaking, and accountability; (2) the Chief Financial Officers Act of 1990 mandated significant financial management reforms and established the Deputy Director for Management (DDM) position within OMB; (3) the DDM is to serve as the government's key official for financial management and coordinate and supervise a wide range of general management functions; (4) OMB is responsible for providing guidance and oversight for various other laws and executive orders as well; (5) OMB's perennial challenge is to carry out its central management leadership responsibilities in such a way that leverages opportunities of the budget process, while at the same time ensuring that management concerns receive appropriate attention in an environment driven by budget and policy decisions; (6) prior OMB reorganizations have alternated between seeking to more directly integrate management into the budget review process and creating separate management offices; (7) previous congressional and OMB attempts to elevate the status of management by creating separate management units within OMB sought to ensure that an adequate level of effort was focused on management issues; (8) OMB has focused increased attention on management issues, but there is much more that needs to be done; (9) OMB should ensure that agencies incorporate appropriate goals and strategies in their annual performance plans and describe their relevance to achieving the priority management objectives described in the governmentwide performance plan; (10) the record of OMB's stewardship of management initiatives suggests that creating and sustaining attention to management improvement is a key to addressing the federal government's longstanding problems; (11) in the past, management issues often remained subordinated to budget concerns and timeframes, and the leverage the budget could offer to advance management efforts was not directly used to address management issues; and (12) continued improvement in OMB's strategic plans would provide congressional decisionmakers with better information to use in determining the extent to which OMB is addressing its statutory management and budgetary responsibilities, as well as in assessing OMB's contributions toward achieving desired results.
The Magnuson-Stevens Fishery Conservation and Management Act granted responsibility for managing marine resources to the Secretary of Commerce. The Secretary delegated this responsibility to NMFS, which is part of Commerce's National Oceanic and Atmospheric Administration (NOAA). The act also established eight regional fishery management councils, each responsible for making recommendations to the Secretary of Commerce about managing fisheries in federal waters. The eight fishery management councils—consisting of fishing industry participants, state and federal fishery managers, and other interested parties—and their areas of responsibility include the following:

Caribbean Council, covering waters off the U.S. Virgin Islands and the Commonwealth of Puerto Rico;
Gulf of Mexico Council, covering waters off Texas, Louisiana, Mississippi, Alabama, and the west coast of Florida;
Mid-Atlantic Council, covering waters off New York, New Jersey, Delaware, Maryland, Virginia, and North Carolina;
New England Council, covering waters off Maine, New Hampshire, Massachusetts, Rhode Island, and Connecticut;
North Pacific Council, covering waters off Alaska;
Pacific Council, covering waters off California, Oregon, and Washington;
South Atlantic Council, covering waters off North Carolina, South Carolina, Georgia, and the east coast of Florida; and
Western Pacific Council, covering waters off Hawaii, American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, and uninhabited U.S. territories in the Western Pacific.

In addition to these eight councils, NMFS has six regional science centers, which are responsible for generating the scientific information necessary for the conservation, management, and use of each region's marine resources. The six fishery science centers and their areas of responsibility are as follows:

Alaska Center, covering the coastal oceans off Alaska and parts of the west coast of the United States;
Northeast Center, covering waters along the Northeast Continental Shelf from the Gulf of Maine to Cape Hatteras, North Carolina;
Northwest Center, covering the northeast Pacific Ocean, primarily waters off the coasts of California, Oregon, Washington, and British Columbia;
Pacific Islands Center, covering the central and western Pacific Ocean;
Southwest Center, primarily covering waters off the coast of California and areas throughout the Pacific and Antarctic Oceans; and
Southeast Center, covering waters along the continental southeastern United States as well as Puerto Rico and the U.S. Virgin Islands.

The Magnuson-Stevens Act, as amended by the Sustainable Fisheries Act, also established national standards for fishery conservation and management. These standards deal with preventing overfishing, using scientific information, using fishery resources efficiently, minimizing bycatch, and minimizing administrative costs. The regional councils use these standards to develop appropriate plans for conserving and managing fisheries under their jurisdiction, including measures to prevent overfishing and to rebuild overfished stocks as well as measures to protect, restore, and promote the long-term health and stability of the fishery. In 1982, the Pacific Fishery Management Council (Pacific Council) released its initial Fishery Management Plan for groundfish.
The Pacific Council's goal is to have long-range plans for managing groundfish fisheries that will promote a stable planning environment for the seafood industry, including marine recreation interests, and will maintain the health of the resource and environment. To help achieve this goal, stock assessments are conducted on groundfish species to estimate fish populations. Since 1995, the Northwest Center has had lead responsibility for conducting stock assessments on Pacific groundfish. The Northwest Center receives assistance from other NMFS science centers, such as the Southwest Center, which conducted the bocaccio assessment.

Stock assessments are the biological evaluation of the status of fish stocks. Stock assessments provide official estimates in key areas, such as the size of the stock population, the size of the spawning population, the amount of fish that have died (fish mortality), and the estimated number of fish at a particular young age (recruitment). Stock assessments form the scientific basis used by regional councils to determine biologically sustainable harvests and guide the monitoring and rebuilding of overfished and threatened stocks. For example, regional councils use stock assessments and other indicators of biological productivity to recommend to NMFS a maximum, or total allowable catch, in a particular fishery—typically for a year. Stock assessments are a key tool for managing fisheries. Without stock assessments, fishery managers would have limited information about the status of fisheries in making decisions about setting harvest levels and developing plans to rebuild overfished stocks.

For each species, the assessor reviews previous stock assessments, gathers available data about the species being assessed, runs the data through computer-generated models, and estimates the species' total biomass. Stock assessors use NMFS-collected data, such as stock surveys conducted on NOAA vessels or contracted commercial fishing vessels, as well as data collected by non-NMFS sources, such as commercial and recreational catch data collected by state agencies. The following six key types of data are collected:

Stock abundance—surveys of how many fish constitute a stock's total size or weight.
Commercial and recreational fisheries data—the amount and composition of fish caught from a particular stock, whether caught intentionally by commercial and recreational fishermen or unintentionally caught and discarded. Data sources include fishing logbooks, dockside samples, and onboard observations, among others.
Life history—biological data, such as the age and sex composition of the stock, age at first maturity, fertility, average lifespan, and natural mortality.
Ecosystem relationships—data on the relationship between a fish stock and its physical environment, as well as the relationship of a fish stock to other species.
Recruitment research—data on the abundance of juvenile and larval fish (fish at their earliest stage), which helps scientists forecast the size of a particular stock in the future.
Synoptic oceanographic sampling—data on the ocean ecosystem, such as water temperature or salinity, plankton composition, or ocean currents.

For each stock assessment, a review panel, consisting of NMFS scientists and outside experts, independently reviews the methodology of the assessment and works with the assessor to ensure that the panel's comments are adequately addressed. Through 2003, 24 of the 82 species of Pacific groundfish have had a full quantitative stock assessment.
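To make the assessment process described above more concrete, the following is a deliberately simplified sketch of the kind of calculation an assessment performs: a basic surplus-production (Schaefer) biomass model is fit to catch and survey-index data to estimate stock biomass. The model form, the data values, and the parameter names are all invented for illustration; NMFS's actual assessments rely on far more detailed tools, such as the Stock Synthesis model discussed later in this report.

import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: 13 years of reported catch (metric tons) and a relative
# abundance index from trawl surveys. All values are invented for illustration.
years = np.arange(1990, 2003)
catch = np.array([3000., 3200., 3500., 4000., 4500., 5000., 5200.,
                  4800., 4000., 3500., 3000., 2500., 2000.])
survey_index = np.array([40., 38., 36., 34., 31., 28., 25.,
                         23., 22., 21., 20., 20., 21.])

def project_biomass(r, K, B0, catches):
    # Schaefer dynamics: B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]
    B = [B0]
    for C in catches[:-1]:
        Bt = B[-1]
        B.append(max(Bt + r * Bt * (1.0 - Bt / K) - C, 1.0))
    return np.array(B)

def objective(params):
    # Sum of squared log residuals between the observed index and q * predicted biomass.
    r, K, B0, q = params
    if min(r, K, B0, q) <= 0:
        return 1e12
    predicted = q * project_biomass(r, K, B0, catch)
    resid = np.log(survey_index) - np.log(predicted)
    return float(np.sum(resid ** 2))

start = [0.3, 60000.0, 50000.0, 8e-4]   # guesses for growth rate, capacity, starting biomass, catchability
fit = minimize(objective, start, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-9})
r_hat, K_hat, B0_hat, q_hat = fit.x
biomass = project_biomass(r_hat, K_hat, B0_hat, catch)
print(f"Estimated {years[-1]} biomass: {biomass[-1]:,.0f} metric tons (illustrative only)")

In an actual assessment, the model would also track ages and multiple data sources, and the results would be subject to the independent panel review described above.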
Relying on these assessments, NMFS has declared that nine species of Pacific groundfish are overfished, including the five species of Pacific groundfish we reviewed in this report—Pacific hake as well as bocaccio rockfish, canary rockfish, darkblotched rockfish, and yelloweye rockfish. Rockfish are long-lived, late-maturing and slow-growing species, making them particularly susceptible to overfishing. More specifically:

Pacific hake, also called Pacific whiting, is generally found off the west coast of North America. It is one of many species of hake distributed in the Atlantic and Pacific oceans. Fishing of Pacific hake primarily takes place off the coasts of northern California, Oregon, Washington, and British Columbia. Fishermen use mid-water trawls and generally fish over the ocean bottom at depths of 100 to 500 meters. Pacific hake was declared overfished in 2002 because the 2002 stock assessment estimated Pacific hake biomass at 700,000 metric tons. By 2004, the biomass was estimated at between 2.7 and 4.2 million metric tons, and Pacific hake is no longer considered overfished. Figure 1 shows a picture of a Pacific hake.

Bocaccio rockfish generally inhabit waters off the coast of northern Baja, Mexico to Alaska. Bocaccio, commonly sold at market as "red snapper," are commercially fished using trawls, hook-and-line and gillnets. Adult bocaccio commonly live over rocky areas or open areas of the ocean's floor to about 320 meters. Bocaccio were formally declared to be "overfished" in 1999. The 2003 stock assessment estimated the bocaccio biomass at 7,133 metric tons. Figure 2 shows a picture of a bocaccio rockfish.

Canary rockfish inhabit the northeastern Pacific Ocean, from northern Baja, Mexico to the western Gulf of Alaska. Adult canary rockfish are primarily found along the continental shelf, from 46 to 457 meters deep. Canary rockfish are harvested commercially using trawl nets and hook-and-line and are also considered an important species for recreational fishermen. NMFS declared canary rockfish as overfished in 2000. The 2002 stock assessment estimated the canary rockfish biomass at 6,197 metric tons. Figure 3 shows a picture of a canary rockfish.

Darkblotched rockfish are found in the waters from Santa Catalina Island, California to the Bering Sea on soft bottom areas at about 29 to 549 meters deep. Commercial fishery concentrations are located off the coasts of California and Oregon. Darkblotched rockfish are caught primarily by commercial trawls and contribute to both commercial and recreational fishing. NMFS determined that darkblotched rockfish was overfished in 2000, when the last full stock assessment was conducted; it was updated in 2003. This update estimated the darkblotched rockfish biomass at 7,266 metric tons in 2001. Figure 4 shows a picture of a darkblotched rockfish.

Yelloweye rockfish almost exclusively inhabit rocky areas from northern Baja, Mexico to the Aleutian Islands, Alaska. Yelloweye rockfish, found in depths ranging from 15 to 550 meters, are caught using both trawl nets and line gear and are highly prized by both commercial and recreational fishermen. Stock assessments for yelloweye rockfish were first conducted in 2001, and NMFS determined that the species was overfished in 2002. The 2002 stock assessment estimated the yelloweye biomass at 2,325 metric tons in 2001. Figure 5 shows a picture of a yelloweye rockfish.
The reliability of NMFS' stock assessments is questionable for the Pacific hake and four rockfish species we reviewed, although they were based on the best information available at the time the assessments were conducted. The reliability of the stock assessments we reviewed is questionable because (1) four of the assessments did not have at least one NMFS-collected data source of sufficient scope and accuracy; (2) NMFS lacked a standard process for assessing the reliability of non-NMFS data used in all five assessments; and (3) for four of the assessments, the stock assessment reports did not adequately identify the uncertainty of the biomass estimates. (See table 1.) To address these limitations, the Northwest Center plans to increase the scope and accuracy of its collected data, as additional funds become available; is implementing changes that will help ensure the reliability of non-NMFS data; and plans to update the stock assessment model to provide uncertainty ranges for the 2005 stock assessments.

Stock assessors use a variety of data, including NMFS data and non-NMFS data, in developing their assessments. Two key pieces of NMFS survey data are the shelf and slope bottom trawl survey and the acoustic survey. Other NMFS data that assessors sometimes use include larval surveys (data for fish in their earliest stage) and recruitment surveys. The non-NMFS data assessors use include commercial catch data and recreational catch data. A 2002 National Research Council report found that the inclusion of NMFS survey data was the best option for a reliable estimate of abundance because such surveys use an unbiased statistical design, control sampling locations, and provide for quality assurance. According to Northwest Center officials, to obtain reliable results, each stock assessment should include at least one source of NMFS-collected data of sufficient scope and accuracy because such surveys are unbiased and scientifically designed.

Northwest Center officials raised concerns about basing assessments solely on non-NMFS data such as commercial and recreational catch data. Catch data do not provide the species' relative or absolute biomass, according to NMFS officials. For example, catch data alone are insufficient because fishermen are not randomly sampling the ocean but are fishing areas that they are allowed to fish and that they believe have the most fish; fishing restrictions, such as a total allowable catch, can limit the amount of fish being caught; and catch data have often been inaccurate for a variety of reasons, such as imprecise accounting for dead fish tossed back into the ocean.

Although the assessors used several different data sources, four of the five assessments did not use NMFS survey data or the NMFS data used covered only a portion of the species' habitats. In the yelloweye assessment, no NMFS survey data were available because yelloweye live almost exclusively in the rocky habitat that NMFS trawl surveys cannot cover. As a result, the yelloweye assessment was based solely on non-NMFS data. Similarly, the NMFS survey data used in the bocaccio, canary, and darkblotched assessments were limited in scope because the surveys were conducted only in trawlable waters. Bocaccio, canary, and darkblotched live in both the trawlable and untrawlable habitats. Using data from trawl surveys conducted from 1977 through 1998, NMFS reported in 2003 that 77 percent of the survey area was trawlable and 23 percent was untrawlable.
Lacking data on species in the 23 percent of the ocean floor that is untrawlable, the assessors estimated the overall biomass using the NMFS data collected from the trawlable areas. However, the abundance in the trawlable area is not representative of the abundance in the untrawlable area. The 2003 NMFS report also found that darkblotched groundfish are less abundant in untrawlable waters, while canary and bocaccio species are more abundant in untrawlable waters. As a result, some rockfish populations may be understated while others may be overstated. According to stock assessors, relying solely on survey data from trawlable waters increases the uncertainty of stock assessments. In contrast, the fifth groundfish species, Pacific hake, lives primarily in mid-water habitat, so the concerns about the lack of NMFS data in rocky, untrawlable habitats are not applicable.

NMFS does not have a standard process for evaluating whether the non-NMFS data used in its stock assessments are reliable. We believe that certain internal control activities, such as a standard process for ensuring data reliability, can help ensure that information used to make management decisions is complete and accurate. Lacking a standard process, some assessors reviewed the quality of the raw non-NMFS data, while others did not. Assessors who reviewed for data quality found mistakes that they believed made some of the data unusable or that could have impaired the accuracy of the stock assessments. For example, the assessor for the 2002 yelloweye stock assessment found numerous errors in the recreational catch data, such as attributing the catch from an entire fishing vessel to a single fisherman, and thus did not use the data because doing so could have resulted in overestimating the biomass. According to another stock assessor, commercial catch data frequently have inconsistencies. Specifically, the assessor said California, Oregon, and Washington require fishermen to enter catch and location information into logbooks, but logbooks are often incomplete and inaccurate. While the stock assessment review panels evaluated the assessments, the panels did not evaluate the quality of the raw data used in the assessments.

According to a Northwest Center official, several assessors have raised concerns about data quality and accessibility in feedback meetings. In response to these concerns, the Northwest Center has recently begun assigning data stewards to each data set used in its assessments. Data stewards are responsible for helping assessors compile relevant data and for conducting quality assurance and quality control checks on the data. The Northwest Center plans to conduct a data quality workshop in July 2004 to formally establish the roles and responsibilities of the data stewards, with the intent of standardizing the data evaluation process.

In 1998, the National Research Council recommended that NMFS include realistic measures of uncertainty in its stock assessments. NMFS' 2001 stock assessment improvement plan also recognized the need to better quantify and communicate the uncertainty in assessments. In a review of the 2002 canary assessment, the stock assessment review panel recommended that standard estimates of uncertainty be included in future assessments because it is difficult to determine the reliability of the stock assessment without them. Similarly, we believe that estimates based on samples should have a range of uncertainty to show the amount of variability in the estimates.
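As an illustration of the kind of uncertainty range being recommended, the sketch below expands densities from individual survey tows to a survey area and uses a bootstrap over the tows to put an approximate 95 percent interval around the point estimate. The tow densities, the survey area, and the procedure itself are invented for illustration; they are not NMFS's data or NMFS's method, but they show why a point estimate alone can be misleading.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tow-level densities (metric tons per square kilometer) and survey area.
tow_density = np.array([0.8, 1.4, 0.2, 2.1, 0.5, 3.0, 0.9, 0.0,
                        1.7, 0.6, 2.4, 0.3, 1.1, 0.1, 1.9])
survey_area_km2 = 5000.0

# Design-based point estimate: mean density expanded to the whole surveyed area.
point_estimate = tow_density.mean() * survey_area_km2

# Bootstrap: resample tows with replacement and recompute the expanded biomass each time.
boot = np.array([
    rng.choice(tow_density, size=tow_density.size, replace=True).mean() * survey_area_km2
    for _ in range(5000)
])
lower, upper = np.percentile(boot, [2.5, 97.5])

print(f"Point estimate: {point_estimate:,.0f} metric tons")
print(f"Approximate 95% range: {lower:,.0f} to {upper:,.0f} metric tons")

With only 15 hypothetical tows, the interval is wide relative to the point estimate. A range like this, reported alongside the estimate, is what would let a council judge how much weight the number can bear, for example whether a year-to-year change such as the bocaccio revision discussed below reflects new information or ordinary sampling variability.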
However, the bocaccio, canary, darkblotched, and yelloweye assessments did not present a measure of uncertainty associated with the biomass estimates. Without uncertainty ranges, it is difficult for regional councils and NMFS to know how much confidence they can have in relying on the estimates for determining stock abundance and hence for setting allowable harvests of the fish. For example, lacking uncertainty ranges, the 2002 bocaccio stock assessment estimated a bocaccio biomass of 2,914 metric tons in 2002. The 2003 assessment of bocaccio biomass, however, estimated 6,506 metric tons in 2002—more than doubling the previous estimate because of additional and updated data. With such wide variations, it is important to provide uncertainty ranges; otherwise, management may make inappropriate decisions. While assessors told us that their stock assessments included some information about differences in estimated biomass when using different data sources (sensitivity analyses), the mathematical model that NMFS uses to estimate biomass (Stock Synthesis model) does not calculate uncertainty ranges. NMFS officials told us that NMFS is updating the model so that it can compute uncertainty ranges; NMFS expects to use the updated model for all 2005 stock assessments. The Pacific hake assessor used a mathematical model (AD Model Builder) that could compute uncertainty ranges and included these ranges in the Pacific hake assessment report.

NMFS has taken some steps recommended in the Marine Fisheries Stock Assessment Improvement Plan to improve the quantity, quality, and type of data used in Pacific groundfish stock assessments, but much remains to be done to make the assessments more reliable. The Northwest Center has concentrated most of its efforts on implementing recommendations aimed at obtaining more data. Recommendations aimed at increasing the types of data and improving their quality have not yet been fully implemented for a variety of reasons, such as staffing and funding limitations. In addition, other program priorities have precluded NMFS from implementing the recommendation to create a comprehensive plan that incorporates the improvement plan and related plans so that it can develop integrated program initiatives to improve stock assessments.

The October 2001 stock assessment improvement plan identified three scenarios (tiers) to consider when analyzing the resources needed to improve stock assessments. The three tiers of assessment improvements are as follows:

Tier 1—improve stock assessments using existing data without initiating new data collection programs.
Tier 2—conduct baseline monitoring of species, which in most cases requires sampling the species at least every 1 to 3 years, and preferably at least every 1 to 2 years.
Tier 3—implement "next generation" stock assessments by explicitly incorporating ecosystem considerations, such as multispecies interactions and environmental effects in assessments.

The improvement plan also made a number of recommendations to improve stock assessments. The recommendations fall into the following four categories:

Data collection—pursue new initiatives to expand data collection efforts that at a minimum bring stock assessment science to Tier 2. In addition, continue to develop partnerships and cooperative research programs with other entities, such as state agencies, commercial and recreational fishing organizations, and individuals to improve the quantity, quality, and type of data collected.
Communication—educate constituents about NMFS’ strategies for improving stock assessments. Training—implement comprehensive training and staff development programs for NMFS’ analytical and quantitative staff, and augment existing programs that support graduate students interested in stock assessment science. Planning—develop integrated program initiatives by preparing a comprehensive plan that combines the improvement plan with its complementary plans. Improvement in the quantity of data collected for use in stock assessments is a key component of achieving Tier 2 status. The improvement plan states that as the quantity of the data increases, the assessments become more reliable because the data cover a longer period of time, producing better population trend information. Northwest Center officials said that the quality of the data improves with more frequent surveys and more randomly selected survey locations that, over the long term, provide a better understanding of the variability inherent in the population distribution and abundance. A better sense of trends and variability allows for improved short-term predictions of the status of the species. The Northwest Center has concentrated most of its efforts on implementing improvements in data quantity, such as more frequent acoustic and shelf and slope bottom trawl surveys. The following illustrate some of the actions the Northwest Center took in 2003 to improve data quantity: Increased the frequency of the Pacific hake acoustic survey from triennially to biennially. Beginning in 2003, the survey was restructured into a single, integrated survey with Northwest Center and Canadian officials jointly planning all survey elements. Officials from the Northwest Center and Canada now jointly conduct all of the acoustic surveys. Increased the frequency of the groundfish shelf and slope bottom trawl survey from triennially to annually, leveraging available resources by cooperatively working with the fishing industry. Specifically, the Center contracted with private commercial fishing vessels to conduct the surveys. According to Northwest Center officials, working collaboratively with the fishing industry has afforded fishermen the opportunity to become stakeholders in the data collection process. Extended the geographic range of the groundfish shelf and slope bottom trawl survey. The surveys are now coastwide from Cape Flattery, Washington, to the Mexican border, adding over 300 more miles along the southern California coast. Previous surveys ended at Morro Bay, California. Efforts continue to communicate the strategies needed to improve stock assessments and to augment existing programs aimed at developing future stock assessment scientists. For example, through the Pacific Fishery Management Council, Northwest Center staff meet with their constituents, such as representatives from state agencies and the fishing industry, to discuss groundfish management issues. In addition, the Northwest Center organized a series of public meetings to discuss new initiatives that affect the stock assessment program. For example, the Northwest Center held public meetings in several communities along the Pacific coast to discuss implementation of the Observer Program—a program designed to collect information about discarded fish for the non-hake west coast groundfish fleets.
Finally, the Northwest Center now participates in NMFS’ National Sea Grant program to augment a Northwest Center-supported graduate study program at the University of Washington that trains stock assessment scientists. The Sea Grant program provides fellowships for students interested in marine research, such as stock assessment methodology and marine resource economics. The Northwest Center plans to employ two Sea Grant students during the summer of 2004. While the Northwest Center has implemented some recommendations aimed at improving stock assessments, it has not yet fully implemented many others. These recommendations include collecting additional types of data, such as ecosystem and recruitment information; improving data quality, such as calibrating survey vessel equipment; and increasing training opportunities for Northwest Center staff. Also, NMFS has not acted on the task force recommendation to combine the improvement plan and its complementary plans into a comprehensive plan that provides integrated program initiatives. According to the improvement plan, additional types of data will allow NMFS to further test and validate model assumptions, thereby increasing the reliability of the stock assessments. The improvement plan further states that information derived from ecosystem research and recruitment surveys is essential if assessments are to meet the national standards of “next generation” assessments or Tier 3 status. According to Northwest Center officials, ecosystem information and coastwide recruitment surveys are two of the most critical data sets needed to ensure continuous improvement of groundfish stock assessments. The Northwest Center conducts ecosystem research as part of its Science for Ecosystem-based Management Initiative. Understanding the complex ecological relationships between fish and the environment in which they live provides insight into the effects of the ecosystem on the groundfish fisheries and the scientific knowledge needed to make informed ecosystem-based management decisions. Although research is ongoing to develop ecosystem information, only a limited amount of the data is collected and used in stock assessments. For example, ecosystem data are collected during shelf and slope bottom trawl surveys as time and resources allow. However, this information is not widely incorporated into stock assessments. For the five species we reviewed, only the bocaccio assessment used ecosystem data—information on the temperature of the ocean’s surface. According to Northwest Center officials, the collection of ecosystem data is limited because the relatively small size of the commercial vessels used in the shelf and slope bottom trawl surveys cannot support the number of researchers needed to effectively conduct comprehensive ecosystem research and collection activities. Furthermore, the implementation of comprehensive ecosystem research and data collection programs is contingent upon the funding of a dedicated research vessel for west coast surveys. Northwest Center officials said they are to receive a dedicated research vessel sometime during calendar year 2008 or 2009, at the earliest. Better recruitment information for Pacific groundfish is also needed because such information provides an early predictor of fish abundance, especially for species such as hake, where there is a great variation in recruitment.
Northwest Center officials said that current recruitment surveys are limited because existing funds support only yearly surveys in selected areas. To achieve the best early predictions of stock status, these officials said, recruitment surveys should be coastwide and conducted twice a year. According to Northwest Center officials, 13 full-time staff are needed to expand these and other high-priority data collection efforts, such as surveys in untrawlable habitat and expanded acoustic surveys. The lack of quality data was identified in the improvement plan as an impediment to producing reliable stock assessments. For example, when equipment on different survey vessels is not calibrated, the data are not comparable, and trends may not be accurately determined. The Northwest Center is continuing its efforts to calibrate survey vessel equipment. The improvement plan also recommended that NMFS provide additional training to ensure that qualified NMFS staff are available now and in the future to conduct stock assessments and related activities. For example, the plan recommended the development of a comprehensive training program and more professional development opportunities for NMFS’ scientific staff. Northwest Center officials said they try to meet the training and professional development needs of their scientific staff. However, to date they have focused on developing external training programs, such as the University of Washington graduate program, to develop stock assessment scientists for the future and have yet to develop a comprehensive training program for in-house stock assessment scientists. Finally, the improvement plan recommended that NMFS prepare a comprehensive plan that combines the improvement plan with other complementary plans, such as the NOAA Fisheries Data Acquisition Plan and the NMFS Social Sciences Plan. A comprehensive plan would allow NMFS to better integrate and coordinate program initiatives for improving stock assessments. For example, the acquisition plan—the key complementary plan to the improvement plan—identifies the need for fishery research vessels to satisfy NMFS’ data collection needs. Although the improvement plan includes the number of staff that would participate in data collection surveys, it does not contain the capital and operating costs of the research vessels. Similarly, the staffing requirements for augmenting the social sciences capabilities of NMFS to conduct economic analyses are represented in the sciences plan and not in the stock improvement plan. An NMFS official said that other program priorities, such as conducting more stock assessments and improving data collection activities, have precluded the agency from developing a comprehensive plan. According to NMFS funding and budget requests, the Northwest Center needs at least $8.9 million to complete ongoing and planned improvements to the stock assessments for Pacific groundfish. However, the actual cost of implementing remaining improvements to Pacific groundfish stock assessments may be even higher because the Northwest Center’s budget requests primarily reflect the amount of money the Center believed it could realistically obtain, rather than the actual cost of the improvements.
According to NMFS, the Northwest Center needs at least $8.9 million to complete ongoing and planned improvements for Pacific groundfish stock assessments: $2.6 million that NMFS’ Northwest Center requested but did not receive between fiscal years 2001 to 2003 and $6.3 million the Center requested for fiscal years 2004 and 2005. Specifically, as shown in table 2, the Northwest Center records have identified the following funding needs $7.7 million to improve the types of data used, including $2.4 million for surveys of untrawlable waters, $2.1 million to expand acoustic and recruitment surveys, and $3.2 million to collect ecosystem data; and $1.2 million to improve the quality of data used in stock assessments, including $600,000 to enhance the calibration of vessel equipment; $525,000 to develop and implement methods to collect information on stock identification, structure, and movement; and $75,000 to standardize trawl survey procedures. The Northwest Center did not receive its full funding request, in part, because NMFS did not receive all the funding it had requested. Between fiscal years 2001 and 2003, NMFS received $20.6 million (80 percent of its request) in additional funding to implement improvements for all marine stock assessments. NMFS allocated $3.6 million (58 percent of funds the Northwest Center requested) to the Northwest Center for improving Pacific groundfish stock assessments, resulting in a $2.6 million shortfall in the Center’s request. This shortfall occurred in part because of NMFS’ need to balance the requests of its six science centers against its program priorities and the available funds. According to NMFS officials, their goal is to achieve parity among the science centers in terms of their capability to conduct scientific work, such as stock assessments. The $8.9 million needed to implement remaining recommended improvements is probably understated because the Northwest Center’s budget requests primarily reflect the amount of money the Center officials believed they could realistically obtain, rather than the amount the improvements would actually cost, according to NMFS officials. The Northwest Center’s budget requests for fiscal years 2004 and 2005 are preliminary requests submitted before the Northwest Center received its fiscal year 2003 funding. Consequently, the Northwest Center will likely submit revised budget requests for fiscal years 2004 and 2005 that account for both its unfunded needs from fiscal years 2001 through 2003 and items that were unexpectedly funded in fiscal year 2003. Moreover, the fiscal year 2004 and 2005 preliminary budget requests do not incorporate any unanticipated problems or data gaps that have developed since the Northwest Center submitted its preliminary requests. According to NMFS officials, NMFS’ science centers, including the Northwest Center, primarily make and justify their funding requests in response to how much money Congress appropriates. After Congress passes NMFS’ budget, NMFS asks its science centers to reassess and detail how much new money each needs to implement science center programs, such as marine stock assessment improvements. According to NMFS officials, it is unrealistic for a science center to request more funds than are available in its appropriation, even if it needs more. 
While NMFS’ Northwest Center requested $6.2 million to implement improvements to Pacific groundfish stock assessments between fiscal years 2001 and 2003, NMFS’ 2001 West Coast Groundfish Research Plan estimated that almost twice as much money would be needed—approximately $11.7 million in new funding—to implement top-priority improvements to Pacific groundfish stock assessments. NMFS is now updating its plan and cost estimates for improving Pacific groundfish stock assessments. Using key findings from its December 2003 review of the groundfish program, the Northwest Center plans to update its groundfish research plan, last published in 2001. According to NMFS, the updated groundfish research plan should be completed in late 2004 and is designed to (1) provide a comprehensive framework for Pacific groundfish, (2) identify some of the greatest information gaps, and (3) provide guidance for setting priorities on work to fill these gaps. In addition, the updated plan will estimate how much such improvements will cost. Stock assessments are the key to effectively managing fisheries. They provide estimates of the species population, which NMFS uses to set harvest limits that allow for sustainability and/or recovery of the species. While stock assessment results often change from assessment report to assessment report, the more types of information used in the assessments, such as recruitment surveys and ecosystem studies, and the greater the accuracy and quality of the data, such as scientifically designed and collected data, the more reliable the assessment results. However, the Pacific groundfish assessments we reviewed did not (1) use scientifically designed and collected NMFS data of sufficient scope and accuracy, such as survey data on the abundance of groundfish residing in rocky, untrawlable habitats; (2) subject the non-NMFS data used to a standard process for assessing its reliability; and/or (3) identify the uncertainties of the assessments total biomass estimates. As a result, the reliability of the five assessments is questionable. Without reliable assessments, fishery managers may reach erroneous conclusions and take actions that could adversely affect the fishing industry economically or adversely affect the recovery and sustainability of the fishery resources. Moreover, without a comprehensive, integrated improvement plan, funding requests and planned actions to improve the stock assessments may not be coordinated, jeopardizing successful and timely implementation of assessment improvements. To improve the reliability of Pacific groundfish stock assessments, we recommend that the Secretary of Commerce require the Director of National Marine Fisheries Service to take the following four actions: Continue efforts to collect more types of data, such as data obtained from surveys in rocky, untrawlable waters, recruitment surveys, and ecosystem studies, for groundfish assessments where reliable data are now lacking. Establish a standard approach that requires that non-NMFS data used in stock assessments be evaluated for its reliability, and continue efforts to implement the task force’s recommendations to improve data quality. Require that stock assessment reports clearly present the uncertainties in the assessments, such as the margin of error associated with species biomass estimates. Develop a comprehensive plan that integrates the NMFS stock assessment improvement plan with other NMFS plans to ensure that stock assessment improvement actions and budget requests are coordinated. 
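To illustrate what a standard reliability screen under the second recommendation might look like in practice, the sketch below flags logbook records with the kinds of problems assessors described, such as an entire vessel's catch attributed to a single fisherman or missing location information. The record fields, threshold, and function name are hypothetical assumptions for illustration, not an NMFS specification.

```python
def screen_logbook_records(records):
    """Illustrative screen: flag logbook records with common reliability problems.

    Each record is a dict with hypothetical fields: 'vessel_id', 'fisher_count',
    'catch_mt', 'area', and 'date'. Returns (usable, flagged) lists.
    """
    usable, flagged = [], []
    for rec in records:
        problems = []
        # Incomplete records: required fields missing or blank.
        for field in ("vessel_id", "catch_mt", "area", "date"):
            if not rec.get(field):
                problems.append(f"missing {field}")
        # Implausible attribution, e.g., an entire vessel's catch recorded
        # against a single fisherman (threshold is an arbitrary placeholder).
        if rec.get("fisher_count") == 1 and rec.get("catch_mt", 0) > 50:
            problems.append("entire vessel catch attributed to one fisherman")
        # Negative or nonsensical catch weights.
        if rec.get("catch_mt", 0) < 0:
            problems.append("negative catch weight")
        (flagged if problems else usable).append({**rec, "problems": problems})
    return usable, flagged

# Hypothetical records for demonstration only.
sample = [
    {"vessel_id": "V1", "fisher_count": 1, "catch_mt": 120, "area": "43N", "date": "2002-06-01"},
    {"vessel_id": "V2", "fisher_count": 4, "catch_mt": 35, "area": "", "date": "2002-06-02"},
    {"vessel_id": "V3", "fisher_count": 3, "catch_mt": 12, "area": "44N", "date": "2002-06-03"},
]
ok, bad = screen_logbook_records(sample)
print(f"{len(ok)} usable record(s), {len(bad)} flagged for review")
```

A screen of this sort would not replace assessor judgment, but it could give every assessment a documented, repeatable first pass over the non-NMFS data.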
We provided the Department of Commerce with a draft of this report for review and comment. We received a written response from the Under Secretary of Commerce for Oceans and Atmosphere that included comments from the National Oceanic and Atmospheric Administration (NOAA). NOAA generally agreed with the report’s accuracy and concurred with the report’s recommendations. However, NOAA said it was concerned about the report’s conclusion—that the reliability of stock assessments is questionable for the five species reviewed—because it could be misconstrued to mean that the assessments are unreliable for use in managing the west coast groundfish fishery. In this regard, NOAA provided additional comments to show the usefulness of the assessments, even if some of the input data used in the assessments contained errors. We stand by our conclusions that the five stock assessments we reviewed were questionable because the input data were insufficient and/or potentially inaccurate and that four of the assessment reports did not present the uncertainties associated with the biomass estimates. Nonetheless, we added language to the report to address NOAA’s concern. Specifically, we expanded upon the fact that NMFS used the best information available at the time the stock assessments were conducted by adding information on the importance of the assessments to effectively manage the fisheries. Without these stock assessments, NMFS and fishery managers would have very limited information on which to base fishery management decisions. NOAA agreed with the report recommendation to continue collecting more types of data for groundfish assessments where reliable data are now lacking. NOAA said that the reliability of stock assessments will be improved if NMFS survey efforts are expanded and additional NMFS fishery data are collected. NOAA said NMFS places a priority on these improvements and will continue efforts to address this and other recommendations to improve the collection of fishery data as funding becomes available. NOAA also agreed with the report recommendation to establish a standard approach to evaluate the reliability of non-NMFS data used in stock assessments and continue efforts to improve data quality. NMFS said that, through its west coast fishery science centers, it participates on interagency data committees to develop quality assurance protocols and to assess the quality of non-NMFS data. NOAA agreed that it is important to ensure that these interagency data committees continue to highlight the need for standardized quality control procedures for the collection of data. NOAA agreed with the report recommendation to clearly present the uncertainties in the stock assessments. NOAA said that quantifying the uncertainty of stock assessments is important to sound decision-making because it provides more information about the assessment, although this quantification does not reduce the uncertainty in the assessment itself. While the methods used and the completeness of the uncertainty characterization varied from assessment to assessment, NOAA said it is desirable to have both a quantitative analysis of model uncertainty and an evaluation of the consequences of alternative model scenarios. Finally, NOAA agreed with the report recommendation to develop a comprehensive plan that integrates the stock assessment plan with other NMFS plans to ensure that improvement actions and budget initiatives are coordinated.
NOAA said that while much remains to be done, long-term planning efforts and coordination among field and headquarters are ongoing, and NOAA is committed to these actions. NOAA’s comments and our detailed responses are presented in appendix II of this report. NOAA also provided technical comments that we incorporated in this report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies of this report to the Secretary of Commerce and the Director of the National Marine Fisheries Service. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841 or Keith Oleson at (415) 904-2218. Key contributions to this report are listed in appendix III. We reviewed National Marine Fisheries Service (NMFS) stock assessments for five species of Pacific groundfish: Pacific hake (Pacific whiting) as well as four types of rockfish—bocaccio, canary, darkblotched, and yelloweye. Specifically, for these five species you asked us to (1) assess the reliability of NMFS’ stock assessments, (2) identify which relevant recommendations from the stock assessment improvement plan have been implemented and which have not, and (3) identify the estimated costs associated with planned and ongoing improvements to groundfish stock assessments. We did not review the stock assessments of any of the other west coast Pacific groundfish species, thus the information contained in this report pertains to the five species we reviewed unless stated otherwise. For all three objectives, we reviewed key laws and agency reports and interviewed officials from NMFS, including officials from the Northwest Fisheries Science Center, which has lead responsibility for conducting Pacific groundfish stock assessments with assistance from other west coast science centers. To assess the reliability of the Pacific hake, bocaccio, canary, darkblotched, and yelloweye stock assessments, we examined methodological and administrative documents developed by NMFS and others to support the groundfish data collection, maintenance, and assessment process. We reviewed the controls over stock assessment data, the types of fish population surveys used, and recent Pacific groundfish stock assessment studies (2002 and 2004 studies for Pacific hake, 2002 and 2003 studies for bocaccio, 2002 study for canary, 2000 and 2003 studies for darkblotched, and 2001 and 2002 studies for yelloweye). We examined whether and to what extent NMFS has processes and procedures in place to ensure the reliability of data used in the Pacific groundfish stock assessments. We reviewed the stock assessment reports and determined whether they articulated the level of uncertainty in the assessment model estimates. We interviewed an array of government officials and fisheries experts, including the Oregon Department of Fish and Wildlife, the Washington Department of Fish and Wildlife, the California Department of Fish and Game, the Pacific Fishery Management Council, the Pacific States Marine Fisheries Commission, environmental groups, and industry associations, as well as fishermen and academics. 
We did not simulate NMFS’ stock assessment models nor evaluate the mathematical and statistical methodologies used in the models for Pacific hake, bocaccio, canary, darkblotched, and yelloweye. To identify the relevant recommendations to improve stock assessments that NMFS has implemented and has not implemented, we reviewed agency reports on marine fisheries stock assessments, strategic planning, and data collection. We also interviewed officials from the Oregon Department of Fish and Wildlife, the Washington Department of Fish and Wildlife, the California Department of Fish and Game, the Pacific Fishery Management Council, and the Pacific States Marine Fisheries Commission as well as environmental groups, industry associations, fishermen, and academics. To determine the estimated costs associated with NMFS’ planned and ongoing improvements to Pacific groundfish stock assessments, we reviewed relevant budget requests and funding documents for fiscal years 2001 through 2005 and interviewed National Oceanic and Atmospheric Administration officials. We did not evaluate the accuracy of NMFS’ budget requests for specific project items but rather used the amounts NMFS requested for these project items to estimate the total additional costs of implementing the planned and ongoing improvements to Pacific groundfish stock assessments. We conducted our review from May 2003 through April 2004 in accordance with generally accepted government auditing standards. The following are GAO comments on NOAA’s letter dated May 13, 2004. 1. We added clarifying language to the scope and methodology section of the report to clearly identify the species and activities covered by the review. 2. We revised the report to show the publication date. 3. We revised the report accordingly. 4. We revised the report to clearly show that NMFS has not collected enough ecosystem data and that the frequency and range of recruitment surveys are limited. The statement does not address untrawlable habitat. 5. We revised the report to clarify that NMFS “generally uses” NMFS’s staff or contracts with outside experts. 6. We revised the report to more clearly differentiate between NMFS as a whole and NMFS’ Northwest Center in particular. We made similar revisions, as appropriate, throughout the report. 7. We revised the report to specify “bottom” trawl survey. We made similar changes, as appropriate, throughout the report. 8. We revised the report to include the year and scope of the task force review. 9. We revised the report to indicate that the Northwest Center is responsible for coordinating groundfish stock assessments. 10. We revised the report to include the citation. 11. We revised the report to include the date and citation of the National Research Council report. 12. The NMFS data used in the bocaccio, canary, and darkblotched assessments were limited because NMFS conducted its surveys in trawlable waters only. NMFS data were not available for untrawlable waters, which these species also inhabit. For this reason, we did not revise the report. 13. We revised the report to clarify the shared responsibilities of the Northwest and Southwest Centers. 14. We revised the report to include NOAA’s recommended definition of stock abundance. 15. We revised the report to include larval fish. 16. We revised the report to clarify the role of the review panel. 17. We revised the report to more explicitly distinguish the five species related to our report from other overfished Pacific groundfish. 18. 
We revised the report to more clearly describe the distribution of Pacific hake. 19. Bocaccio survey data for untrawlable habitats, as stated in comment 12, was unavailable. For this reason, we did not change the report. 20. NOAA commented that highly standardized protocols are used for collecting non-NMFS data (fishery dependent data) for rockfish. We found that although NMFS does have collection and quality assurance procedures for state-collected non-NMFS data, NMFS does not check or have a standard process to verify that these data have been reviewed for reliability. As discussed in our report, some assessors chose to review the raw data, while others did not. Assessors who voluntarily reviewed raw non-NMFS data found mistakes that either made some of the data unusable or could have impaired the accuracy of the stock assessments. For these reasons, we did not change the report. 21. We revised the punctuation accordingly. 22. We revised the report to clarify that Pacific hake live in mid-water habitat. 23. The footnote placement and citation are in accordance with GAO guidelines. For this reason, we did not change the report. 24. We believe that our report has addressed this issue. By referring to the West Coast Groundfish Research Plan by its complete title, we adequately distinguish between the two reports. For this reason, we did not change the report. 25. We changed “health” to “stock abundance.” 26. The footnote placement is repositioned in report. 27. We revised the report to include assessment “improvements.” 28. As indicated in our report, we illustrate some of the actions that the Northwest Center took to improve data quantity and did not intend to provide a comprehensive list of all actions conducted to improve data quantity coast wide. However, we added footnote 16 to clarify the actions taken by the Southwest Center. 29. The example we provided is not intended to be a comprehensive list of all ecosystem research conducted on the west coast. Instead, it illustrates the type of work the Northwest Center is conducting and the opportunities for improving ecosystem research. For this reason, we did not change the report. 30. After reviewing the report we believe no change is required because of subject-verb agreement. 31. We added clarifying language. 32. We believe that table 2 notes “a” and “b” in our report already adequately address this issue. Annotations for projects that do not separate out groundfish funds occur only in items that are annotated as Southwest Center projects. For this reason, we did not change the report. 33. We changed “survivability” to “sustainability.” 34. We changed “survivability” to “sustainability” and added fishery “resources” for clarification. 35. NOAA commented that GAO does not adequately convey the different degrees of precision associated with the stock assessments and GAO’s conclusion that the reliability of the five assessments we reviewed is questionable and could easily be misconstrued to mean all these assessments are an unreliable basis for management of the west coast groundfish fishery. NOAA also commented that the five assessments GAO reviewed all passed scientific review and are serving as the basis for formal status determination and fishery management. Our report acknowledges that stock assessments are scientifically reviewed and are a key tool for managing fisheries. 
However, we found the reliability of the five assessments questionable for the three reasons we highlighted in our report, and we recommended actions on how to improve the reliability of the stock assessments. We added clarification to the report to show that stock assessments are a key tool for managing fisheries and are important in making decisions about setting harvest levels and developing plans to rebuild overfished stocks. NOAA also commented that quality assurance for non-NMFS data is not absent. As stated in our response number 20, we found that although NMFS does have collection and quality assurance procedures for state-collected non-NMFS data, NMFS does not check or have a standard process to verify that these data have been reviewed for reliability. As discussed in our report, some assessors chose to review the raw data, while others did not. Assessors who voluntarily reviewed raw non-NMFS data found mistakes that either made some of the data unusable or could have impaired the accuracy of the stock assessments. For these reasons, we did not change the report. 36. NOAA commented that it is more pertinent to focus on the degree of standardization of the survey data than on the source. By categorizing data as NMFS data and non-NMFS data, we were not implying that non-NMFS organizations could not conduct useful fishery-independent surveys. We categorized the data in this manner because NMFS currently conducts nearly all of the fishery-independent surveys and non-NMFS organizations collect most of the fishery-dependent data. Footnote 7 in the report states that NMFS generally refers to its data as fishery-independent data and to non-NMFS data as fishery-dependent data. For these reasons, we did not change the report. 37. We believe the Pacific hake biomass estimates are questionable because the assessment used non-NMFS data that NMFS did not check or subject to standard data reliability testing. Assessors who reviewed raw non-NMFS data for other stock assessments found mistakes that either made some of the data unusable or could have impaired the accuracy of the stock assessments. For this reason, we did not change the report. 38. NOAA commented that the bocaccio, canary, and darkblotched assessments all obtain adequate abundance trend information from the NMFS bottom trawl surveys. NOAA also commented that although the bottom trawl survey cannot access the roughest habitat, it is useful as an index of relative changes in the overall abundance. As stated in our report, we found that the NMFS survey data used in these assessments were limited in scope because the surveys were conducted only in trawlable areas. Assessors estimated overall biomass using the NMFS data collected from the trawlable area, which has a different abundance rate than the untrawlable area. Stock assessors commented that relying on survey data from trawlable waters only increases the uncertainty of stock assessments. For these reasons, we did not change the report. 39. As noted in our report, the National Research Council found that the inclusion of NMFS survey data was the best option for a reliable estimate of abundance because such surveys use an unbiased statistical design, control sampling locations, and provide for quality assurance. Northwest Center officials said that to obtain reliable results, each stock assessment should include at least one source of NMFS-collected data of sufficient scope and accuracy because such surveys are unbiased and scientifically designed.
NMFS data were unavailable for the yelloweye assessment. Northwest Center officials also raised concerns about basing assessments solely on non-NMFS data such as commercial and recreational catch data. Catch data do not provide the species’ relative or absolute biomass, according to NMFS officials. Catch data alone are insufficient because fishermen are not randomly sampling the ocean, but are fishing areas that they are allowed to fish and they believe have the most fish; fishing restrictions, such as a total allowable catch, can limit the amount of fish being caught; and catch data have often been inaccurate for a variety of reasons, such as imprecise accounting for dead fish tossed back into the ocean. For these reasons, we did not change the report. 40. NOAA commented that the doubling of estimated bocaccio biomass in 2003 was due to factors that would not be addressed in a standard statistical analysis. Although a standard statistical analysis may not fully address the doubling of an estimate, an assessment without an uncertainty range does not quantify and communicate any of the uncertainty. For this reason, we did not change the report. In addition to the person named above, Leo G. Acosta, Kristine N. Braaten, Allen T. Chan, David Dornisch, Alan Kasdan, Robert Marek, Cynthia C. Norris, Carol Herrnstadt Shulman, and Tama R. Weinberg made key contributions to this report. | Because of concerns raised about the accuracy of National Marine Fisheries Service (NMFS) stock assessments, GAO reviewed the assessments for five species of Pacific groundfish: Pacific hake and four types of rockfish--bocaccio, canary, darkblotched, and yelloweye. Specifically, for these five species GAO (1) assessed the reliability of NMFS’ stock assessments, (2) identified which relevant recommendations from NMFS’ stock assessment improvement plan have been implemented and which have not, and (3) identified the costs associated with planned and ongoing improvements to groundfish stock assessments.
The reliability of the NMFS assessments is questionable for the five species GAO reviewed, although the assessments were based on the best information available at the time they were conducted. According to NMFS officials and a National Research Council report, to obtain reliable results each stock assessment should include at least one NMFS data source of sufficient scope and accuracy because such data are derived from unbiased, statistical designs. However, in the yelloweye assessment, no NMFS data were used, and in the darkblotched, canary, and bocaccio assessments, the NMFS data were limited because the NMFS' surveys were conducted in trawlable waters only. A 2003 NMFS report concluded that darkblotched groundfish are less abundant and bocaccio and canary are more abundant in untrawlable waters. Also for all five species, NMFS lacks a standard approach for ensuring the reliability of non-NMFS data used in stock assessments. Some assessors reviewed the quality of non-NMFS data; others did not. The assessors who reviewed the quality of the non-NMFS data found errors that made some of the data unusable or that could have impaired the reliability of certain stock assessments. Finally, for four species, the stock assessment reports were questionable because they did not present the uncertainty associated with the population estimates. For example, the canary stock assessment review panel recommended that standard estimates of uncertainty be included in the assessment report because without them it is difficult to determine their reliability. NMFS has taken steps to implement some of the recommendations contained in the NMFS stock assessment improvement plan, but much remains to be done. NMFS has concentrated its efforts mostly on improving data quantity. For example, NMFS increased the frequency of groundfish stock assessments and extended the geographic ranges of the shelf and slope surveys to cover over 300 more miles along the southern California coast. However, because of staffing and funding limitations, NMFS has not yet implemented many of the recommendations aimed at obtaining more types of data and improving data quality. For example, NMFS has not collected enough ecosystem data, and the frequency and range of recruitment surveys (estimated production of new members of a fish population) are limited. Finally, because of other program priorities, NMFS has not implemented the recommendation to create a comprehensive plan that combines the improvement plan and its complementary plans. NMFS records indicate at least $8.9 million is needed to complete ongoing and planned stock assessment improvements--$2.6 million that NMFS' Northwest Fisheries Science Center requested but did not receive in fiscal years 2001 to 2003, and $6.3 million requested for fiscal years 2004 and 2005. It will cost about (1) $7.7 million to improve the types of data used, such as more untrawlable water and recruitment surveys and (2) $1.2 million to improve the quality of data used in stock assessments, such as enhanced calibration of vessel equipment and standardized trawl survey procedures. The actual cost of the remaining improvements may be even higher than the $8.9 million estimated because the estimates primarily reflect the amount of money that agency officials believed could be realistically obtained, rather than what the improvements might cost. |
The Air Force’s Air Mobility Command (AMC) has 104 C-5, 199 C-141, and 16 C-17 strategic airlift aircraft in its fleet. It also has 54 KC-10 and 448 KC-135 tanker aircraft, which can carry cargo. The C-5 aircraft, the largest airlifter, can carry 73 troops and 36 standard cargo pallets or outsize cargo, such as tanks and helicopters. The Air Force received its C-5A models from 1969 to 1973 and its C-5B models from 1986 to 1989. The C-5B model incorporates over 100 reliability and maintainability changes from the previous model and has substantially higher mission capable rates. The C-5 has been used more than planned since Operation Desert Storm in response to various contingencies as well as shortages of C-141 aircraft and delays in C-17 deliveries. AMC developed a plan to guide the modernization of the C-5 aircraft into the next century and help ensure that the C-5 remains a viable mobility asset. AMC officials believe this modernization effort is important to address concerns regarding the aging aircraft and improve the aircraft’s reliability and maintainability. In addition to being a major command of the Air Force, AMC is a component of the U.S. Transportation Command, a unified command that provides air, land, and sea transportation for DOD. As a component, AMC is responsible for providing global airlift services and air refueling operations. AMC developed a mission capability rate goal for the C-5 fleet of 75 percent, which means that C-5s must be able to perform one of their major missions 75 percent of the time. Mission capability is a standard used on all military aircraft that allows for easier comparisons among aircraft. Although Air Force planners count on increasing aircraft mission availability in wartime by adding more maintenance personnel and deferring some maintenance inspections, little can be done to increase the spare parts initially available for each plane. Peacetime mission capability rates, especially as they are affected by adequate spare parts availability, are therefore good predictors of likely wartime aircraft mission capability. AMC currently estimates that C-5 aircraft can attain a 14.6 million ton miles per day airlift capability, which would represent almost one-half of the Air Force’s total military aircraft airlift capacity. Mission capable rates for AMC C-5 aircraft averaged just under 68 percent from July 1994 to June 1995. These rates have been declining since Operation Desert Storm, when AMC achieved mission capable rates of 75 percent or higher. In addition, the C-5 mission capable rates were considerably below comparable airlift and tanker aircraft during the same period, as shown in table 1.1. For example, AMC C-5 mission capable rates averaged over 5 percentage points below those of the troubled C-141 aircraft, which is gradually being retired. Factors accounting for the relatively poorer C-5 mission capable rates included inadequate spare parts support, higher complexity associated with a large aircraft, and the generally poorer reliability characteristics of the older C-5A model aircraft. C-5 aircraft are classified as not mission capable when they are either undergoing maintenance or lack spare parts. Between 25 and 50 percent of all not mission capable problems in recent years have been due to a lack of spare parts, as shown in figure 1.1. AMC has established a goal that the total not mission capable supply (TNMCS) rate should not exceed 7 percent for its operational C-5 fleet. 
Although the TNMCS rate has shown some improvement in the last few years, it still remains considerably above AMC’s goal, as shown in figure 1.2. Air Force officials said that the C-5 has historically not received enough spare parts primarily because spare parts procurement was budgeted and allocated based on the number of programmed flying hours. Also, the Air Force funds C-5 spares based on a projected 12.6-percent TNMCS rate. Since the C-5 has been exceeding the number of planned flying hours each year, fleetwide TNMCS rates have been even higher than 12.6 percent; in fiscal year 1994, for example, the rate was about 16.5 percent. Air Force personnel are sometimes able to work around spare parts shortages by taking parts from one aircraft and using them for another (referred to as cannibalization). According to a recent C-5 Program Management Review, cannibalization tends to decrease the life expectancy of aircraft systems and consumes vast amounts of labor that could better be employed elsewhere. AMC’s goal is one cannibalization action a month per aircraft. Figure 1.3 shows that AMC C-5 aircraft cannibalization actions have remained at a level well above the AMC standard for several years. To address the spare parts problem, the Air Force changed its calculation method for fiscal year 1994 to recognize that the C-5 has been flying more than its number of programmed hours. Also, for fiscal year 1994, the Air Force allowed some high-priority weapon systems, such as the C-5, to receive more spare parts funding than lower-priority systems. These changes may have partly accounted for the improved TNMCS rate during fiscal year 1995. However, neither change has helped improve the cannibalization rate. For fiscal year 1996, the Air Force has proposed raising C-5 spares funding to a level designed to achieve a 7.5-percent TNMCS rate rather than the current 12.6-percent goal. Air Force officials expect that raising the spares support level will add about $4.6 million to annual C-5 spares costs. The C-5’s mission capability rates could increase if the Air Force were to conduct a readiness evaluation similar to the operational readiness assessment conducted for B-1B bomber aircraft. That assessment, conducted by the Secretary of the Air Force at the direction of the 1994 National Defense Authorization Act, was to determine if the B-1B could sustain a 75-percent readiness rate, about 18 percentage points higher than it was achieving at that time. The Air Force Operational Test and Evaluation Center (AFOTEC) was enlisted as an independent agent to direct the test and report on the assessment activities. An AFOTEC official estimated that the total cost of conducting the assessment was about $2.2 million. During the B-1B operational assessment, AFOTEC used the results from a test wing to project that the B-1B fleet could achieve mission capable rates of 75 percent by better managing spare parts repair cycles and making better use of existing spares with few new assets. AFOTEC also found that these changes would increase annual program funding by $11 million to $12 million over and above funds already committed for various improvements, initiatives, and spare parts. AFOTEC’s findings were evaluated by the DOD Operational Test and Evaluation Agency as well as by us. Both evaluations supported AFOTEC’s conclusions. After the assessment was completed, the test wing’s mission capable rate rose to 84 percent, and the entire fleet mission capable rate rose to 66 percent.
According to the DOD Operational Test and Evaluation Agency, the primary reason the mission capable rate increased was better spares support—that is, more spares available at the test location and faster turnaround at the intermediate or depot levels. Leadership attention and the significance of the test were important motivating factors, but the mission capable rate could not have been raised without spare parts improvements. Maintenance downtime was reduced when spares were immediately available, and more spares lessened the chance that parts would have to be cannibalized. One of the major factors accounting for better B-1B spare parts support was the use of the Distribution and Repair in Variable Environments (DRIVE) model. DRIVE manages repair requirements by prioritizing repairs based on their effect on mission capable rates. Current systems, including the one used for the C-5, prioritize repairs based only on the amount of time the part has been in the repair process. In addition, a 1992 Rand report advocated using the DRIVE model to emphasize the effect of repairs on mission capability rather than relying on more traditional indicators. The Air Force mandated use of the DRIVE system at its depots in January 1994, but the system has not yet been implemented by the San Antonio Air Logistics Center, the C-5 depot. Although the C-5 and B-1B are different aircraft with different missions, we believe a C-5 readiness evaluation could yield results similar to those of the B-1B evaluation. For example, both aircraft have had historically low mission capable rates and poor spare parts support. Also, before the B-1B test, Air Force officials did not think the mission capable rate for the B-1B could be raised nearly as high as the evaluation later demonstrated. However, the officials are now projecting a fleetwide increase in B-1B mission capable rates of 15 percentage points. Air Force airlift officials have stated that improvements to the spares process would have little impact on C-5 mission capability. However, we think improvements similar to the B-1B spare parts process changes could be applied to the C-5 spares process as well. Officials from the C-5 manufacturer stated that improving the C-5 spares process by analyzing parts that most affect mission capable rates, similar to the DRIVE model philosophy, and improving the spare parts pipeline could result in a 40-percent reduction in TNMCS rates. That reduction would increase the mission capable rate fleetwide by about 6.6 percentage points. An increase of this magnitude would give DOD an additional 1.3 million ton miles a day of cargo-carrying capability—the equivalent of 10 C-17 aircraft. AMC officials identified several difficulties in reducing TNMCS rates for the C-5 aircraft by 40 percent. Officials noted that the practical requirement to maintain an aircraft at each of the two active bases for cannibalization constitutes a significant portion of the TNMCS rate. They further noted that aircraft undergoing refurbishment or unit inspections also contribute to the TNMCS rate. Notwithstanding this position, we note that if AMC achieved its 7-percent TNMCS goal, it would have accomplished about a 40-percent reduction in the TNMCS rate—which C-5 manufacturer officials projected. AMC established a C-5 modernization plan to increase mission capability rates and reduce personnel requirements and life-cycle costs.
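The difference between DRIVE-style prioritization and the time-in-repair approach described above can be sketched in a few lines. The part names, quantities, and scoring rule below are hypothetical simplifications; DRIVE itself is a more elaborate Air Force model, and this is only an illustration of ranking repairs by their expected effect on mission capable aircraft rather than by how long an item has sat in the queue.

```python
from dataclasses import dataclass

@dataclass
class RepairItem:
    part: str
    days_in_repair: int        # age of the item in the repair queue
    aircraft_grounded: int     # hypothetical: aircraft not mission capable awaiting this part
    spares_on_shelf: int       # hypothetical: serviceable spares already available

def time_based_order(queue):
    """Current-style ordering: the oldest items in the repair process go first."""
    return sorted(queue, key=lambda r: r.days_in_repair, reverse=True)

def capability_based_order(queue):
    """DRIVE-style ordering (simplified): repair first the parts whose absence
    grounds the most aircraft and that have no serviceable spares to cover them."""
    return sorted(queue, key=lambda r: r.aircraft_grounded - r.spares_on_shelf,
                  reverse=True)

queue = [
    RepairItem("hydraulic selector valve", days_in_repair=5,  aircraft_grounded=4, spares_on_shelf=0),
    RepairItem("autopilot component",      days_in_repair=40, aircraft_grounded=1, spares_on_shelf=2),
    RepairItem("landing gear actuator",    days_in_repair=20, aircraft_grounded=2, spares_on_shelf=0),
]

print("Time-based:      ", [r.part for r in time_based_order(queue)])
print("Capability-based:", [r.part for r in capability_based_order(queue)])
```

Under the time-based ordering, the oldest item (the autopilot component) comes out first even though shelf spares already cover it; under the capability-based ordering, the part grounding the most aircraft moves to the front of the queue.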
According to AMC officials, modification initiatives are generally prioritized based on potential reliability and maintainability improvements to the aircraft as well as cost. The resulting priorities are later modified and updated by various reviewing officials. AMC’s top 10 proposed modifications, at the time of our review, and our estimate of their impact on mission capability, are shown in table 1.2. Many of these modifications will not be funded until at least the year 2000 and completed several years after that. Even though we were able to calculate potential mission capable rate increases for each of the top priority modifications, AMC has not analyzed how much the modifications would contribute to increasing mission capability. Until AMC does that analysis, decisionmakers cannot consider the impact that the proposed improvements could have on mission capability or total airlift capability. Also, if AMC considered mission capability increases as a key factor in prioritizing planned C-5 modifications, the current order of priorities would most likely change. However, we recognize that AMC might have to consider other factors, such as safety considerations, when it prioritizes modifications. We identified the 10th-priority modification—hydraulic valve replacement—as being relatively low in cost but having the most potential for increasing aircraft mission capability. Failures associated with the C-5’s hydraulic system are one of the leading causes of reliability problems. The hydraulic valve replacement is designed to eliminate surges when opening selector valves on the landing gear, cargo doors, and ramps. Because this modification was only recently identified as one of the top 10 priorities, it has not been scheduled for funding. However, AMC estimated that the modification could be funded as early as fiscal year 1997. The C-5 manufacturer estimates that failures in hydraulic system plumbing, mounting fixtures, and components should decrease by two-thirds to three-fourths when the hydraulic valve modification is completed. More importantly, the 1.1-percentage point potential increase in C-5 mission capability resulting from the modification would provide DOD with an additional 0.18 million ton miles per day of cargo-carrying capability—equating to 1.4 C-17 aircraft. In comparison, the two top priority modifications—autopilot replacement and engine turbine improvement—would likely only increase mission capability a little at a relatively large cost. Other high-priority efforts, such as floor corrosion prevention and courier compartment flooring, are improvements that would not result in any potential increase in aircraft mission capability. DOD has not been providing adequate funding to meet the original schedule for proposed C-5 improvements. For example, two major upgrades to improve the C-5’s reliability, the malfunction detection analysis and recording system and the main landing gear actuator, were first identified in fiscal year 1985 and scheduled to be completed by fiscal year 1994. However, funding delays have stretched these modifications by 4 years to fiscal year 1998. According to our 1992 report, one of the major factors contributing to the C-141’s recent severe problems was inadequate funding to implement necessary modifications. AMC stated in its 1995 Air Mobility Master Plan that not completing scheduled improvements would degrade capability and increase operating costs. 
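The C-17-equivalent figures used throughout this chapter rest on the simple division described in the Scope and Methodology section: an increase in the C-5 cargo contribution, in million ton miles per day, divided by AMC's estimated per-aircraft C-17 contribution. The sketch below assumes a per-C-17 planning factor of roughly 0.13 million ton miles per day, a value inferred from the report's own examples (0.18 equating to about 1.4 C-17s, and 1.3 to about 10); the exact AMC factor, and the utilization-rate formulas used to derive the capability gains themselves, are not reproduced here.

```python
def c17_equivalents(capability_gain_mtm_per_day, c17_contribution_mtm_per_day=0.13):
    """Convert a gain in C-5 cargo capability into equivalent C-17 aircraft.

    capability_gain_mtm_per_day  -- increase in C-5 contribution, million ton miles/day
    c17_contribution_mtm_per_day -- assumed AMC planning factor for one C-17
                                    (approx. 0.13, inferred from the report's examples)
    """
    return capability_gain_mtm_per_day / c17_contribution_mtm_per_day

# The report's two examples, reproduced approximately:
for gain in (0.18, 1.3):
    print(f"{gain:.2f} million ton miles/day is roughly {c17_equivalents(gain):.1f} C-17s")
```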
We recommend that the Secretary of Defense direct the Secretary of the Air Force to (1) conduct a readiness evaluation to determine how C-5 peacetime mission capability can be improved and the costs of such improvements and (2) assess the impact of proposed aircraft modifications on C-5 mission capability and then reprioritize the proposals according to the results of the assessment. We also recommend that the Secretary direct the Commander in Chief, U.S. Transportation Command, to include in strategic mobility planning the potential increase in airlift cargo capability made possible by a higher C-5 mission capable rate. DOD partially concurred with our report (see app. I). DOD stated that it has initiated some actions that would satisfy the intent of our recommendation that the Air Force conduct a readiness evaluation. These actions include conducting a 1994 logistics demonstration project to improve and streamline the C-5 management structure and policies for handling spare parts and repairing components, as well as incorporating lessons learned from the B-1B operational readiness assessment to better manage the C-5 program. Although these actions are good first steps, DOD must ensure that they are fully implemented. In particular, DOD needs to use the DRIVE model, which was successfully demonstrated during the B-1B assessment, to allocate C-5 spare parts and prioritize their repair. DOD agreed with our recommendation that the Air Force assess the impact of proposed aircraft modifications on mission capability and reprioritize the modifications accordingly. DOD noted that the San Antonio Air Logistics Center was developing a computer model that will be able to quantify the effects of proposed aircraft reliability improvements on mission capability. DOD expects this model, scheduled for completion in July 1996, to help improve the method for prioritizing C-5 modifications. DOD did not agree with our recommendation that the Transportation Command’s strategic mobility planning include the potential increase in C-5 cargo capability resulting from a higher mission capable rate. DOD stated that the potential cargo capability increase would not translate directly into increases in cargo delivered to a theater of conflict because of the limited airfield infrastructure (including ramp space, refueling facilities, and material handling equipment). Although potential increases in cargo capability identified in our report may not translate directly into cargo delivered to the theater under some scenarios, the potential capability still exists under more unconstrained scenarios with many available airfields or fields with areas large enough to accommodate substantial numbers of C-5 aircraft. To maximize potential C-5 cargo deliveries, DOD should consider using C-5 aircraft in the more unconstrained scenarios. DOD bases many of its conclusions about a more capable C-5 aircraft on studies of buying additional quantities of a new C-5D aircraft, which has not yet been developed. These conclusions could be substantially different if DOD looked at current quantities of more capable existing C-5A and C-5B aircraft. Therefore, we continue to believe DOD should consider the implications of more capable existing C-5 aircraft in its modeling efforts and decisions on the mix of future aircraft. 
We conducted our review at AMC, Scott Air Force Base, Illinois; 436th Airlift Wing, Dover Air Force Base, Delaware; C-5 System Program Director’s Office, San Antonio Air Logistics Center, Kelly Air Force Base, Texas; Lockheed Aeronautical Systems Company, Marietta, Georgia; and Air Force Headquarters, Washington, D.C. We interviewed various officials at these locations and reviewed pertinent regulations, guidance, and reports pertaining to the subject areas. We also interviewed officials regarding the B-1B readiness assessment and DRIVE model at the Air Force Operational Test and Evaluation Center, Kirtland Air Force Base, New Mexico; Air Combat Command Headquarters, Langley Air Force Base, Virginia; and Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio. To calculate potential aircraft availability and mission capability increases, we relied on Air Force and C-5 manufacturer estimates of increases in mission capable hours attributable to the proposed changes. We added the mission capable hours attributable to those improvements to the 1994 total fleet mission capable hours and calculated a revised mission capable rate. We used the revised mission capable rate to calculate a new aircraft utilization rate, which we used to recalculate a C-5 million ton mile per day cargo contribution. We divided increases in the C-5 cargo contribution by the currently estimated AMC million ton mile per day contribution of a C-17 to determine the equivalent number of C-17s. We conducted our review from August 1994 to August 1995 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Ranking Minority Member of your Subcommittee and the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, House Committee on National Security, and Senate and House Committees on Appropriations; the Secretaries of Defense, the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; the Commander in Chief, U.S. Transportation Command; and the Director, Office of Management and Budget. If you or your staff have any questions concerning this report, please contact me at (202) 512-5140. The major contributors to this report are listed in appendix II. The following is our comment on the Department of Defense’s (DOD) letter dated October 23, 1995. 1. DOD stated that it could not substantiate the additional 1.3 million ton miles per day of capability that we reported the C-5 aircraft could provide. Our calculation was based on the 40-percent improvement in total not mission capable supply (TNMCS) rate projected by the C-5 manufacturer. We discussed how we calculated utilization rates and million ton mile contributions in the Scope and Methodology section. We used standard Air Mobility Command (AMC) formulas in those calculations. In addition, as noted in the report, if AMC met its own 7-percent goal for TNMCS, it could achieve the 40-percent TNMCS reduction projected by the C-5 manufacturer. Major contributors to this report: Gregory Symons, Claudia Saul, and Norman Trowbridge.
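To make the calculation chain described in the Scope and Methodology discussion above easier to follow, the sketch below walks through it with placeholder numbers. AMC’s standard formulas are not reproduced in this report, so the proportional-scaling step and every input value are hypothetical assumptions, except for the per-C-17 contribution, which is simply backed out of the report’s statement that 0.18 million ton miles per day equates to about 1.4 C-17s.

```python
# Illustrative sketch of the calculation chain described in the Scope and
# Methodology section.  AMC's own formulas are not reproduced here, so the
# proportional relationship and all numeric inputs below are hypothetical
# placeholders, not report data.

def revised_mission_capable_rate(baseline_mc_hours, added_mc_hours, possible_hours):
    """Add mission capable hours attributed to proposed improvements to the
    baseline fleet total and express the result as a rate."""
    return (baseline_mc_hours + added_mc_hours) / possible_hours

def c17_equivalents(delta_mtm_per_day, c17_mtm_per_day):
    """Divide the increase in the C-5 cargo contribution by the estimated
    contribution of a single C-17."""
    return delta_mtm_per_day / c17_mtm_per_day

# Hypothetical inputs (placeholders only).
baseline_mc_hours = 500_000   # 1994 fleet mission capable hours
possible_hours    = 800_000   # total possible hours for the fleet
added_mc_hours    = 8_800     # hours attributed to a proposed modification

old_rate = baseline_mc_hours / possible_hours
new_rate = revised_mission_capable_rate(baseline_mc_hours, added_mc_hours, possible_hours)

# The report converts a higher mission capable rate into a higher utilization
# rate and then into million ton miles per day (MTM/D) using standard AMC
# formulas; here that conversion is approximated as simple proportional scaling.
baseline_c5_mtm_per_day = 16.0  # hypothetical fleet contribution
new_c5_mtm_per_day = baseline_c5_mtm_per_day * (new_rate / old_rate)
delta_mtm = new_c5_mtm_per_day - baseline_c5_mtm_per_day

# Implied by the report's statement that 0.18 MTM/D equates to about 1.4 C-17s.
c17_mtm_per_day = 0.18 / 1.4

print(f"Revised mission capable rate: {new_rate:.1%}")
print(f"Added capability: {delta_mtm:.2f} MTM/D, "
      f"about {c17_equivalents(delta_mtm, c17_mtm_per_day):.1f} C-17 equivalents")
```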
The 193 million acres of public land managed by the Forest Service as national forests and grasslands are collectively known as the National Forest System. These lands are located in 44 states, Puerto Rico, and the Virgin Islands and make up about 9 percent of the United States’ total land area (see fig. 1). Stewardship of the National Forest System is carried out through nine regions that oversee 155 national forests; the forests, in turn, oversee more than 600 ranger districts. Each region encompasses a broad geographic area and is headed by a regional forester, who reports directly to the Chief of the Forest Service and provides leadership for, and coordinates the activities of, the various forests within the region. Each forest is headed by a supervisor, who allocates the budget and coordinates activities among the various ranger districts within the forest. Ranger districts, in turn, are headed by a district ranger, who conducts or oversees on-the-ground activities such as construction and maintenance of trails; operation of campgrounds; management of wildlife habitat; and the sale and harvest of forest products, including timber. Ranger districts vary in size from 50,000 acres to more than 1 million acres. Collectively, these field units are overseen by the Chief of the Forest Service, who operates out of the Forest Service’s national headquarters in Washington, D.C. The Chief and other headquarters officials provide broad policy and direction for the agency, monitor the agency’s activities, and inform Congress about agency accomplishments. In fiscal year 2012, the Forest Service had nearly 34,000 full-time-equivalent employees, about 97 percent of whom were in the field, and an enacted budget of about $5.6 billion. At the close of fiscal year 2012, the Forest Service reported having about 158,000 miles of trail used for both recreation and management. (See table 4 in app. II for information on the Forest Service’s trail mileage, usage, and visitors.) Under the National Forest Management Act of 1976, the Forest Service manages its lands for multiple uses—such as timber harvesting, watershed and wilderness protection, protection of fish and wildlife habitat, forage for livestock, and recreation—and the agency’s trails provide access both for agency officials managing lands and for people visiting those lands. Located throughout Forest Service lands, these trails include many that existed before national forests were established and are managed under various land management authorities. For example, the Forest Service manages about 32,000 miles of trail in designated wilderness areas, which, under the Wilderness Act of 1964, are to be administered so as to leave them unimpaired for future use and enjoyment and to protect and preserve their wilderness character, among other goals. Trails in wilderness areas are thus usually less developed and more rugged than nonwilderness trails. The Forest Service’s trail system also includes parts of national scenic and historic trails established under the National Trails System Act of 1968. These long, national scenic trails—such as the Appalachian and Pacific Crest Trails—are to “provide for maximum outdoor recreation potential and for the conservation and enjoyment of . . . the area through which such trails may pass.” National historic trails, such as the Oregon Trail, closely follow a historic travel route of national significance.
The Forest Service’s trails program aims to ensure recreation opportunities, public safety, and backcountry access through operation, maintenance, rehabilitation, and improvement of forest trails. Forest Service trails are categorized by trail type, trail class, and the managed use of each trail. Trail type reflects predominant trail surface and general mode of travel for each trail. The three trail types are standard (or “terra”) trails, which have a surface consisting predominantly of earth; snow trails, which have a surface consisting predominantly of snow or ice; and water trails, which have a surface consisting predominantly of water (but may include portage routes over land). The majority of Forest Service trails are terra trails, and in some cases, a trail may be classified as a terra trail in the summer and a snow trail in the winter. All Forest Service trails must also be categorized by trail class; trail classes are general categories reflecting the prescribed scale of development for each trail. Specifically, class 1 trails are minimally developed, such as those with natural fords instead of bridges in wilderness areas, and are designed to provide a challenging recreation opportunity, usually in a natural and unmodified setting. Conversely, class 5 trails, such as those found at visitor centers or high-use recreation sites, are fully developed, have gentle grades, and are often paved. About half of National Forest System trails are class 3 trails, which may have some minor obstacles, such as rocks, and generally pose a moderate level of challenge to users. (For more information on miles of trails by trail class, see table 5 in app. II.) All Forest Service trails must have at least one managed use, which reflects the mode(s) of travel appropriate on a trail, given its design and management. For example, a trail may be designed and actively managed for hiker and equestrian use, although other uses, such as bicycling, might be allowed. Information on a trail’s type, class, use, and related design parameters is applied by land managers to set trail management objectives, which document each trail’s intended purpose and how it is to be managed. Forest Service trails are to be maintained to the agency’s national quality standards for trails, which describe conditions that trail users can expect to encounter and the level of trail quality the Forest Service plans to provide. For example, the standards state that trails and trailsides will be free of litter and human waste. Maintenance to keep trails in good condition may include, among other tasks, clearing encroaching vegetation and fallen trees, as well as repair; preventive maintenance; and replacement of trail signs, water drainage features, trail bridges, and other trail structures. For reporting purposes, the agency divides trail maintenance activities into three categories: (1) miles maintained, (2) miles meeting standard, and (3) miles improved. The Forest Service defines these categories as follows: Miles maintained: includes miles of trail on which at least one maintenance task was performed to quality standards during a given year, indicating that one or more—but not necessarily all—needed maintenance tasks were completed. Miles meeting standard: includes all trail miles that meet quality standards and have been maintained in accordance with a specific maintenance cycle associated with each trail’s management objective.
Maintenance cycles vary by trail; some trails, for example, may be on annual maintenance cycles, and others may be on 3- or 5-year cycles. Thus, a trail can meet the Forest Service’s standards even if it was not maintained in a given year. Miles improved: includes all trail miles where any improvements were made during a given year through activities such as widening the trail and adding or improving trail bridges or trail components, such as barriers, trail surfacing, kiosks, and wildlife viewing platforms. The Forest Service sets performance targets for miles maintained and miles improved, and collates accomplishment data from local units, including national forests or ranger districts, and reports data for each category in the agency’s annual budget justification to Congress. In addition to using its own appropriations and staffing, the Forest Service is authorized to use volunteer labor and nonfederal funds in carrying out trail maintenance activities. Specifically, the Volunteers in the National Forests Act of 1972 authorizes the Forest Service to recruit, train, and accept the services of volunteers for a variety of activities related to national forests, including trail maintenance. The agency may provide these volunteers transportation, uniforms, lodging, and subsistence support. The National Trails System Act also authorizes federal agencies, including the Forest Service, to encourage volunteer and volunteer organization involvement in the planning, development, maintenance, and management of trails, where appropriate. Under this act, volunteer work may include operating programs to organize and supervise volunteer trail-building efforts; conducting trail-related research projects; or educating and training volunteers on methods of trail planning, construction, and maintenance. Agencies are also authorized to provide volunteers with equipment, tools, and technical assistance. According to an agency official, the Forest Service does not track how many volunteer and challenge cost-share agreements are signed at the local level each year. The agency does track national cost-share agreements and reports on them in the agency’s annual budget justification. Challenge cost-share agreements outline the relationship between the Forest Service and a partner organization, identifying an exchange of funds or services between the agency and the partner group. In this type of agreement, the partner organization certifies that it has liability insurance covering its volunteers. Generally, this type of agreement is used with certain organizations having long-standing relationships with the agency, such as youth and conservation corps. In addition to having the authority to accept volunteer labor, the Forest Service has authority to accept and use nonfederal funds to support trail maintenance. The Cooperative Funds Act authorizes the Forest Service to accept money received as contributions toward cooperative work in forest investigations or protection, management, and improvement of the National Forest System. Under the act, the Forest Service may also apply for and receive grants under certain circumstances. The Forest Service has undertaken a large planning effort regarding the use of recreational motor vehicles in national forests and grasslands.
Each national forest is to identify the minimum road system needed for safe and efficient travel and for administration, use, and protection of the National Forest System; roads that are no longer needed are to be decommissioned or considered for other uses, such as for trails. In addition, in 2005, the Forest Service promulgated a regulation known as the travel management rule, which, among other things, requires each national forest and grassland to identify and designate the roads, trails, and areas open to motor vehicles. In deciding whether to designate trails for motor vehicle use, the rule directs the Forest Service to consider, among other criteria, the need for and availability of resources to maintain and administer the trail if it were designated. The Forest Service has more miles of trail than it has been able to maintain, resulting in a long-standing deferred maintenance backlog. Trails not maintained to the Forest Service’s standards may inhibit trail use and harm natural resources, and deferred maintenance can lead to increased maintenance costs in the future. The Forest Service is unable to regularly maintain many of its 158,000 miles of trails. According to Forest Service data, over the last 5 years the agency performed at least some maintenance on an average of about one-third of its trail miles annually, with officials telling us that some trails had not received any maintenance in the last 10 years. For fiscal year 2012, the agency reported that it accomplished at least some maintenance on about 37 percent of its trail miles, or 59,274 miles of trail, exceeding its fiscal year 2012 target of 46,580 miles. Maintenance conducted ranged from minimal maintenance, such as pruning brush, to more extensive maintenance, such as repairing a bridge. In addition to maintenance, the agency improved about 1 percent of its trail miles each year over the last 5 years. Improvements could include, for example, adding platforms or upgrading trail surfaces. According to an agency official, the agency focuses more on conducting needed maintenance than on improving existing trails or constructing new ones. Over the past 5 years, from 17 to 41 percent of overall trail miles met Forest Service standards each year, with 26 percent (or about one-quarter) of trail miles meeting standards in fiscal year 2012. Figure 2 shows mileage totals for various measures relating to maintenance conducted and trail conditions over the past 5 fiscal years. The lack of annual maintenance has led to a persistent deferred trail maintenance backlog, whose value in fiscal year 2012 was estimated by the Forest Service at $314 million. The Forest Service estimated an additional $210 million for that year in three other trail maintenance-related needs: annual maintenance, capital improvement, and operations. Together, these four estimates—deferred maintenance, annual maintenance, capital improvement, and operations—constitute the agency’s annual estimate of its trail maintenance needs, which totaled about $524 million in fiscal year 2012 (see table 1). These estimates, however, may understate the scale of the agency’s maintenance needs.
Estimates are based on trail condition surveys conducted by local Forest Service staff on a random sample of approximately 1 percent of the agency’s trail miles each year—the minimum number of trail miles that the agency has determined is required to generate a statistically valid estimate of its maintenance needs. Some staff we interviewed, however, told us they do not always complete the surveys or ensure that they are providing accurate information for all trails included in the sample. They cited a number of difficulties associated with carrying out the surveys, including lack of available or trained personnel and a cumbersome and inefficient process that requires the surveyor to use a land-measuring wheel to measure the length of the trail and to carry a data dictionary while manually recording trail data. Forest Service headquarters officials told us they were taking steps to streamline the data collection process; these steps are discussed later in this report. Trails not maintained to the Forest Service’s standards have a range of negative effects, including inhibiting trail use and posing potential safety hazards, harming natural resources, and adding to agency costs. Among the 18 national forests included in our review, officials at 15 forests cited various negative effects on visitors; officials from 10 forests specifically cited potential safety hazards as a consequence of deferred maintenance. For example, fallen logs across trails can impede hikers or block horseback, mountain bike, or off-highway vehicle (OHV) riders entirely (see fig. 3). Officials from one forest noted that a safety hazard could arise from their inability to remove standing dead trees along a trail. Officials from another forest said that trail bridges needing replacement could be hazardous (see fig. 4), and officials at two other ranger districts cited concerns that users could get lost attempting to follow overgrown trails. Most forests we visited did not have trails that were closed because of deferred maintenance, but officials from a number of forests noted that they had some trails that were “functionally closed” because they were so overgrown or crowded with downed trees. Officials from several forests indicated that they had installed signs at trailheads warning of potential hazards. Outside the agency, nearly all the stakeholders we interviewed said they were concerned with the condition of the Forest Service’s trail system and the agency’s inability to maintain it adequately. Unmaintained trails can also harm natural resources. For example, according to officials we interviewed at several forests, erosion resulting from unmaintained trails can create ecological damage. Trails with poor or unmaintained drainage features can deposit sediment into streams, degrading water quality and potentially affecting species, such as cutthroat trout. Officials at one forest stated that deferred maintenance had prevented them from conducting trout recovery activities in their forest. Officials from three other forests added that waterlogged or obstructed trails, which force visitors to create alternate routes around obstacles, have negative effects on the visitors, as well as on resources. For example, on one trail, OHV riders created trenches in a meadow to avoid water on the trail (see fig. 5), and, according to an agency official, at $100,000 per mile of trail, fixing the rutting by installing boardwalks to raise the trail above the surrounding meadow would be cost prohibitive.
Another official gave an example of horseback riders’ creating new stream crossings to avoid unsafe bridges. In addition to being potentially dangerous, such new crossings could damage resources by depositing additional sediment in creeks. Delaying maintenance can also increase the effort required to perform routine maintenance and lead to increased maintenance costs in the future, as we have previously reported in other contexts. Forest Service estimates of deferred maintenance needs include the one-time cost to conduct maintenance that has been deferred, but these estimates do not quantify the extent to which costs have increased over time as maintenance continues to be delayed. One forest official gave two examples of circumstances in which deferred maintenance could later increase costs—although the extent to which costs would increase depends on such factors as length of trail segment needing to be restored, distance from trailhead, and soil type—as follows: Water-eroded trenches: If drainage features such as water bars or drainage dips—which direct water away from trails to reduce erosion—are not regularly cleaned out, the drainage features can fail, and water can flow down the trail, creating deep trenches over time (see fig. 6). As a result, expensive maintenance is later needed to restore the trail in its existing location or to reroute it. Inadequate trailside brush removal: If brush alongside trails is not routinely removed, vegetation may grow and eventually take over the whole trail. Such overgrowth is especially common in areas of heavy rainfall, such as the Pacific Northwest and the Southeast, where, officials said, a trail can become overgrown in 5 years or less. Once a trail is overgrown, heavy maintenance is required to chop through roots and reestablish the trail’s tread. Officials from another forest told us that some trails in their forest are maintained so infrequently that by the time crews get to them, so much maintenance has been deferred that the trails need to be completely rebuilt. As one official said, “The longer one waits to fix a problem, the harder it will be to fix.” The Forest Service relies on a combination of internal and external resources to help maintain its trail system. For example, the agency allocates some of its congressionally appropriated funds to support trail maintenance. In addition, the agency received about $100 million under the American Recovery and Reinvestment Act of 2009 for trail maintenance activities. External resources used by the agency for trail maintenance include volunteer labor and funding from federal programs, states, and other sources. The Forest Service uses a variety of internal funding sources to support trail maintenance, according to officials we spoke with. The agency receives annual appropriations from Congress for capital improvements and maintenance, which it allocates to a variety of budget line items, including trails. This trails allocation is the agency’s primary source of funding for trail maintenance activities. In fiscal years 2006 through 2012, the agency’s annual trails allocation ranged from a low of about $73 million to a high of about $88 million, averaging about $80 million (see fig. 7). Not all of this money goes directly toward trail maintenance, however. 
As with other agency programs, a portion of the overall trail maintenance allocation is retained at the Forest Service headquarters level to cover agency overhead costs, before the remainder is distributed to the regions. The regions likewise use a portion of the trails allocation to cover costs at the regional level before in turn distributing funds to individual forests for trail maintenance activities. For fiscal years 2010 through 2012, from 29 to 32 percent of the trails allocation was held at the national level for overhead costs. The regions also reported holding trails allocations at the regional level for purposes such as overhead costs, capital investment projects, and emergency reserves, before the remainder was distributed to forests. Headquarters officials told us that since fiscal year 2007, they have used a historical model to determine how trails allocations should be distributed to each region. According to an agency budget official, the model evaluates three primary elements: the inventory of trails in the region, including trail miles and classes; status of the travel management planning process; and the region’s performance relative to agency priorities. For fiscal years 2011 through 2013, headquarters officials prorated and adjusted regional funding to meet national and region-specific needs identified by the agency’s national and regional recreation directors, such as allocating funds to address an epidemic of mountain pine beetles in the Rocky Mountains. Regional portions of the Forest Service’s trails allocation varied substantially; in fiscal year 2012, for example, after national cost pools were accounted for, regions received trails allocations ranging from $3.1 million to $9.7 million (see table 6 in app. III). After receiving their trails allocations, the regions in turn direct funding to national forests, and, regional officials told us, they take a variety of factors into account when doing so. As is done at the national level, six of the nine regions consider total number of trail miles, and one of these six also considers emerging issues, such as mitigation of mountain pine beetles, when determining annual allocations. Another region recently initiated a new process in which it gives a base administration amount of $60,000 to each forest, plus an additional amount tied to each “user visit” to the forest. Officials from another region noted that their region’s trails allocations to national forests are based on the amount of work forests can accomplish toward regional targets and extra trail needs, such as bridge replacements. Four of the nine regions noted that they hold back a portion of the trails allocation for capital investment projects related to trails. For example, one region funds one large trails capital investment project each year, valued at $125,000 to $250,000. According to regional officials, they established this practice to address high costs related to large capital investment projects, such as complex bridges, because a single large project could deplete a forest’s entire trails allocation otherwise, and no other trail maintenance would be performed. In addition to the Forest Service’s trails allocation, the agency allocates funding to other programs that help support trail maintenance activities. For example, officials from one forest reported that because trails staff also work for recreation programs, part of their salaries are paid from the national forest recreation and wilderness allocation, as well as from the trails allocation.
Officials from this forest said interns and wilderness rangers funded through the national forest recreation and wilderness allocation do trails work in addition to interacting with visitors. The officials said that this practice has been very effective for addressing trail maintenance needs. Officials at other forests reported accomplishing trail maintenance through activities funded by the agency’s integrated resource restoration allocation. This allocation was implemented on a pilot basis in certain regions in fiscal year 2012. Incorporating several existing allocations, the new allocation is intended to support actions to restore or sustain water quality and watershed processes, including road and trail restoration activities. Officials from some forests noted that because unmaintained trails may produce erosion adversely affecting water quality, they had used some of their integrated resource restoration allocation to conduct trail maintenance. Additionally, officials from a number of forests that had experienced wildland fires said they had used burned area emergency response allocations to address some trail maintenance needs on forests and rangelands affected by fires. These funds are available to support emergency response projects on lands damaged by wildfires. In addition, the Forest Service allocated about $100 million of the funding it received under the American Recovery and Reinvestment Act of 2009 (Recovery Act), Pub. L. No. 111-5, to trail maintenance and decommissioning activities, which some forest officials told us they used to help address their trail maintenance backlogs. Recovery Act funds for trail maintenance and decommissioning distributed to the regions ranged from $540,000 to the Intermountain Region to over $19 million to the Pacific Southwest (see table 7 in app. III for information on Recovery Act funds allocated to regions and states). Of the 90 trail maintenance projects supported by Recovery Act funds, agency documents show that 76 addressed deferred maintenance, including 27 that repaired or replaced bridges. For example, Mt. Hood National Forest in Oregon received $1,400,000 to refurbish and repair trails to improve public access and hiker safety, which officials told us they used for a number of activities, including replacing 22 bridges and some signs (see fig. 8). These funds were to be obligated by September 30, 2010, and Forest Service headquarters officials told us that nearly 100 percent of the Forest Service’s total Recovery Act funds had in fact been obligated by the deadline. The Department of Agriculture’s Office of Inspector General has reported on agency trail maintenance-related expenditures under the Recovery Act, including questionable expenditures such as those related to unallowable costs charged by a cooperator. Department of Agriculture, Office of Inspector General, American Recovery and Reinvestment Act: Forest Service Capital Improvement and Maintenance Projects: Trail Maintenance and Decommissioning, 08703-0004-SF (Washington, D.C.: July 3, 2012). In addition to internal resources, Forest Service officials reported using a number of external resources to support trail maintenance efforts, including volunteer labor and funding from other federal programs, states, and other sources. Volunteer labor is a particularly important resource for trail maintenance.
In fiscal year 2012, the Forest Service reported that 1.2 million volunteer labor hours—or the equivalent of 667 full-time volunteers, valued at $26 million—directly supported its trail maintenance activities. By comparison, in that same year, the Forest Service had the equivalent of 666 full-time trails employees. (The arithmetic behind these equivalents is sketched later in this section.) The contributions of volunteers to trail maintenance may be higher than these figures indicate because volunteer hours may be underreported. According to agency documents, Forest Service staff are required to report the number of hours volunteers work on trails, but, according to an agency headquarters official, there are no annual agency targets for working with volunteers, and not all staff find the data valuable. Therefore, Forest Service staff may see little benefit in taking the time to collect and enter volunteer data, and, consequently, not all volunteer hours may be recorded. Moreover, some agency officials and stakeholders told us that not everyone who conducts maintenance on Forest Service trails is under a volunteer or challenge cost-share agreement, and informal contributions are not captured in the agency’s volunteer data. For example, an official from one forest said that some visitors carry saws with them and remove deadfall or other vegetation they come across while using trails. These informal volunteer activities are not technically authorized or recorded in agency data, but an official from one forest said that forest officials “welcome the help.” Regarding external funds, all agency officials we interviewed at forests and ranger districts reported receiving external funding from several sources, including other federal and state agencies. While the Forest Service tracks national grants and challenge cost-share agreements, it does not centrally track external funding received by national forests and is unable to fully quantify how much total external funding the agency has received for trails. One key source of funding for trail maintenance is the Recreational Trails Program. Under this program, the Federal Highway Administration, in consultation with the Secretary of the Interior and the Secretary of Agriculture, makes funds available to states to award for trail maintenance or trail assessments. In fiscal year 2013, $80.2 million was set aside for this program nationally and was apportioned to the states. According to the officials we interviewed, states often grant a portion of these funds to national forests for trail maintenance or construction. Officials from one forest we interviewed stated that they used funds from this source to install signs and reroute trails, and officials from another forest stated that they used the funds for major projects, including trail bridges. A third forest used $150,000 in Recreational Trails Program grant funding, combined with a grant from a local nonprofit, to pay for a professional trails assessment. Officials from many forests we interviewed also told us they received state grants to support maintenance of trails for motor vehicles from their state’s OHV program. Some states use funds collected from OHV registration fees to provide grants to local entities, including national forests, to maintain and improve trails for motorized users.
Officials at one forest stated that the forest’s ranger districts receive approximately $400,000 per year from their state’s OHV registration fees, which the districts use to fund special projects, hire trails crews, and buy supplies to complete trail maintenance on Forest Service land. Officials at a ranger district stated that they received $239,000 per year in state OHV funding, which they used to fund a nine-person crew to maintain trails, among other activities. An official from this ranger district stated that much of the trail maintenance work funded by this grant was used to restore unauthorized routes that OHV users had created. Officials from another forest told us they receive $10,000 to $20,000 per year to maintain snow trails, plus an additional $10,000 to $20,000 per year to support OHV patrols, from their state’s OHV program. Officials from some forests we interviewed stated that they have also relied on funding from Title II of the Secure Rural Schools and Community Self-Determination Act of 2000 to conduct trail maintenance. Under Title II of the Secure Rural Schools Act, projects may be funded for certain land management purposes that benefit federal lands, including projects related to the maintenance or obliteration of Forest Service roads, trails, and infrastructure. Officials from one forest told us they had received from $18,950 to almost $97,000 in Title II funds each year and that their trail maintenance projects have relied heavily on this funding. Another forest reported receiving from $157,000 to $317,000 in Title II funding annually since 2009 for trail maintenance. These funds have allowed the forests to address some of their deferred maintenance backlog, as well as to complete annual maintenance. The authority to obligate funds for these projects is scheduled to expire in 2013, and officials at this forest stated that if they lost the funding, they would no longer be able to fund their seasonal trails crews and would be dependent on volunteers for needed maintenance, adding that some of their less-used trails would “go back to nature.” In our interviews with agency officials, including those at the national, regional, forest, and ranger district levels, we found that national forests and ranger districts combine funding and personnel resources in different ways to accomplish trail maintenance. Officials from a number of ranger districts told us that they rely on a combination of resources to maintain an effective trail maintenance program; as one regional official put it, the trail maintenance program “is held together by Band-Aids and baling wire.” For example, a ranger district in one forest we visited used state grant dollars to pay for maintaining motor vehicle trails while volunteers conducted most maintenance on trails closed to motor vehicles. Officials from another forest told us that they use their trails allocation to pay for their basic trails program, including trails crew salaries and overhead costs, and grants and other external funding to pay for on-the-ground trail maintenance. An official in one district described his district’s trails program as having a “large quiver of financial resources,” which includes the trails allocation, state OHV grant funding, and partnerships with various organizations that contribute funding. Officials from this district also said that they benefit from a statewide trails crew that works on trails open to motor vehicles; the crew is paid for by the state’s OHV program and works on motor vehicle trails on public lands throughout the state.
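As an aside on the volunteer labor figures cited earlier in this section (1.2 million hours, or 667 full-time equivalents valued at $26 million, versus 666 full-time trails employees), the short sketch below makes the underlying arithmetic explicit. The hours-per-full-time-equivalent and dollars-per-hour factors are not stated in the report; they are simply backed out of the reported totals, and the 40,000-hour forest used in the example is hypothetical.

```python
# Back-of-the-envelope conversion of reported volunteer hours into full-time
# equivalents (FTE) and dollar value.  The conversion factors are not stated
# in the report; they are implied by the reported fiscal year 2012 totals and
# are shown only to make the arithmetic explicit.

volunteer_hours = 1_200_000      # reported volunteer labor hours, FY 2012
reported_fte    = 667            # reported full-time volunteer equivalents
reported_value  = 26_000_000     # reported dollar value of that labor

hours_per_fte  = volunteer_hours / reported_fte     # ~1,800 hours per FTE (implied)
value_per_hour = reported_value / volunteer_hours   # ~$21.70 per hour (implied)

def hours_to_fte(hours, per_fte=hours_per_fte):
    """Convert raw volunteer hours to full-time equivalents."""
    return hours / per_fte

def hours_to_value(hours, rate=value_per_hour):
    """Estimate the dollar value of volunteer hours at the implied rate."""
    return hours * rate

# Example: a hypothetical forest reporting 40,000 volunteer hours in a year.
print(f"Implied hours per FTE:  {hours_per_fte:,.0f}")
print(f"Implied value per hour: ${value_per_hour:,.2f}")
print(f"40,000 hours ~ {hours_to_fte(40_000):.1f} FTEs, ~${hours_to_value(40_000):,.0f}")
```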
Additionally, a number of forests we visited stated that they combined funding sources with volunteer or other labor sources to maintain their trails. For example, some forests have local groups who adopt trails or coordinate trail workdays, thereby taking responsibility for trail maintenance on one trail or trail segment. One ranger district we interviewed used its Secure Rural Schools Act Title II funding to pay for a trails crew on one side of the district, while relying entirely on volunteers on the other side. In another ranger district, officials reported that most of the maintenance of trails closed to motor vehicles is done by volunteers and that for heavy maintenance, such as tree removal, the district borrows a machine from another district. Some forests we visited are seeking new ways to complete trail maintenance. For example, officials from several of the forests and ranger districts we interviewed in Arizona, Colorado, and Idaho stated that they sometimes use prison crews because the crews are inexpensive and complete high-quality work. An official from one forest told us that although the forest must pay for the foreman and materials, it pays prisoners only $0.50 per day. As a result, it can generally accomplish maintenance work for 60 percent of what it would ordinarily cost to contract out the work, although an official noted that it takes forest officials more time to manage prison crew contracts than regular contracts. According to agency officials and stakeholders we spoke with, a number of factors complicate the Forest Service’s trail maintenance efforts, including (1) factors associated with the origin and location of trails, (2) some agency policies and procedures, and (3) factors associated with management of volunteers and other external resources. No single factor was identified as the most problematic; the types of factors identified, and the extent to which they complicate trail maintenance, varied across forests and regions. The origin of many system trails as legacy trails, roads converted to trails, or user-created trails, as well as the location of trails in designated wilderness or in areas affected by insect or disease outbreaks, wildland fire, or other natural events, complicate trail maintenance by requiring more frequent and resource-intensive trail maintenance efforts. Factors associated with the origin of many trails present a variety of complications in maintaining them, according to a number of agency officials and stakeholders we interviewed. Many Forest Service trails are legacy trails created for purposes other than recreation, such as access for mining, timber harvesting, or firefighting. Some of these trails were carved straight up steep slopes, leaving erosion-prone trails requiring continual maintenance; even on less-steep slopes, if a trail is built along a hill’s fall line—the natural line down which water flows—it will naturally erode over time. Other trails were built through meadows, resulting in standing water on certain stretches, or in other problematic locations, such as on a stream bank (see fig. 9 for examples of these conditions). In addition, as part of the travel management process, many forests in recent years have converted Forest Service roads into trails open to motorized vehicles. 
Not all forests have been affected by these conversions, but officials from some forests said that conversion of hundreds or even thousands of miles of roads to motor vehicle trails had added new trail maintenance challenges and strained already-limited budgets. Some officials told us they need heavier equipment and engineering expertise to address maintenance issues on many roads converted to trails; for example, as a result of one road-to-trail conversion, the trail system in one forest we visited had gained a two-lane car bridge across a wide river (see fig. 10). Further, unauthorized trails created by users, which are not part of the agency’s official trail system, take time and resources away from maintaining system trails because officials must address safety and resource concerns associated with the trails, according to officials we interviewed. Some officials told us their forests have hundreds of miles of user-created trails; in some areas, more of these trails exist than system trails. Many legacy and user-created trails are not sustainable over the long term, according to recent research and agency officials and stakeholders. These trails occupy terrain that is subject to severe erosion, require considerable ongoing maintenance, and do not meet users’ needs without ecological damage. As a result, such trails require a disproportionate share of resources to maintain—akin to bandaging a wound that will never heal, in the words of one official. For example, one stakeholder told us about a Forest Service bridge to a waterfall, whose railing had been replaced 10-15 times in the past 20 years because the bridge was situated where, during severe weather, water would rush over a nearby cliff and rip out the handrail. The stakeholder commented that relocating the bridge would be more sustainable in the long term than continually repairing it. Similarly, officials from a Pacific Northwest forest told us that some of their forest’s trails were built with major design flaws, such as trail segments where snow never melts. These officials said they have considered rerouting such sections to make them more sustainable, but doing so would require environmental review under the National Environmental Policy Act, which, they said, would be expensive; on the other hand, not going through this process contributes to the agency’s backlog of deferred maintenance. Officials and stakeholders emphasized that despite the up-front costs of rerouting and reconstructing unsustainable trails, maintaining well-designed trails is much more cost-effective over the long term. For example, one official noted that the majority of the agency’s trail maintenance costs are related to moving trails crews and equipment to the trails that need maintenance and that well-designed trails cost less to maintain in the long term because crews do not have to visit them as often. Section 4 of the Wilderness Act prohibits the construction of temporary roads or structures, as well as the use of motor vehicles, motorized equipment, and other forms of mechanical transport in wilderness areas, unless such construction or use is necessary to meet the minimum requirements for administration of the area, including for emergencies involving health and safety. Generally, the land management agencies have regulations that address the emergency and administrative use of motorized equipment and installations in the wilderness areas they manage.
In contrast, many officials and stakeholders we interviewed said that the general prohibition against power tools is not a complicating factor because crosscut saws are as efficient or nearly as efficient as chain saws, and chain saws are heavier to transport. Several officials told us that accessing wilderness trails, often located deep in the backcountry, requires considerable time and effort. For example, officials from one forest said that it may take hours to drive to a wilderness trailhead, take 1 to 2 days to hike to the site needing maintenance, and require crews to stay overnight—adding to the cost and complexity of backcountry trail maintenance. The Forest Service’s trail maintenance efforts are also complicated when trails are located in areas affected by insect or disease outbreaks, wildland fire, and other natural events. National forests in some western states have suffered heavily from a mountain pine beetle epidemic, which has left many dead or dying trees that are starting to fall, sometimes across or near trails. Officials from one forest told us their forest’s entire trails program does little beyond removing hazardous trees because beetles have killed so many trees. Officials in other parts of the country told us that their trail maintenance programs were being affected by other insects, such as the hemlock woolly adelgid, or by diseases, such as laminated root rot in Douglas-fir trees. Wildland fire also complicates trail maintenance. According to officials, a number of steps may be needed before a trail can be reopened after a wildland fire, such as removing hazardous trees, relocating drainage features, and stabilizing rocks. In addition, a number of forest officials told us that other natural events, such as tornadoes, hurricanes, floods, and windstorms, sometimes complicated their trail maintenance. For example, in the Pacific Northwest, officials from two forests told us that storms may cause flooding and landslides that easily wash out trails because of the region’s loose volcanic soils. Additional factors complicating the Forest Service’s trail maintenance activities include the absence of a career path or training program for trails staff, which can limit agency expertise; burdensome data collection efforts; and certain administrative procedures that take time away from conducting maintenance on the ground. Career path, training. Many officials noted that the Forest Service has no career path or training programs for trails staff, which makes it difficult for the agency to develop and retain professional expertise and leadership for the trails program. For example, because full-time, permanent trails positions do not always exist at the district or forest levels, the agency often hires temporary or permanent-seasonal employees to maintain trails. These employees, however, often work for only one or two summers, requiring local officials to hire and train new trails employees the following season. Several officials and stakeholders told us that because of retirements and attrition, the agency has lost almost all of its trails expertise in recent years, and other officials noted that certain technical skills—such as using crosscut saws, working with horses, or blasting rock—are becoming more difficult to find when seeking new trails employees. The Forest Service currently has no national, standardized training for these skills. (Staff training, retention, and expertise are discussed in more detail later in this report.) Collecting trail condition data. 
Many local trail managers told us that the effort needed to collect trail condition data each year is burdensome and takes time away from conducting on-the-ground trail maintenance—an important consideration given the limited resources available to them. Many also said they do not use the collected information for making decisions, such as setting priorities, at the local level and use it only for upward reporting. Agency headquarters officials, however, emphasized to us the importance of data collection for estimating trail maintenance costs nationwide, as well as for providing information on trail conditions to local officials. Administrative procedures. Officials and stakeholders also identified a number of administrative and other factors that complicate trail maintenance, some of which are outside of the agency’s control: Efforts to reduce travel costs. Many officials said that agency efforts to reduce travel costs have hindered their ability to complete trail maintenance on the ground, especially on remote trails. Several officials told us that trails crews who in the past may have been allowed to spend the night near a work site must now travel back and forth each day to avoid food or lodging costs. As a result, more time is spent transporting crews—up to several hours each way—and less time is spent completing work on the ground. Environmental review processes. Other officials and stakeholders said that analyses required under the National Environmental Policy Act can be expensive and time-consuming, thereby detracting from actual maintenance activities. Routine trail maintenance does not require detailed environmental analysis, but the agency sometimes performs such an analysis for new trail construction, trail relocations, and other substantial trail work. Budget timing. The Forest Service does not always have a final budget in place for a given fiscal year until spring, which some officials said affects their ability to plan and execute trail maintenance. For example, one official said, they cannot sign and execute contracts until they have an approved budget, which may happen late in the fiscal year when contractors are already committed to other projects. Also, officials from one forest told us that because of their forest’s high elevation and persistent snowpack, they can work only during a 6-to-8-week window in late summer. Timing of the budget, along with a short season, can make it hard to complete trail maintenance. Although volunteers and other external resources were repeatedly cited as important to the agency’s trail maintenance efforts, officials and stakeholders we interviewed identified a number of complications related to working with volunteers, including insufficient agency emphasis on managing volunteers; the time and effort it takes to coordinate, train, and supervise them, which decreases the time officials can spend conducting maintenance; safety and liability concerns that limit local use of volunteers; and the tenuous nature of partnerships. In addition, officials noted that managing other external resources for trail maintenance, such as time required to research and apply for grants, can detract from performing maintenance on the ground. Emphasis on volunteers. According to some agency officials and stakeholders, the Forest Service recognizes but does not always sufficiently emphasize managing volunteers when it hires and trains trails employees. 
Congress and the executive branch, including the Forest Service, have recognized the importance of volunteers to complement the agency’s work in trail maintenance and other activities. For example, Executive Order 13195, issued in 2001, directs agencies to engage volunteers in all aspects of trail planning, development, maintenance, management, and education, as outlined in the National Trails System Act. The Forest Service has also emphasized the importance of volunteers in the chapter on volunteer management in the Forest Service Manual. Even so, at the forest and district levels, volunteer management is generally a collateral duty, and collaboration with and management of volunteers are not clear expectations of trails staff. One official pointed out that it takes the “right type of Forest Service employee to build partnerships,” stating that the agency should be more diligent in hiring trails coordinators with collaboration skills. Moreover, some officials and stakeholders pointed out that the Forest Service provides limited training to staff who manage volunteers. For example, one official noted, the agency conducts quarterly web-based workshops on working with volunteers but offers little additional training to field staff who work with volunteers. (Volunteer management is discussed in more detail later in this report.) Coordinating, training, and supervising volunteers. Many Forest Service officials told us, and we have previously found, that coordinating, training, and supervising volunteers take effort, as well as time away from other tasks; in the words of many officials we spoke with, “Volunteers aren’t free.” Officials from the majority of forests we visited told us that they did not have sufficient staff or resources to effectively manage additional volunteers; three forests reported turning away volunteers as a result. In contrast, officials from other forests we visited told us that they never turned away volunteers and had the capacity to manage more volunteers, particularly when groups are skilled and can perform maintenance on their own. On the other hand, some groups are not capable of operating without supervision; several officials said that undirected or unsupervised volunteers or youth crews may damage trails and that Forest Service crews sometimes have to revisit volunteer-maintained trails to repair volunteer-caused damage or complete maintenance not done to Forest Service standards. Safety or liability concerns. Officials and stakeholders also told us that factors related to safety and liability sometimes complicate working with volunteers. For example, some forests do not allow volunteers to use chain saws, while other forests vary in their certification requirements for volunteers to use equipment such as crosscut saws or chain saws. Officials and stakeholders told us that some forests require a 40-hour training session to use chain saws, while other forests require a 1-day or weekend course. Moreover, some but not all forests accept saw certifications awarded by other forests. Many officials told us that safety is a top priority, and managers are sometimes hesitant to allow volunteers to use equipment if they risk being hurt and filing a workers’ compensation claim. Volunteers are considered federal employees under the Volunteers in the National Forests Act for tort or workers’ compensation claims.
Since workers’ compensation is generally covered by local units, one claim may consume a local unit’s entire annual trails allocation, according to some officials and stakeholders. Tenuous nature of partnerships. Some officials told us that relationships with partners can be tenuous, which can make volunteers less willing to work with the agency. In some cases, volunteer groups will support the Forest Service as long as the agency is supporting their values but can turn into adversaries if the agency makes a decision they do not agree with—for example, if, to protect natural resources, the agency closes a trail volunteers like. Volunteers also may develop a sense of trail ownership. Such pride of ownership may confer an advantage as volunteers try to do a good job maintaining trails, but, according to officials, it can also present challenges when volunteer groups want to influence agency decisions about trail maintenance priorities. Applying for and managing external funding. Officials we interviewed also observed that, as in working with volunteers, it takes time to apply for external funding and manage requirements associated with this funding, which allows less time for actual trail maintenance. For example, officials told us, it takes time and effort to seek and apply for external grant funding and to meet requirements for such outside funding once received. Officials from one forest said they could not at the time manage additional grants because they did not have the time or staff, and officials from other forests said it is hard to keep up with reporting or other administrative requirements for trails projects funded with external resources. Nevertheless, some officials told us that even with the additional effort needed to comply with these requirements, external funding is critical to their trail maintenance efforts. Agency officials and stakeholders identified numerous options aimed at improving Forest Service trail maintenance, which generally fell into the following categories: (1) assessing the sustainability of the trail system, (2) improving certain policies and procedures associated with the Forest Service’s management of the trails program, and (3) better using volunteers and other external resources. Many officials at all levels of the agency, as well as some stakeholders we met with, stated that the Forest Service’s trails program might benefit if the agency were to systematically assess its trail system. In 2010, the Forest Service issued a document titled A Framework for Sustainable Recreation, in which the agency presented a strategic vision and guiding principles to achieve sustainability in all aspects of its recreation program, including trails. As part of this vision, the Framework noted the importance of the Forest Service’s evaluating its infrastructure investments and program costs to identify “the gap between program needs and available resources . . . along with options for closing the gap.” Many officials and stakeholders we interviewed told us that trail systems should be “right-sized”; that is, units should assess their trail inventories in light of the resources available for maintenance and take steps, such as closing trails or portions of trails or reducing the maintenance on certain trails, so as to narrow the gap between funding and maintenance needs consistent with the Framework. 
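The sketch below is a purely hypothetical illustration of the “right-sizing” idea described above: comparing the cost of maintaining an inventory of trails to standard against the funds actually available, and identifying which trails might stay on a full maintenance cycle within that budget. The trails, costs, visit counts, and the visits-per-dollar prioritization rule are all invented for illustration; this is not the Forest Service’s process.

```python
# Hypothetical illustration of "right-sizing": size up the gap between what it
# would cost to maintain every trail to standard and the funds available, then
# show which trails a unit might keep on a full maintenance cycle.

from dataclasses import dataclass

@dataclass
class Trail:
    name: str
    miles: float
    annual_cost: float   # estimated cost to maintain to standard each year
    annual_visits: int   # estimated recreation visits

inventory = [
    Trail("Ridge Loop",     12.0, 18_000,  9_000),
    Trail("Creekside",       6.5, 30_000, 14_000),
    Trail("Old Mine Spur",   9.0, 22_000,  1_200),
    Trail("Wilderness Way", 25.0, 55_000,  4_500),
]
available_funds = 70_000

total_need = sum(t.annual_cost for t in inventory)
gap = total_need - available_funds
print(f"Annual need ${total_need:,}, available ${available_funds:,}, gap ${gap:,}")

# Keep the trails that serve the most visitors per maintenance dollar until the
# budget runs out; the remainder would be candidates for reduced maintenance,
# rerouting, or decommissioning under a right-sizing review.
funded, remaining = [], available_funds
for trail in sorted(inventory, key=lambda t: t.annual_visits / t.annual_cost, reverse=True):
    if trail.annual_cost <= remaining:
        funded.append(trail.name)
        remaining -= trail.annual_cost

print("Maintained to standard:", ", ".join(funded))
```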
One approach that the Forest Service used in the mid-2000s was its recreation facilities analysis process, which assessed recreation sites—such as campgrounds, day-use sites, and some trailheads (but not trails themselves)—to "assist forests in creating a sustainable program that aligns recreation sites with visitors' desires, expectations, and use." This process resulted in relatively few decommissioned sites, but a headquarters official told us the process benefited local units by helping them identify a variety of tools to address the gap between program needs and available resources. Similar to this past approach, the Forest Service is developing a process intended to help forests achieve a trail system that meets community needs, does not harm natural resources, and can be maintained with available resources. Headquarters officials told us the agency had not yet determined how it would implement the process or the time frames for doing so. Trail assessments could also improve the physical sustainability of individual trails. Numerous stakeholders and officials noted the importance of sustainability in the trail system, stating that redesigning legacy trails and relocating unsustainable trail segments—through rerouting steep segments to reduce erosion, for example—would substantially reduce maintenance work over the long term. Some of these officials and stakeholders acknowledged the potential for considerable up-front costs to relocate unsustainable trail segments but stated that long-term maintenance costs would be significantly lower for well-designed trails. One Interior agency official said that a potential strategy would be to address unsustainable trails in feasible portions by undertaking trail reroutes and redesigns on a certain percentage of the trail system each year. For example, by annually addressing 5 percent of the system, the agency would have "solved its trail problems" within 20 years, according to this official, and be better positioned to address needed yearly trail maintenance. Some officials indicated that training Forest Service employees on sustainable trail design might also improve trail sustainability, noting that agency field staff may not have a full understanding of how to assess trails for sustainability or how to redesign or relocate unsustainable trails because the agency has provided little guidance or training on this. As noted, the agency does not have a robust trails training program, and while the concept of trail sustainability is discussed in some of the agency's guidance on trail design, little hands-on training is provided to show field staff how to implement this guidance on the ground. Although the Forest Service offers little training on assessing sustainability, some forests we visited had already taken steps to assess the sustainability of their trails and to identify and implement opportunities to reroute or otherwise improve them, consistent with the Framework. For example, one forest surveyed 250 miles of trail and is analyzing data from its assessment to identify unsustainable trails and set priorities for work, including identifying trails to add, decommission, or reroute. Other forests we visited were taking other approaches toward more sustainable trails. For example, one forest assessed its road and trail systems together—rather than focusing on just roads and trails used by motorized vehicles—as part of its travel management planning.
The forest has undertaken a separate analysis to look at the efficiency of its current approach to managing and maintaining its trail system. Another forest we visited was conducting systematic assessments on particular trails or trail systems throughout the forest; officials told us they had decided not to spend money on unsustainable trails and were actively relocating these trails. This forest had assessed all of its OHV trails, for example, and, on the basis of this assessment, had repaired and rerouted certain trails and implemented seasonal and weather-related closures. On the other hand, not all forests have assessed the sustainability of their trails or identified opportunities for improvement, and because such assessments—and subsequent changes to trail systems—can be costly, time-consuming, and contentious, the agency has not undertaken or promoted such assessments nationwide. Without doing so, however, the agency may continue to devote substantial resources to maintaining inadequately designed trails. For example, officials we spoke with at one forest were in the process of rebuilding trails destroyed by a fire and told us that rebuilding the approximately 300 miles of trail affected by the fire would cost almost $750,000. They had not, however, assessed the sustainability of those trails to determine the extent to which rerouting unsustainable trail sections now would save the agency funding and resources later. Some officials and stakeholders also identified a number of options related to improving Forest Service policies and procedures to better manage the trails program, including the following: Implement standardized trails training. Some officials and stakeholders stated that the agency would benefit from a training curriculum about basic trail design, construction, and maintenance—to go beyond the sustainability training noted earlier—aimed at providing basic field skills to staff responsible for trails. The agency does not have a robust trails training program, and a number of officials and stakeholders said that training was needed on basic field skills. The Framework states that the agency is to train staff and develop needed skills. Agency headquarters officials agreed that training is important and would be best conducted in the field, but they noted that because providing in-person training in the field is expensive, the agency has shifted heavily to web-based training. Nevertheless, given the nature of trail maintenance work, some officials emphasized to us the importance of conducting such training in person. Without in-person training, agency staff may not have the skills they need to perform on-the-ground trail maintenance activities. Improve expertise by recruiting and retaining skilled trails employees. The Framework calls for the agency to improve its expertise by recruiting and retaining staff with needed skills. As noted earlier, however, the agency has had difficulty hiring and retaining skilled trails employees. Many officials stated that taking steps to hire and retain skilled trails employees would improve trail maintenance; this option was of particular interest to a number of regional representatives we spoke with. For example, officials from one forest said that revised job descriptions might help recruit trails employees who are more knowledgeable about trail maintenance and management.
Further, according to some officials, if the agency could create incentives for skilled trails employees, such as hiring them at higher pay or having greater opportunities for promotion, they might be more likely to stay in trails positions, and the agency could retain their expertise. Other officials said that it would also be helpful if the agency’s hiring policies made it easier to move temporary workers into permanent positions. Without policies and practices that promote hiring and retention of skilled employees, the agency cannot ensure that it has the needed expertise to maintain trails. Improve data collection practices. Many officials told us that the agency could streamline or otherwise improve practices for collecting trails condition data to make the process less burdensome and the data more useful. Agency officials, acknowledging that the surveys are time-consuming, said they are pursuing an initiative to streamline how the data are collected—an initiative that has been under way since 2006. Specifically, one official told us the agency intends to replace the current system—which requires staff to fill out paper surveys while on the trail and then manually enter the information into an agency database—with a process for electronic field data collection that relies on handheld tablet computers, synchronized with a wireless distance-measuring device, which automatically upload collected data to the database. Officials told us they hope to introduce the new process in 2013 or 2014. Assess how the agency distributes trails funding. Some officials told us that the agency may benefit from changing the way it distributes its trails allocation funds to regions and forests. Some officials told us that trails allocation funding should be linked to the number of visitors forests receive. Other officials disagreed, however, noting that the agency’s multiple-use mission is to accommodate different recreational experiences, including solitude and a wilderness experience on little-used trails. Moreover, some expressed concern about the reliability of agency data on visitor use and relying on these data as the basis for distributing funds. The trails program currently has a working group composed of regional trail coordinators who are evaluating the national process for distributing trails allocation funds and potential alternatives, including reviewing existing distribution models used by regions to see if any might be applicable at the national level. A headquarters official noted that, since each region experiences different circumstances, the exercise has been difficult because the working group is finding that one model does not necessarily fit the needs of the entire country. This official added that it is not clear when or if a new model will be applied to distribute trail maintenance funding but said that the agency is aiming to implement a new process in fiscal year 2014. Improve the sharing of best practices across the agency. Some officials and stakeholders told us that the agency could improve how it shares best practices or success stories related to trail maintenance across the agency. For example, officials from one forest said they had few opportunities to share with other forests what they had learned over the last few years about designing sustainable trails. Recognizing the considerable time volunteers donate to trail maintenance efforts, some officials and stakeholders stated that improving management of volunteers would make working with them easier and more effective. 
This option is consistent with the vision the agency has presented in the Forest Service Manual, which articulates the agency’s goal to recruit, train, and use the services of volunteers to complement its trail maintenance and other work. Officials and stakeholders identified a number of ways to enhance the agency’s use of volunteers and partnerships, including the following: Make volunteer and partnership management a clear expectation for trails staff, and increase training. As noted earlier, even with the agency’s emphasis on using volunteers—articulated in the Forest Service Manual—the agency has not established collaboration with and management of volunteers as clear expectations for trails staff responsible for working with volunteers, and training in this area is limited. Given the value of volunteer hours devoted to Forest Service trail maintenance in fiscal year 2012—equivalent to nearly one-third of the agency’s trails allocation—some officials and stakeholders said that making collaboration with and management of volunteers clear expectations for trails staff (e.g., through performance evaluation standards) and offering relevant training could enhance the agency’s management of volunteers, as well as better reflect the central role that volunteers play in trail maintenance. Other officials said that the agency should consider hiring dedicated volunteer coordinators at the forest and ranger district levels. A headquarters official told us that the Forest Service has been slow to update its policies and practices to reflect its increasing reliance on volunteers, in part because the agency has not made it a priority—as evidenced by the agency’s treatment of volunteer management as a collateral duty. Nevertheless, without making collaboration with and management of volunteers a clear expectation for trails staff who work with volunteers and offering relevant training, the agency cannot be sure it is fully capitalizing on the assistance volunteers can offer. Improve consistency of volunteer management policies, including certifications. A number of officials and stakeholders said that making agency policies, regulations, and certification processes more uniform would make it easier for people to volunteer for the agency. Several brought up the issue of inconsistent saw certification requirements across districts and forests, stating that having consistent procedures for certifying volunteers would make it easier for volunteers to help maintain trails in more than one forest. To address this issue, the agency is developing a proposed directive to provide national guidance for training and certification in saw use, which would apply to both crosscut saws and chain saws. A headquarters official said that the agency originally intended to have the new saw directive finalized by summer of 2013 but that it now planned to seek public comment on the proposed directive in fall 2013 before it is made final. Address liability concerns. A number of officials and stakeholders said that changing how the agency handles workers’ compensation claims may increase local volunteer participation. To overcome local officials’ reluctance to use volunteers for fear that a workers’ compensation claim might consume their entire trails allocation, some officials suggested that having a national funding source to pay workers’ compensation claims would make local managers more willing to use volunteers for trail maintenance. 
A headquarters official told us that the Forest Service had explored moving to a national funding source in the past but had rejected the possibility because of the agency's interest in diverting less funding to cost pools overall. In addition, some officials and stakeholders said that changing how liability is handled in challenge cost-share agreements—under which liability generally rests with partner organizations—might increase volunteer participation if the agency were to assume this liability, because more organizations would be willing to volunteer under these agreements. A headquarters official said that the Forest Service is considering such changes, which would potentially require new legislation, so that the agency could take on liability for volunteers under both volunteer and challenge cost-share agreements. In addition, officials told us, the agency is preparing guidance on using both challenge cost-share and volunteer agreements simultaneously to address liability concerns in certain situations. Some agency officials and stakeholders also identified ways they believe the Forest Service could better leverage external funds. For example, some officials and stakeholders said, forests could seek more grants to be used for trail maintenance, and officials from one forest said that units might benefit from hiring full-time grant administrators, who could help identify and administer available grants. Other officials said it would be helpful to have a headquarters official coordinate and share grant opportunities and new funding sources with field units. Headquarters officials acknowledged they could improve how they coordinate and distribute information on available funding to the field, but they also told us that the agency does not have enough staff to dedicate someone to looking for and informing field units of grant opportunities. Charged with managing and maintaining some 158,000 miles of trails across the National Forest System, the Forest Service largely succeeds in offering trail users recreational opportunities ranging from solitary wilderness hikes to OHV access. The agency continually brings together personnel, equipment, and funding from numerous internal and external resources to maintain trails—and indeed, the forests we visited were generally able to maintain their most popular trails and address safety concerns. Nevertheless, maintenance issues abound, and given the magnitude of the trail system, including many unsustainable trails, and limited available resources, the agency is facing a maintenance problem it is unlikely to completely resolve. Without conducting an analysis of trails program needs and available resources, consistent with the agency's Framework for Sustainable Recreation, and assessing potential ways to narrow the gap between them, the agency is likely to continue operating in a reactive mode, addressing short-term maintenance needs without a long-term understanding of how to better address the issue. The agency has recognized the importance of trail sustainability in reducing needed maintenance—for example, through its Framework—but it has not yet translated this emphasis into action in the form of training on sustainable design or local assessments that reevaluate both the uses of trails and their physical condition with long-term sustainability in mind.
Even with such steps toward sustainability in the long term, however, certain agency policies and procedures may still make it difficult to keep up with maintenance needs in the short term. For example, the cumbersome approach to collecting and recording trail condition data, which the agency has been trying to streamline through electronic data collection since 2006, can hinder maintenance activities. Further, without policies that help in recruitment and retention of skilled employees—or basic, standardized in-person employee training on trail skills and on-the-ground maintenance—the agency may find itself without sufficient expertise to conduct needed maintenance. Furthermore, even with its extensive reliance on volunteers and the vision set forth in the Forest Service Manual, the agency continues to assign its employees volunteer management as a collateral duty and has not made collaboration with and management of volunteers clear expectations of trails staff or offered substantial relevant training. Without short- as well as long-term steps to adjust and streamline such policies and procedures, the Forest Service is likely to continue falling behind in maintaining its trails, spending scarce resources on unsustainable trails and presiding over degraded visitor experiences and natural resources. To enhance the overall sustainability of the Forest Service's trail system, consistent with the vision articulated in A Framework for Sustainable Recreation, and to reduce the trail maintenance backlog, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take several actions to improve the agency's trail maintenance approach in both the short and long terms. To improve agency management of its trails program in the long term, particularly in light of the gap between program needs and available resources, the agency should take the following two actions consistent with the agency's Framework for Sustainable Recreation: In line with the Framework's emphasis on evaluating infrastructure investments and program costs, (1) ensure that the agency's management of its trails program includes an analysis of trails program needs and available resources and (2) develop options for narrowing the gap between program needs and resources. In line with the Framework's emphasis on sustainability, and to enhance trail sustainability over the long term, (1) improve guidance and increase training on sustainable trail design and (2) when appropriate, begin systematic, unit-level trail assessments that reevaluate trails with long-term sustainability as a goal. To improve the agency's ability to keep up with its maintenance goals in the short term and reduce its maintenance backlog, the agency should take the following two actions: Take steps to improve policies and procedures related to trail maintenance. Such steps should include implementing electronic collection of trail condition data and offering more standardized in-person training on trail skills and on-the-ground maintenance. They could also include, for example, changing policies and practices to improve recruitment and retention of employees with trail expertise. Recognizing the importance of volunteers for trail maintenance, take steps to improve management of volunteers, including by ensuring that collaboration with and management of volunteers are clear expectations of trails staff and offering relevant training. We provided a copy of this report for review and comment to the Department of Agriculture.
In written comments responding on behalf of the Department of Agriculture, which are reproduced in appendix IV, the Forest Service generally agreed with our findings and recommendations. The Forest Service emphasized its commitment to implementing its Framework for Sustainable Recreation, including improved guidance and training on sustainable trail design. It also stated its commitment to improving policies and procedures related to trail maintenance, including implementing electronic collection of trail condition data, exploring options to improve recruitment and retention of employees with trails expertise, and improving collaboration with and management of volunteers. The agency noted, however, that its ability to take action in some of these areas, such as providing in-person training on trails skills, may be limited by budgetary constraints. The Forest Service also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Agriculture, the Chief of the Forest Service, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objectives were to examine (1) the extent to which the Forest Service is meeting trail maintenance needs, and effects associated with any maintenance not done; (2) resources, including funding and labor, that the agency employs to maintain its trails; (3) factors, if any, complicating agency efforts to maintain its trails; and (4) options, if any, that could improve the agency’s trail maintenance efforts. To conduct this work, we reviewed relevant laws and agency documents, including agency handbooks and other guidance. We interviewed Forest Service officials in headquarters and received information from all nine regions about trail maintenance needs and effects associated with any deferred maintenance. We also interviewed officials from a nonprobability sample of 18 national forests located in five of the nine Forest Service regions; we visited 16 of these forests and interviewed officials from 2 more. (Table 2 shows the forests included in our review.) During these visits, we held semistructured interviews with officials to learn about their trail maintenance programs; we also examined trails on which maintenance had been deferred, as well as trails that were well maintained. We selected these forests to represent variation in geography, proximity of forests to urban and rural areas, trail mileage, and type and intensity of trail use, although findings from this selection of forests are not generalizable to the entire population of national forests. We obtained data on the Forest Service’s trail inventory for fiscal years 2008 to 2012 from the agency’s Infrastructure database (known as Infra). To assess the reliability of the data, we reviewed relevant documentation and interviewed agency officials knowledgeable about the data. We determined that these data were sufficiently reliable for the purposes of this report. To evaluate the resources the Forest Service employs to maintain its trails, we reviewed agency budget documents for fiscal years 2006 to 2012. 
We also collected and reviewed evidence from national, regional, forest, and ranger district officials about how funds are allocated for trail maintenance activities. In addition, we examined the agency’s use of external resources in conducting trail maintenance and also the laws, regulations, and agency guidance governing the Forest Service’s authority to use these resources. During our visits to national forests, we discussed and reviewed documentation related to their use of external funds for trail maintenance. We also interviewed an official from the U.S. Department of Transportation’s Federal Highway Administration to learn more about the Recreational Trails Program, as well as an official from the Colorado Department of Natural Resources’ Parks and Wildlife division to learn about the state’s grants program for trails used by OHVs. To evaluate the extent to which volunteers maintain trails, we reviewed agency volunteer data available for the most recent fiscal years, 2011 and 2012. To assess the reliability of the data, we reviewed relevant documentation and interviewed agency officials knowledgeable about the data; we found these data to be sufficiently reliable for the purposes of this report. We also interviewed headquarters officials to discuss volunteer management policies and officials at regions and forests to discuss the benefits and drawbacks of using volunteers to maintain trails. We also conducted semistructured interviews with representatives from a nonprobability sample of 16 nongovernmental organizations about their organizations’ efforts to help the Forest Service maintain trails and about their views on Forest Service trail conditions. We selected these organizations to represent a variety of trail user, conservation, and industry perspectives. The views of representatives from these organizations are not generalizable to other nongovernmental organizations, but they provided various perspectives on the Forest Service’s trail maintenance efforts. (Table 3 lists the organizations we interviewed.) To obtain information on any factors complicating trail maintenance and what options, if any, could improve it, we asked agency officials at all levels about both topics. Further, we convened a structured discussion group to gather perspectives from knowledgeable Forest Service officials representing all nine regions regarding challenges to maintaining trails and options for improving trail maintenance. We convened the discussion group via conference call and used web-based software to compile participants’ comments. In our interviews with nongovernmental organizations, we asked for their views on challenges faced by the agency in performing trail maintenance and their views on any options for improvement. We also interviewed officials from three other federal land management agencies—the Department of the Interior’s Bureau of Land Management, Fish and Wildlife Service, and National Park Service—to learn about these agencies’ trail maintenance programs. We interviewed an official from Interior’s U.S. Geological Survey to learn about current research on trail design. We conducted this performance audit from June 2012 to June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The tables in this appendix provide information on the Forest Service’s National Forest System trail inventory from the agency’s Infra database. Table 4 shows, for each region, total trail miles, wilderness miles, miles open to motorized vehicles, and miles closed to motorized vehicles. It also provides estimates of annual visitors to each region. Table 5 shows trail miles by trail class for each region. The tables in this appendix provide information on the Forest Service’s trails allocations. Table 6 provides trails allocation data by region for fiscal years 2006 to 2012. Table 7 describes American Recovery and Reinvestment Act funding, by region and state, to support trail maintenance and decommissioning projects. In addition to the individual named above, Steve Gaty (Assistant Director), Ellen W. Chu, Tanya Doriss, Richard P. Johnson, Lesley Rinner, and Elizabeth Wood made key contributions to this report. Important contributions were also made by Kurt Burgeson, Justin Fisher, Carol Henn, Paul Kinney, Dan Royer, and Kiki Theodoropoulos. | The Forest Service manages more than 158,000 miles of recreational trails offering hikers, horseback riders, cyclists, off-highway-vehicle drivers, and others access to national forests. To remain safe and usable, these trails need regular maintenance, such as removal of downed trees or bridge repairs. GAO was asked to review the agency's trail maintenance activities. This report examines (1) the extent to which the Forest Service is meeting trail maintenance needs, and effects associated with any maintenance not done; (2) resources, including funding and labor, that the agency employs to maintain its trails; (3) factors, if any, complicating agency efforts to maintain its trails; and (4) options, if any, that could improve the agency's trail maintenance efforts. GAO reviewed laws and agency documents; analyzed Forest Service budget data for fiscal years 2006-2012 and trails data for fiscal years 2008-2012; and interviewed agency officials and representatives of 16 stakeholder groups selected to represent trail users, conservation, and industry. Their views are not generalizable. The Forest Service has more miles of trail than it has been able to maintain, resulting in a persistent maintenance backlog with a range of negative effects. In fiscal year 2012, the agency reported that it accomplished at least some maintenance on about 37 percent of its 158,000 trail miles and that about one-quarter of its trail miles met the agency's standards. The Forest Service estimated the value of its trail maintenance backlog to be $314 million in fiscal year 2012, with an additional $210 million for annual maintenance, capital improvement, and operations. Trails not maintained to quality standards have a range of negative effects, such as inhibiting trail use and harming natural resources, and deferring maintenance can add to maintenance costs. The Forest Service relies on a combination of internal and external resources to help maintain its trail system. Internal resources include about $80 million allocated annually for trail maintenance activities plus funding for other agency programs that involve trails. External resources include volunteer labor, which the Forest Service valued at $26 million in fiscal year 2012, and funding from federal programs, states, and other sources. 
Collectively, agency officials and stakeholders GAO spoke with identified a number of factors complicating the Forest Service's trail maintenance efforts, including (1) factors associated with the origin and location of trails, (2) some agency policies and procedures, and (3) factors associated with the management of volunteers and other external resources. For example, many trails were created for purposes other than recreation, such as access for timber harvesting or firefighting, and some were built on steep slopes, leaving unsustainable, erosion-prone trails that require continual maintenance. In addition, certain agency policies and procedures complicate trail maintenance efforts, such as the agency's lack of standardized training in trails field skills, which limits agency expertise. Further, while volunteers are important to the agency's trail maintenance efforts, managing volunteers can decrease the time officials can spend performing on-the-ground maintenance. Agency officials and stakeholders GAO interviewed collectively identified numerous options to improve Forest Service trail maintenance, including (1) assessing the sustainability of the trail system, (2) improving agency policies and procedures, and (3) improving management of volunteers and other external resources. In a 2010 document titled A Framework for Sustainable Recreation, the Forest Service noted the importance of analyzing recreation program needs and available resources and assessing potential ways to narrow the gap between them, which the agency has not yet done for its trails. Many officials and stakeholders suggested that the agency systematically assess its trail system to identify ways to reduce the gap and improve trail system sustainability. They also identified options for improving management of volunteers. For example, while the agency's goal in the Forest Service Manual is to use volunteers, the agency has not established collaboration with and management of volunteers who help maintain trails as clear expectations for trails staff responsible for working with volunteers, and training in this area is limited. Some agency officials and stakeholders stated that training on how to collaborate with and manage volunteers would enhance the agency's ability to capitalize on this resource. GAO recommends, among other actions, that the Forest Service (1) analyze trails program needs and available resources, develop options for narrowing the gap between them, and take steps to assess and improve the sustainability of its trails and (2) take steps to enhance training on collaborating with and managing volunteers who help maintain trails. In commenting on a draft of this report, the Forest Service generally agreed with GAO's findings and recommendations.
The Overseas Presence Advisory Panel was formed to consider the future of our nation's overseas representation, to appraise its condition, and to develop practical recommendations on how best to organize and manage our overseas posts. Last November, the Panel reported that the condition of U.S. posts and missions abroad is unacceptable. For example, the Panel found that facilities overseas are deteriorating; human resource management practices are outdated and inefficient; and there is no interagency mechanism to coordinate overseas activities or manage their size and shape. A key finding was that our embassies and missions are equipped with antiquated, grossly inefficient, and incompatible information technology systems. According to the Panel, inefficient information systems have left the department "out of the loop," that is, other agencies, organizations, and even foreign governments are bypassing its cumbersome communications connections. The Panel recommended that all agencies with an overseas presence provide staff with a common network featuring Internet access, e-mail, a secure unclassified Internet website, and shared applications permitting unclassified communications among all agencies and around the globe. The Panel further recommended that agencies initiate planning for a similar common platform for classified information. In response, the President asked the Secretary of State to lead a cabinet-level committee to implement the Panel's recommendations. This is now known as the Overseas Presence Committee and is chaired by State's Undersecretary for Management. Three interagency subcommittees have been established to report to this committee, including the Rightsizing Subcommittee, the Overseas Facilities Subcommittee, and the Interagency Technology Subcommittee. The area that you asked us to focus on, Mr. Chairman, involves the Information Technology Subcommittee, chaired by State's CIO and consisting of CIOs from the eight other major agencies with overseas presence, including the U.S. Agency for International Development, the Peace Corps, and the Departments of Defense, Justice, Transportation, Treasury, Agriculture, and Commerce. Two working groups report to this subcommittee: (1) the Interagency Technology Working Group, which is responsible for defining operational requirements, selecting specific enabling strategies, identifying required funding, and establishing standards for the common platform, and (2) the Knowledge Management Working Group, which is charged with making the right information available to the right people. Knowledge management is a very important component of the Panel's recommendations. The Panel's intent is that our overseas agencies be able to not only communicate with each other and back to their respective headquarters, but also to obtain and share the information and knowledge that already exists among agencies and around the world, but is currently fragmented and not readily accessible. State is in the process of developing a structured project plan for the lifecycle of its common platform initiative. In doing so, State intends to define user and system requirements; identify risks and assess technical feasibility; identify the major work elements that will be accomplished over the life of the project; analyze costs and benefits; establish project goals, performance measures, and resources; assign responsibilities; and establish milestones. It expects to complete this plan by September 30, 2000.
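As a rough illustration of what such a structured project plan might capture, the sketch below lays out the plan elements State describes (requirements, risks, work elements, costs and benefits, goals and measures, responsibilities, and milestones) as a simple data structure. The specific entries are assumptions drawn loosely from this testimony, not State's actual plan or the Managing State Projects methodology.

```python
# Hypothetical skeleton of a structured project plan of the kind State
# describes. Entries are illustrative assumptions, not the department's plan.
from dataclasses import dataclass, field

@dataclass
class ProjectPlan:
    user_and_system_requirements: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)
    work_elements: list[str] = field(default_factory=list)
    cost_benefit_summary: str = ""
    goals_and_measures: list[str] = field(default_factory=list)
    responsibilities: dict[str, str] = field(default_factory=dict)  # task -> responsible office
    milestones: dict[str, str] = field(default_factory=dict)        # event -> target date

plan = ProjectPlan(
    user_and_system_requirements=["interagency e-mail", "shared unclassified applications"],
    risks=["interagency agreement on standards", "security of added external connectivity"],
    work_elements=["define requirements", "pilot at two posts", "evaluate pilot results"],
    goals_and_measures=["prototype operational at both pilot posts"],
    responsibilities={"requirements definition": "Interagency Technology Working Group"},
    milestones={"project plan complete": "September 2000", "pilot begins": "April 2001"},
)
print(f"{len(plan.milestones)} milestones and {len(plan.risks)} identified risks")
```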
Given the risks, complexities, and potential costs involved in the common platform initiative, it is critical that State carefully scope the effort, anticipate and plan for risks, and establish realistic goals and milestones. Experience with similar undertakings has shown that poor project planning can cause agencies to pursue overly ambitious schedules, encounter cost overruns, and/or find themselves ill-prepared to manage risks. To date, State has developed high-level preliminary project milestones and decided to pilot a prototype common system, from April through September 2001, at two posts: Mexico City, Mexico, and New Delhi, India. It has also decided to follow a methodology for managing the project called Managing State Projects, which provides a structured process for planning, applying, and controlling funds, personnel, and physical resources to yield maximum benefits during a project life cycle. The methodology focuses on a number of key factors critical to ensuring the success of any large, complex information technology effort, including (1) clearly defining what users need, (2) determining what the system will ultimately cost, and (3) defining how management will monitor and oversee progress and ensure that the project stays on track. State is already in the process of taking the first step—defining requirements for the overseas common technology platform. System requirements include such things as system functions, communication protocols, interfaces, regulatory requirements, security requirements, and performance characteristics. State officials responsible for managing the development of the common platform effort told us that they have developed high-level preliminary requirements and are in the process of further defining user requirements. Given the range and number of agencies and employees involved in foreign affairs, this task will need to be carefully managed. Requirements will have to be agreed upon by, and have the same meaning for, each of the participating overseas agencies, and they will need to be fully documented and sufficiently detailed so they can be used to determine what systems will be acquired and what standards will be used. Cost estimates, the second step, cannot be finalized until user requirements are defined. As such, there is not yet firm, supported cost data on how much the new system will cost. The Panel estimated that the ultimate cost of a common solution for both classified and unclassified information will be over $300 million. The President's FY2001 budget includes $17 million in support of the recommendation for a common information technology platform for overseas offices. State officials characterized the $17 million as a "down payment" on the total anticipated investment. If these funds are appropriated, the department intends to use them on its pilot project. State is now developing preliminary cost estimates for the pilot; however, State officials told us that these estimates will be rough given that detailed user requirements have not yet been fully defined and target systems, hardware, and networks have not yet been identified. State officials also plan to address the third step, instilling the management oversight and accountability needed to properly guide the common platform initiative. The methodology provides a formal approval process with "control gates" to ensure that user needs are satisfied by the proposed project, timetables are met, the risks are acceptable, and costs are controlled.
If effectively implemented and adhered to, these control gates can provide management with the opportunity to review and formally approve progress at key decision points. State expects to define the approval process in its overall project plan. As State is in the early stages of project planning, it faces considerable challenges in modernizing overseas information technology systems. First, State will need to obtain agreement among its various bureaus and the agencies in the foreign affairs community on such issues as requirements, resources, responsibilities, policies, and acquisition decisions. This will be a delicate task as these agencies have different needs, levels of funding, and ongoing agency-unique systems development. Second, State needs to complete its detailed information technology architecture, or blueprint, to guide and effectively control its own information technology acquisitions. It currently has a high-level architecture and anticipates completing the detailed layers of the architecture by next year. Third, the security of the common system must be fully addressed before its deployment to ensure that sensitive data is not stolen, modified, or lost. Obtaining the interagency cooperation and funding necessary to achieve the Panel's recommendations will be a major challenge. Each of the more than 40 agencies involved in foreign affairs has its own unique requirements, priorities, and resource constraints, and many are accustomed to developing, acquiring, and maintaining their own systems. Yet State will need to overcome these cultural barriers and secure agreement on a range of issues such as which systems, hardware, and networks to acquire, how much can be spent on these assets, and who should be responsible for managing and maintaining them. In recognizing this dilemma, the Panel highlighted the need for Presidential initiative and support, the Secretary of State's leadership, and ongoing congressional oversight and support. Addressing cultural and organizational barriers to standardization and cooperation will not be easy. First, it is likely that many agencies will want to continue operating their own technology, especially if these systems were recently acquired or upgraded. Second, no one agency by itself has the authority or ability to dictate a solution or to ensure the implementation of a mutually developed solution. Third, although negotiations are ongoing, details are still being worked out as to who will manage and administer the new collaborative network. The department will also need to obtain cooperation among its various bureaus. Information management activities at State have historically been carried out on a decentralized basis and without the benefit of continuing centralized management attention. Consequently, systems development efforts have not always been synchronized and the systems themselves not interoperable. State acknowledges that many of its systems can be described as "stovepiped" and "islands of automation," terms which describe their fragmentation and independence. In recognition of this problem, the department is working to establish a shared computing environment, but progress has been slow. State officials recognize that they will need to reach out to bureaus and to other agencies with overseas presence to achieve consensus on specific, detailed user requirements, acquisition decisions, standards, policies, and responsibilities and that this will be a difficult endeavor.
They have told us that they have begun to explore ongoing common platform initiatives with other agencies and that they will address this challenge as they develop their overall project plan. Even though State is leading the common platform initiative, which involves more than 40 other agencies, it does not have a detailed information technology architecture. However, State does have in place a high-level architecture, issued last year, and is now working to complete supporting architectural layers. An architecture is essential to guiding and constraining information technology acquisition and development efforts. In doing so, an effective architecture will limit redundancy and incompatibility among information technology systems, enable agencies to protect sensitive data and systems, and help ensure that new information technology optimally supports mission needs. System architectures are essentially "construction plans" or blueprints that systematically detail the full breadth and depth of an organization's mission-based mode of operations in logical and technical terms. In defining architectures, agencies should systematically and thoroughly analyze and define their target operating environment—including business functions, information needs and flows across functions, and systems characteristics required to optimally support these information needs and flows. In addition, they should provide for physical and administrative controls to ensure that hardware platforms and software are not compromised. The importance of thoroughly and systematically identifying and analyzing information needs and placing them in a technical architecture cannot be overemphasized. The Congress recognized the importance of technical architectures when it enacted the Clinger-Cohen Act, which requires chief information officers to develop, maintain, and facilitate integrated system architectures. Additionally, OMB has issued guidance that, among other things, requires agency information systems investments to be consistent with federal, agency, and bureau architectures. Moreover, our reviews of other agencies have consistently shown that without a target architecture, agencies risk buying and building systems that are duplicative, incompatible, and unnecessarily costly to maintain and interface. In April 1999, State published a high-level information technology framework. State officials told us that documents will be produced later this year that further define the security, information applications, and technical infrastructure for the department. But, at present, State lacks the detailed framework needed to ensure that it does not build and buy systems that are duplicative, incompatible, vulnerable to security breaches, or unnecessarily costly to maintain and interface. Specifically, State has not detailed its current logical and technical environment or its target environment, nor has it specified a sequencing plan for getting from the current to the target environment. State officials told us they are working to develop these necessary architectural layers. Such a framework is critically needed to ensure that the common platform is consistent with State's own target environment. If State proceeds with the common platform initiative before defining its own target architecture, it may well find that the initiative itself, with its resulting decisions on standards, protocols, systems, and networks, may end up driving the department's architecture.
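To picture the architectural layers just described, the sketch below represents a current ("as-is") environment, a target ("to-be") environment, and a sequencing plan as simple data structures. The entries are hypothetical placeholders for illustration, not State's actual architecture or any prescribed federal architecture format.

```python
# Hypothetical sketch of the layers a detailed IT architecture typically
# documents: the current environment, the target environment, and a sequencing
# plan for moving between them. Entries are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ArchitectureLayer:
    business_functions: list[str]
    information_flows: list[str]      # which functions exchange what information
    technical_standards: list[str]    # protocols, platforms, security controls

@dataclass
class EnterpriseArchitecture:
    current: ArchitectureLayer
    target: ArchitectureLayer
    sequencing_plan: list[str] = field(default_factory=list)  # ordered transition steps

architecture = EnterpriseArchitecture(
    current=ArchitectureLayer(
        ["consular services", "administrative messaging"],
        ["post-to-headquarters cable traffic"],
        ["agency-unique networks and protocols"],
    ),
    target=ArchitectureLayer(
        ["consular services", "administrative messaging", "interagency knowledge sharing"],
        ["shared unclassified applications across posts"],
        ["common network standards", "standard security controls"],
    ),
    sequencing_plan=["pilot the common platform at two posts", "evaluate results, then expand"],
)
print(len(architecture.sequencing_plan), "transition steps defined")
```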
Moreover, each foreign affairs agency overseas has its own networks and systems, based on different protocols, systems, and security measures. By not having a defined and enforceable architecture, State may well perpetuate the current stovepiped, redundant, and disparate computing environment. State acknowledges that there is risk in proceeding with modernization initiatives in parallel with developing a complete information technology architecture, and it intends to begin addressing this risk as it proceeds with its pilot projects. As envisioned by the Panel, a common platform could provide overseas agency staff with collaborative applications and Internet access. The Panel recognized that security risks would be increased with this greater connectivity and indicated that solutions, such as the use of industry best practices and security software, would be required to mitigate these risks. In view of these added risks, I would like to discuss specific concerns we raised in a previous review of State's computer security practices. State has generally made good progress in addressing these concerns; however, issues remain that must be addressed to ensure the integrity of the proposed platform. Two years ago we reported that the State Department's unclassified information systems and the information contained within them were vulnerable to access, change, disclosure, disruption, or even denial of service by unauthorized individuals. During penetration testing of State's systems at that time, we were able to access sensitive information and could have performed system administration actions, such as deleting or modifying data, adding new data, shutting down servers, and monitoring network traffic. The results of our tests showed that individuals or organizations seeking to damage State operations, commit terrorism, or obtain financial gain could possibly exploit the department's information security weaknesses. For example, by accessing State's systems, an individual could obtain sensitive information on State's administrative processes and key business processes, such as diplomatic negotiations and agreements. Our successful penetrations of State's computer resources went largely undetected during our testing, underscoring the department's serious vulnerabilities. Our penetration testing two years ago was successful primarily because State lacked an overall management framework and program for effectively overseeing and addressing information security risks. In particular, State lacked a central focal point for overseeing and coordinating security activities; it was not performing routine risk assessments to protect sensitive information; its information security policies were incomplete; it lacked key controls for monitoring and evaluating the effectiveness of its security programs; and it had not established a robust incident response capability. We also found that security awareness among State employees was problematic. For example, we were able to gain access to networks by guessing user passwords, bypassing physical security at one facility, and searching unattended areas for user account information and active terminal sessions. As such, we recommended that State take a number of actions based on private sector best practices that have been shown to greatly improve organizations' ability to protect their information and computer resources.
In response, State has taken a number of positive steps to address our recommendations and made real progress in strengthening its overall security program. For example, the department consolidated its previously fragmented security responsibilities and made the Chief Information Officer responsible for all aspects of the department's comprehensive computer security program; clarified in writing computer security roles and responsibilities for the Information Resources Management and Diplomatic Security offices; and enhanced its ability to detect and respond to computer security incidents by establishing a Computer Incident Response Team. In addition, the department revised its Foreign Affairs Manual to require the use of risk management by project managers and resolved the specific physical and computer security weaknesses we identified during our testing. However, State has not yet fully implemented recommendations that are integral to the success of the common platform initiative. For example, State's automated intrusion detection program does not yet cover all domestic and overseas posts. As a result, State does not have a comprehensive overview of attempted or successful attacks on its worldwide systems. Lack of such a process limits State's ability to accurately detect intrusions, deal with them in a timely manner, and effectively share information about intrusions across the department. State lacks a mechanism for tracking and ensuring that the hundreds of recommendations made by auditors and internal vulnerability studies over the last 3 years are addressed. Again, this limits the department's ability to ensure that all relevant findings are addressed and resolved. State officials told us that action is underway to develop a tracking system. (A simple illustrative sketch of such a tracking register appears at the end of this discussion.) Lastly, even though State has formally consolidated computer security responsibilities under its CIO, its Bureau of Diplomatic Security will still be responsible for carrying out important computer security-related tasks such as establishing policy, conducting security evaluations at diplomatic posts, and conducting training. As stressed in our report, fragmented responsibilities in the past have resulted in no one office being fully accountable for information technology security problems and in disagreements over strategy and tactics for improvements. This new process can work, but it will be essential for the department to ensure that the Chief Information Officer effectively coordinates these responsibilities. Consistent with our recommendations, State performed four computer security evaluations of its unclassified and sensitive but unclassified networks over the past three years. In response to your request, Mr. Chairman, we reviewed these evaluations and found that State's networks remain highly vulnerable to exploitation and unauthorized access. Because three of the four evaluation reports are classified, we are constrained in this forum from discussing specific vulnerabilities. However, each of the reports identified problems indicating continuing computer security weaknesses at the department. Collectively, the reports indicate a continuing need for the department to assess whether controls are in place and operating as intended to reduce risks to sensitive information assets. Recent media reports highlighting State problems with physical security also emphasize the need for continued vigilance in this area.
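One way to picture the tracking mechanism State lacks is a simple register that records each audit or vulnerability-study recommendation and its resolution status. The sketch below is a hypothetical illustration under assumed field names and status values, not the tracking system the department said it is developing.

```python
# Hypothetical sketch of a simple register for tracking security
# recommendations from audits and vulnerability studies to resolution.
# Structure and status values are assumptions, not State's planned system.
from dataclasses import dataclass

@dataclass
class Recommendation:
    rec_id: str
    source: str       # e.g., "penetration test", "internal vulnerability study"
    description: str
    status: str       # "open", "in progress", or "closed"

register = [
    Recommendation("R-001", "penetration test", "strengthen password controls", "closed"),
    Recommendation("R-002", "internal vulnerability study",
                   "extend intrusion detection to all posts", "in progress"),
]

open_items = [r for r in register if r.status != "closed"]
print(f"{len(open_items)} of {len(register)} recommendations remain unresolved")
```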
At the time of our work for this Committee, State was unable to provide much information about security features for the common platform because its design is still underway. However, because State's networks remain vulnerable to individuals or organizations seeking to damage State operations, we emphasize the importance of effectively addressing the significant challenge that additional external connectivity brings to securing the foreign affairs community's planned information network. Mr. Chairman, in summary, maintaining an effective presence overseas absolutely requires up-to-date information and communications technology. Officials overseas must have easy access to all agencies sharing the overseas platform and the fastest possible access to all information that might help them do their jobs. State is taking steps to address this need, but it faces significant hurdles in doing so. Not only must it secure agreements among a wide range of disparate users and agencies, it must do so while undertaking equally challenging efforts to develop a detailed technical architecture and address continuing computer security issues. As a result, as it completes its project plan over the next few months, it is critical that State (1) carefully scope the initiative, identify and mitigate risks, analyze costs and benefits, and establish realistic goals and milestones; (2) instill the management and oversight accountability needed to properly guide the effort and secure agreement on who will manage and maintain the systems once they are implemented; (3) anticipate the steps needed to overcome cultural obstacles and employ a truly collaborative approach that can effectively facilitate agreement on requirements, priorities, resources, policies, and acquisition decisions; (4) place high priority on developing a detailed systems architecture for the department that will help ensure that information technology acquired is compatible and aligned with needs across all business areas; and (5) vigorously pursue efforts to strengthen long-standing computer security weaknesses and ensure that new policies, responsibilities, and procedures being implemented are on par with best practices. Mr. Chairman and Members of the Committee, this concludes my statement. I will be happy to answer any questions you or Members of the Committee may have. For questions regarding this testimony, please contact Jack L. Brock, Jr. at (202) 512-6240. Individuals making key contributions to this testimony included Cristina Chaplain, Kirk Daubenspeck, John de Ferrari, Patrick Dugan, Diana Glod, Edward Kennedy, Hai Tran, and William Wadsworth. (511968) | Pursuant to a congressional request, GAO discussed the Department of State's efforts to improve the foreign affairs community's information technology infrastructure, focusing on: (1) State's efforts to implement the Overseas Presence Advisory Panel's recommendations; and (2) the challenges and risks it will face as it proceeds.
GAO noted that: (1) the Overseas Presence Advisory Panel was formed to consider the future of the nation’s overseas representation, to appraise its condition, and to develop practical recommendations on how best to organize and manage overseas posts; (2) the Panel recommended that all agencies with an overseas presence provide staff with a common network featuring Internet access, electronic mail, a secure unclassified Internet website, and shared applications permitting unclassified communications among all agencies and around the globe; (3) the Panel further recommended that agencies initiate planning for a similar common platform for classified information; (4) in developing its common platform initiative, State intends to: (a) define user and system requirements; (b) identify risks and assess technical feasibility; (c) identify the major work elements that will be accomplished over the life of the project; (d) analyze costs and benefits; (e) establish project goals, performance measures, and resources; (f) assign responsibilities; and (g) establish milestones; (5) the Panel estimated that the ultimate cost of a common solution for both classified and unclassified information will be over $300 million; (6) the President’s fiscal year 2001 budget includes $17 million in support of the recommendation for a common information technology platform for overseas offices; (7) as State is in the early stages of project planning, it faces considerable challenges in modernizing overseas information technology systems; (8) State will need to obtain agreement among its various bureaus and the agencies in the foreign affairs community on such issues as requirements, resources, responsibilities, policies, and acquisition decisions; (9) this will be a delicate task as these agencies have different needs, levels of funding, and ongoing agency-unique systems development; (10) State needs to complete its detailed information technology architecture to guide and effectively control its own information technology acquisitions; (11) the security of the common system must be fully addressed before its deployment to ensure that sensitive data are not stolen, modified, or lost; and (12) the Panel recognized that security risks would be increased with greater connectivity and indicated that solutions, such as the use of industry best practices and security software, would be required to mitigate these risks.
PPACA directed each state to establish and operate a health insurance marketplace by January 1, 2014. In states electing not to establish a marketplace, the law required HHS (which delegated this role to CMS) to do so. These marketplaces were intended to provide a seamless, single point of access for individuals to enroll in private health plans and apply for income-based financial assistance established under the law. CMS reported that around 12.7 million individuals applied to enroll in healthcare coverage for plan year 2016. PPACA and HHS regulations and guidance required each marketplace to be able to carry out four key functions, among others: 1. Eligibility and enrollment. Assess and determine an individual’s enrollment eligibility, enroll eligible individuals in coverage, and certify private health insurance plans for participation in the marketplace. 2. Plan management. Provide services for activities such as submitting, monitoring, and renewing qualified health plans. 3. Financial management. Facilitate payments to health insurance issuers as well as provide services such as payment calculation for risk-adjustment analysis and cost-sharing reductions for individual enrollments. 4. Consumer assistance. Provide assistance to consumers in completing an application, obtaining eligibility determinations, comparing coverage options, and enrolling in coverage. To provide these capabilities, PPACA further required the states establishing marketplaces, as well as CMS, to design, develop, implement, and operate health insurance marketplace IT systems. States have some flexibility as to the marketplace types, such as: State-based marketplaces. These states developed their own marketplaces with final decision-making authority, and were provided full autonomy in setting user fees and establishing sustainability plans. The marketplaces varied by state, depending on each state’s current and previous health care systems environment, governance and business models, applicable laws and regulations, and other factors. State-based marketplaces using the federal platform. These states initially elected to develop their own state marketplace systems, but due to IT, financial, or other challenges, subsequently decided to use the federal platform to perform certain eligibility and enrollment functionalities. Issuers of health insurance in these states offer plans through the federal platform and are charged a user fee by CMS for the services and benefits they provide. These states also have the authority to collect user fees beyond those collected by CMS. Federal responsibilities, carried out by CMS, include managing the federal IT platform, call center infrastructure, and eligibility determinations. The states must maintain websites that provide information to consumers and direct them to HealthCare.gov to apply for and enroll in coverage. In addition, these states must coordinate with CMS on outreach strategies to reach existing and new consumers; maintain data and timely reporting for all coverage years prior to the transition; and work with issuers to ensure they are prepared to transition to the federal platform and exchange enrollment data with CMS. Federally facilitated marketplaces. These states elected not to develop their own platform and use the federally facilitated marketplace—the federal IT platform, including the website, HealthCare.gov. Federally facilitated partnerships. 
These states are a variation of the federally facilitated marketplaces, whereby CMS establishes and generally operates the marketplaces and the states assist CMS with operating various functions, such as plan management and consumer assistance. In these cases, states rely to varying degrees on the systems developed by CMS to support the federally facilitated marketplaces and the federal government keeps the user fees paid by the insurers. For plan year 2016, utilizing state and CMS documentation, we categorized 27 states as having federally facilitated marketplaces, 7 as having federally facilitated partnerships, 13 as having state-based marketplaces, and 4 as having state-based marketplaces using the federal platform. Figure 1 shows the type of health insurance marketplace used by each state for plan year 2016. To establish, operate, and sustain health insurance marketplaces, states have used several IT funding sources, which vary based on the marketplaces’ operational model and the IT work being performed. Marketplace grants: PPACA authorized CMS to award federal exchange (marketplace) grants through December 2014 for planning and implementation activities, as well as for the first year of a marketplace’s operation. The federal marketplace grant stages include: Pre-award. A funding announcement notified states about the grant opportunity. States then submitted applications for CMS review. Award. CMS identified successful applicants and awarded funding to be used during specific time periods. Implementation. Grantees draw down funds from preauthorized grant accounts monitored by CMS, and report financial and performance information to the agency. States are allowed to request a no-cost grant extension in order to use remaining approved federal funds to complete project goals and objectives. CMS reviews each no-cost extension request to ensure that expenses are allowable, correctly allocated, and reasonable based on PPACA and CMS grant rules and policies. States also can resubmit budget requests to CMS to authorize reallocation of grant funding among previously awarded budget categories. Closeout. This occurs after the period of performance ends and includes preparation of final reports, financial reconciliation, and any required accounting for property and funds. After a grant is closed, CMS deobligates any remaining amounts. (In this report, we use “deobligated funds” to describe amounts that are no longer available to CMS for new obligations, although they may remain available for certain limited other purposes.) Medicaid funding: PPACA enactment required changes to Medicaid eligibility and enrollment systems so that the program could operate seamlessly with the marketplaces, as well as to implement new Medicaid eligibility policies. Under federal law, states are eligible for an enhanced federal matching rate of 90 percent (referred to as 90/10 funding—states contribute 10 percent of the cost) for the design, development, or installation of Medicaid claims processing and information retrieval systems. Because states’ Medicaid systems had to be replaced or modernized to meet PPACA requirements, CMS expanded the availability of the 90/10 funding for states to make changes to improve Medicaid eligibility IT systems, including the connection to the federal marketplace. In addition, a state could receive funding in the form of a 75 percent federal matching rate for the maintenance and any ongoing costs of operating its upgraded Medicaid eligibility and enrollment system. 
The funding is generally available when the upgraded system becomes operational. States were also allowed to use Medicaid funds when they transitioned to a different marketplace type since the transition costs were related to updating state Medicaid eligibility systems to be compatible with the federal platform. Revenue through user fees: PPACA authorized state-based marketplaces to charge assessments, or user fees, such as a percentage or a flat monthly rate to participating insurance issuers. For the selected states to sustain their marketplaces, Hawaii charged a user fee of 3.5 percent per insurance plan, Minnesota charged a 3.5 percent user fee, and Oregon charged a user fee of $9.66 per member per month. New York does not impose a user fee. Additional revenue sources: In addition to user fees, states have leveraged their own funds to support and sustain marketplaces, including general funds, fund transfers from other agencies, and broad-based assessments. Examples of these additional revenue sources for the four selected state marketplaces include: Hawaii’s marketplace received funding from state general funds. Minnesota’s marketplace incurred costs associated with enrollment of individuals into public programs by the Minnesota Department of Human Services. Payments from the Minnesota Department of Human Services represent its share (in both federal and state funds) of enrollment costs in public programs. New York’s marketplace received revenue from a broad-based assessment funded by the Health Care Reform Act instead of from user fees. According to New York officials, that revenue included assessments of certain medical services and is used to support several health care programs in the state, including the New York marketplace. Oregon’s marketplace received reimbursements from other state agencies for shared software license fees. Operating reserves: Some states also implemented dedicated state financial reserves for marketplace operations, including IT. Reserves can cover increases in expenditures, such as unexpected development costs, or decreases in revenue, such as decreased user fees due to lower than expected enrollment. Minnesota and Oregon marketplaces were legislatively allowed to use such reserves. Minnesota had less than 1 month of reserves and Oregon’s marketplace had 12 months of reserves in 2016. Hawaii and New York did not allow their marketplaces to have reserves. Table 1 provides a summary of the four selected states’ marketplace revenue sources and reserves. To address the requirements of PPACA and its implementing policies, HHS designated CMS to provide oversight of the IT supporting states’ marketplaces, and CMS assigned three key offices to do so—the Center for Consumer Information and Insurance Oversight (CCIIO), the Center for Medicaid and CHIP Services (CMCS), and the Office of Technology Solutions (OTS). CMS relies on these offices to, among other things, perform the following reviews: Establishment reviews. CMS is to conduct establishment reviews of states that receive federal marketplace grant and Medicaid funds, following the Enterprise Life Cycle framework. These reviews are intended to show the progress that the states made in using federal funding to implement marketplace IT systems. The framework requires states to provide specific artifacts for CMS review, such as the concept of operations, system test documents, and project plans, among others. 
Each review is incremental and states are expected to show CMS an acceptable level of progress and maturity in their projects’ development before proceeding to the next project phase. Review of annual state financial/programmatic reports. State-based marketplace annual reporting tool (SMART): CMS collects and reviews state-based marketplace financial and programmatic data regularly through SMART. The agency collects and reviews annual financial/programmatic audit reports, and attestations of compliance as part of this process. CMS staff identify observations, action items, and ongoing monitoring activities that could improve marketplace operations. For example, they may identify areas where IT functionality needs improvement, or more clarity on IT expenditures in marketplace financial audit reports. Oversight and program integrity standards for state exchanges: Beyond SMART, CMS requires state-based marketplaces to also annually provide financial statements, eligibility and enrollment reports, and performance monitoring data. Frequent, regular communication. CMS is to monitor and provide assistance to states through frequent and regular communication, including weekly telephone calls with state officials involved with marketplace IT efforts. CMS includes subject matter experts in these calls, as needed. State officials can report concerns or provide further information during the weekly calls. The selected state marketplaces—Hawaii, Minnesota, New York, and Oregon—each had different experiences with regard to establishing and operating their marketplaces. Each state’s establishment, operations, and funding experiences are summarized in the discussions and figure 2 below, which are based, in part, on CMS budget summary documentation for the states as of October 2016. Since 2011, CMS had awarded Hawaii $205.3 million in federal marketplace grants to establish a state-based marketplace. As of October 2016, the state had spent or planned to spend $140.4 million of those grants, including $97.4 million for IT costs associated with building a health insurance marketplace and developing infrastructure needed for the marketplace’s ongoing operations. The Hawaii Health Connector implemented the original state-based marketplace, which began operation in 2013. In 2015, the marketplace ceased operations and transitioned to a state-based marketplace using the federal platform. According to CMS, the transition was due to a variety of system issues and other factors that limited enrollment and had a negative impact on the consumers’ use of the system. Hawaii decided to undergo an additional transition in June 2016 in order to operate as a federally facilitated marketplace for plan year 2017. As a result of the state’s marketplace transition efforts, CMS deobligated $63 million of Hawaii’s grant funds as of October 2016 for grants that had concluded their period of performance and had been closed. As of March 2016, approximately 15,000 individuals in Hawaii were enrolled in qualified health plans. Since 2011, CMS had awarded $189.4 million in federal marketplace grants to Minnesota to build a health insurance marketplace and develop infrastructure needed for the marketplace’s ongoing operations. MNsure had implemented the state-based marketplace and spent or planned to spend $159.6 million in grant funds as of October 2016, which included IT expenditures of $48.2 million. 
Additionally, CMS had deobligated $102,000 of Minnesota’s grant funds as of October 2016 for grants that had concluded their period of performance and had been closed. Minnesota is 1 of 12 states that have an integrated system for both Medicaid and qualified health plans and is 1 of 2 states that have a Basic Health Program. This allows Minnesota to combine eligibility determinations for qualified health plans, Medicaid, and the state Basic Health Program. As of March 2016, Minnesota had enrolled approximately 84,000 individuals in qualified health plans. Since 2011, CMS had awarded $575.1 million in federal marketplace grants to the state of New York to build a health insurance marketplace and develop infrastructure needed for the marketplace’s ongoing operations. The New York Health Benefit Exchange (later renamed the New York State of Health) within the New York State Department of Health implemented the state-based marketplace and had spent or planned to spend $513.6 million in marketplace grants as of October 2016, to include $209.2 million for IT costs. Additionally, CMS deobligated $4.5 million of New York’s grant funds as of October 2016 for grants that had concluded their period of performance and had been closed. New York is the other state, along with Minnesota, to have implemented a Basic Health Program. As of March 2016, New York State of Health had enrolled approximately 272,000 individuals in qualified health plans. Since 2011, CMS had awarded Oregon $305.2 million in federal marketplace grants to build a health insurance marketplace and develop infrastructure needed for the marketplace’s ongoing operations. Oregon had spent $301.4 million in federal marketplace grants as of October 2016, which included IT spending of $79.7 million. Additionally, CMS had deobligated $1.8 million of Oregon’s grant funds as of October 2016 for grants that had concluded their period of performance and had been closed. Cover Oregon implemented the original marketplace, which operated from 2013 to 2014 as a state-based marketplace. The state-based marketplace then transitioned to a state-based marketplace using the federal platform under the Oregon Department of Consumer and Business Services, due to the technical challenges of IT development and creating a functional website to enroll consumers. As of March 2016, approximately 147,000 individuals had selected a qualified health plan. The four selected states’ overall marketplace federal grant funding, expenditures, including amounts planned to be spent, and deobligations, as well as their enrollment as of plan year 2016 are reflected in figure 2. Over the past 4 years, we have issued various reports highlighting challenges that CMS and the states have faced in implementing and operating health insurance marketplaces. For example, in an April 2013 report, we described the actions of seven states that were in various stages of developing an IT infrastructure to establish marketplaces, including redesigning, upgrading, or replacing their outdated Medicaid and CHIP eligibility and enrollment systems. Six of the seven states were also building the IT infrastructure needed to integrate systems and allow consumers to navigate among health programs. However, the states had identified challenges with the complexity and magnitude of the IT projects, time constraints, and guidance for developing their systems. 
In December 2014, we reported that all states using the federal marketplace IT solution had faced challenges transferring applications to and from that system. We pointed out that none of the states using the federal marketplace IT solution in the first enrollment period were able to implement application transfers, which required the establishment of two IT connections: one connection to transfer applications found ineligible for Medicaid coverage from the state Medicaid agency to the federal marketplace IT solution, and another connection to transfer applications found ineligible for coverage from the federally facilitated marketplace to the state Medicaid agency. In March 2015, we reported that several problems with the initial development and deployment of HealthCare.gov and its supporting systems had led to consumers encountering widespread performance issues when trying to create accounts and enroll in health plans. We noted, for example, that CMS had not adequately conducted capacity planning, adequately corrected software coding errors, or implemented all planned functionality. In addition, the agency did not consistently apply recognized best practices for system development, which contributed to the problems with the initial launch of HealthCare.gov and its supporting systems. In this regard, weaknesses existed in the application of requirements, testing, and oversight practices. Further, we noted that CMS had not provided adequate oversight of the HealthCare.gov initiative through its Office of the Chief Information Officer. We made seven recommendations aimed at improving requirements management, system testing processes, and oversight of development activities for systems supporting HealthCare.gov. CMS concurred with all of our recommendations and subsequently took or planned steps to address the weaknesses, including instituting a process to ensure that functional and technical requirements are approved; developing and implementing a unified standard set of approved system testing documents and policies; and providing oversight for HealthCare.gov and its supporting systems through the department-wide investment review board. Further, in September 2015, we reported that states had spent $3.22 billion in federal grant funding to establish their health insurance marketplaces, of which approximately $1.45 billion was to establish IT systems supporting their health insurance marketplaces. However, we noted limitations in CMS’s oversight of the states’ IT system development, such as a lack of clearly defined roles and responsibilities leading to challenges with states receiving timely CMS guidance and relevant CMS senior executives not always being involved in state IT funding decisions. We also noted that CMS had allowed state systems to be launched without being fully tested. This led to consumers in some states experiencing long waits for eligibility determinations, websites freezing, system failures, and manual enrollments. We made three recommendations aimed at improving CMS oversight in those areas and CMS concurred with all of the recommendations, noting actions they had taken or planned to take in each of the areas. The agency’s plans to implement the recommendations included steps to develop a comprehensive communication plan, planned coordination between all relevant business and IT units to review and approve state requests for federal IT funds, and continued collaboration with states to test their system functionality. 
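As a rough illustration of the two one-way connections described in our December 2014 report, the sketch below routes a single application between a state Medicaid agency and the federal marketplace based on where it originated and whether it was found eligible there. The function name and dictionary keys are assumptions made for this sketch, not the actual interfaces used by CMS or the states.

```python
def route_application(application: dict) -> str:
    """
    Illustrative routing of one application between a state Medicaid agency and the
    federal marketplace, mirroring the two one-way connections described above.
    The 'origin' and eligibility keys are assumed for this sketch.
    """
    if application["origin"] == "state_medicaid" and not application["eligible_for_medicaid"]:
        # Connection 1: state Medicaid agency -> federal marketplace IT solution
        return "transfer to federally facilitated marketplace"
    if application["origin"] == "federal_marketplace" and not application["eligible_for_marketplace_coverage"]:
        # Connection 2: federally facilitated marketplace -> state Medicaid agency
        return "transfer to state Medicaid agency"
    return "process in originating system"

# Example: an application found ineligible for Medicaid is handed off to the federal marketplace.
print(route_application({"origin": "state_medicaid",
                         "eligible_for_medicaid": False,
                         "eligible_for_marketplace_coverage": None}))
```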
CMS offered various types of assistance to states transitioning to a different marketplace IT platform, including to the two selected states that transitioned to the federal platform—Hawaii and Oregon. Among other things, CMS’s assistance included periodic reviews of these two states’ transition plans and weekly calls to prioritize and address transition issues. CMS also issued a regulation to all states that included requirements regarding states’ transition plans and their coordination with CMS and HHS; however, this regulation was not finalized until after Hawaii and Oregon had initiated their transition efforts. States’ officials reported costs of approximately $84.3 million, collectively, to transition to the federal platform. The two states’ transition efforts included making changes to their Medicaid systems and the states mainly relied on Medicaid matching funds from CMS to do this. Further, the two states encountered challenges during their transitions. These challenges were related to accelerated transition time frames, difficulties reassigning functional marketplace responsibilities, delays in receiving approvals and decisions needed from CMS, and trouble accessing historical consumer data in previous marketplace IT systems developed by vendors. Among the assistance that CMS provided for states that sought to transition to a different marketplace IT platform was conducting periodic oversight activities. Specifically, for all state-based marketplaces, the agency conducted oversight through the Open Enrollment Readiness Reviews, as well as through sustainability consultations and weekly telephone calls during which CMS reviewed states’ plans and milestones. The Open Enrollment Readiness Review process involved CMS reviewing each state-based marketplace’s readiness for the upcoming open enrollment period by discussing topics such as IT system and business function readiness, application processing and notices, inconsistent data, enrollment transactions, and previous reviews’ findings. After the discussions, follow-up written communication from CMS was sent to the states highlighting outstanding action items that needed to be completed. In addition, CMS sustainability consultations evaluated state marketplaces’ self-sustainability in the absence of federal funding from marketplace establishment grants. These sustainability evaluations included reviews of the state marketplaces’ revenue sources, cash reserves, management team stability, and Medicaid and marketplace systems’ integration. Lastly, weekly telephone calls with states included discussions of the states’ plans and milestones associated with IT releases and testing, identification of technical issues, and mitigation and contingency plans. Further, in March 2016, subsequent to Hawaii and Oregon’s transition to the federal platform, CMS issued a regulation that set out more defined transition guidance. The regulation, for the first time, provided generally applicable guidance about the transition process and associated requirements. It required state marketplaces that seek to utilize the federal IT platform for selected functions to, among other things, coordinate with CMS, including joint development of a transition plan, which is to consist of a project plan with proposed milestones. CMS officials are to review the plans and help the states prioritize their planned transition milestone dates. Also, in October 2016, CMS finalized an agreement for states seeking to transition to the federal IT platform. 
That agreement became effective on November 1, 2016, with 1-year renewal options. The agreement established the mutual obligations and responsibilities of the transitioning states and CMS in areas including eligibility and enrollment, maintenance of related IT systems, call center operations, and casework support. However, the issuance of the regulations and agreement discussed above occurred after Hawaii and Oregon had already begun their April 2014 and June 2015 transitions to the federal platform. Thus, the requirements and documented guidance included in the regulation and the agreement were not available to these states when they initially undertook their transition efforts. Hawaii and Oregon officials described the assistance that CMS provided to their states, as follows: Hawaii began its year-long transition in June 2015 from a state-based marketplace that was operated by the Hawaii Health Connector to a state-run, state-based marketplace using the federal platform. The state completed an additional transition to be a federally facilitated marketplace in January 2017. At the time that Hawaii carried out its transition in 2015, CMS had not yet provided the states with documented guidance; however, CMS officials provided milestones to Hawaii that needed to be completed to fully transition to the federal platform. These milestones included dates when the marketplace plan and issuer data should be submitted and transferred, Medicaid account transfer functionality should be completed and tested, and notices and other related enrollee communications should be completed. In addition, Hawaii officials said that, throughout the transition, they participated in weekly telephone calls with CMS subject matter experts to discuss requirements such as consumer outreach and the interface development needed for account transfers. Oregon began its transition in April 2014 from a state-based marketplace operated by the non-profit organization, Cover Oregon, to a state-run marketplace using the federal platform, under the state’s Department of Consumer and Business Services. According to Oregon officials, CMS did not have guidance available at the time that the state decided to transition. However, state officials said they participated in many phone calls throughout the transition with CMS officials within CCIIO and CMCS to identify whether the state marketplace, state Medicaid, or CMS would be responsible for certain marketplace functions after the transition from Cover Oregon to the Department of Consumer and Business Services. The total reported IT costs associated with Hawaii’s and Oregon’s transition to using the federal platform were approximately $84.3 million, collectively. State officials said that the transition costs were incurred to make changes to their Medicaid systems and connect their systems to the federal IT platform. Hawaii and Oregon officials told us that the two states primarily relied on federal Medicaid matching funds to cover the cost of their transition efforts. Hawaii officials reported to us that their total IT costs associated with the initial transition to using the federal platform, begun in June 2015, were approximately $27.0 million as of June 2016. According to the officials, the transition costs primarily covered the development of functionality to transfer account files between Hawaii’s existing state Medicaid system and the federal platform. 
These included expenses for staff to carry out project management, technical assistance, and independent verification and validation activities. In addition, the costs covered funding for staff to perform design, development, and implementation work in order to enable the system to determine minimum essential coverage. The officials stated that these transition costs were largely funded by the federal government through Medicaid 90/10 and 75/25 matching funds. With regard to the transition to become a federally facilitated marketplace in June 2016, state officials said the primary reason Hawaii decided to do so was because the state lacked additional federal grant funds and the state legislature had denied subsequent requests for the additional funding for marketplace development and operations. The officials said that, in November 2016 they determined that, since the state was already relying on the federal platform for the previous plan year, the only IT work required to be completed would be a small update to their website. Thus, there was no cost associated with the second transition in January 2017. Oregon officials reported to us that their total IT transition costs to the federal platform, begun in April 2014, were approximately $57.3 million, as of November 2016. According to the officials, the transition costs primarily covered the modernization of the state’s legacy Medicaid system through the use of shared computer software code from the Kentucky marketplace system and the development of functionality to transfer account files between its Medicaid system and the federal platform. Because Oregon’s previous state-based marketplace IT solution was intended to integrate and modernize the state’s legacy healthcare systems, the decision to switch to the federal IT solution left the state still needing to modernize its legacy Medicaid system. In addition, the transition effort included expenses for staffing, professional services, computer hardware, and the software and service fees to host the new Medicaid system. The new Oregon Medicaid system went live in December 2015. Oregon officials reported to us that of the $57.3 million in IT costs, approximately $56.6 million were largely funded by the federal government through Medicaid 90/10 and 75/25 matching funds and approximately $662,000 were funded by marketplace grants from April 2014 to November 2014. The latter amount was needed to enable Oregon residents to apply for coverage through the federal system, HealthCare.gov, for plan year 2015. Hawaii’s and Oregon’s transition time frames and IT-related costs are summarized in table 2. While the two selected state-based marketplaces successfully transitioned to the federal platform, officials from each state identified a number of challenges they encountered in doing so. Hawaii officials identified challenges that stemmed from difficulties dealing with: Accelerated transition time frames and reassigned marketplace responsibilities. According to Hawaii officials, the state had originally planned to transition to the federal platform in October 2016 but then accelerated its efforts to address the unexpected shutdown of the Hawaii Health Connector in December 2015, due to financial problems. 
The accelerated and abrupt transition forced the state’s officials into the difficult task of finding a way to continue operations and provide consumer support almost a year ahead of their original plan, since that plan did not have the Hawaii Health Connector transferring its marketplace responsibilities to state officials until October 2016. The unexpected shutdown also meant that the state had to quickly reassign functional marketplace responsibilities among seven different state departments, which made it more complicated to coordinate and find funding to continue marketplace-related operations supporting open enrollment for plan year 2016. Additionally, the reassignment of responsibilities was complicated by the fact that the Hawaii First Circuit Court appointed a receiver to understand all of the Hawaii Health Connector’s obligations, dissolve its assets, and settle the organization’s financial dealings with creditors and debtors.

Delays in communications with CMS. Hawaii officials stated that the geographical distance and resulting 6-hour time difference between CMS officials, who work primarily in the Eastern time zone in the District of Columbia and Maryland, and Hawaii officials working in the Hawaii-Aleutian time zone caused delays in receiving marketplace-related information from CMS, such as notifications about when service disruptions would occur. However, Hawaii officials noted that, for the most part, this challenge subsequently was overcome by improved communications with the agency’s officials, thereby resolving earlier marketplace service disruption issues, such as planned federal platform system outages and automatic re-enrollment of individuals.

Oregon officials identified challenges that stemmed from difficulties dealing with:

Transitioning without the benefit of a transition guide to follow. According to Oregon officials, since their state was the first to transition to the federal platform, there was no other state model or guide to follow. In addition, at the time, CMS did not have documented guidance and requirements for states that wanted to transition to a different marketplace type. This required Oregon officials to spend many hours in discussions with CMS officials to figure out which functions CMS and the state each would be responsible for performing.

Delays in receiving approvals from CMS. According to Oregon officials, CMS did not always make decisions related to Oregon’s transition in a timely manner. Oregon officials noted that there were delays with CMS issuing the approvals needed to allow the state’s residents to create accounts with HealthCare.gov before open enrollment started in November 2014. The officials said that being able to create those accounts in advance would have made enrollment easier for Oregon residents. In addition, the federal call center scripts initially were not customized for the HealthCare.gov system, so the responses Oregon residents received from HealthCare.gov were not always clear or accurate, according to the officials. CMS officials within CCIIO and CMCS stated that, at the time when Oregon began discussions with the agency about transitioning to the federal platform, CMS was also addressing priorities from the 2014 open enrollment and needed to balance resources spent on Oregon’s transition with other IT development and enhancement activities that were needed.
Correcting errors with account transfers in the Medicaid system. Oregon officials noted that, in December 2015, initial account transfers between Oregon and CMS encountered errors due to a configuration issue related to outdated CMS guidance. Oregon had to make changes to its eligibility system in order to process account transfers because the actual transfers varied from CMS’s previously published specifications in certain areas.

Meeting CMS’s 60-day notice for technical changes. State officials said CMS’s practice of providing approximately 60 days’ notice for technical changes to account transfer specifications was challenging to comply with in a timely manner because of the length of time that Oregon’s system integrator needed to make the requisite system changes.

As noted above, these state transition challenges stemmed from a variety of issues, including compressed time frames, split priorities, and unclear marketplace responsibilities. Some of these challenges may have been alleviated if the documented transition-related guidance that exists now had been available to the states when they first initiated their transitions.

In addition to the aforementioned challenges, both selected states faced a common challenge in accessing the historical marketplace IT systems containing consumer data that had been developed by the initial contractors for Cover Oregon and the Hawaii Health Connector. In Hawaii, the court-appointed receiver for the state said the Hawaii Health Connector’s system integrator contractor had possession of the previous Hawaii marketplace IT system that contained the consumer data. As of November 2016, Hawaii and the contractor were still discussing the terms of compensation under that contract, and Hawaii officials did not know when those issues would be resolved. Oregon officials informed us in July 2016 that their state and its contractor were in litigation, and the contractor was storing Oregon’s previous marketplace IT system and the data within it in an archive. Oregon officials within the Department of Consumer and Business Services stated that they had a local archive of the data from the previous marketplace IT system, but did not have access to the actual contractor-based IT system after March 2016.

With regard to this matter, CMS officials within CCIIO and CMCS noted that their continued work had included extensive discussions with both states on options and mitigation strategies to retain the data and maintain compliance with the marketplace 10-year archival requirement for consumer and enrollment data. The officials said that the historical data are needed so that states can maintain the ability to process actions that include marketplace eligibility appeals (which could span multiple years) and submission of enrollment data. The CMS officials also stated that they had continued to refine their operational processes and policy associated with transitioning to the federal platform, as well as provide guidance and regulatory requirements for states that transitioned their marketplaces to other models, with particular focus on those that transitioned to the federal platform. Further, the officials stated that they continued to work with Hawaii and Oregon to explore opportunities to enhance the federal platform’s functionality to better support these and other state-based marketplaces on the federal platform, where feasible.
They added that their continued work also included frequent communications with both states to ensure a smooth coverage transition for residents of each state.

CMS had processes in place to assist the selected states with their efforts to financially sustain the development and operations of their marketplaces, including supporting IT systems. These processes included reviewing sustainability plans, reviewing annual independent financial audit reports, and conducting and responding to sustainability risk assessments. However, in providing its assistance, CMS did not always ensure that the four selected states’ sustainability plans and financial audit reports were complete, or that the states had complied with PPACA and CMS requirements regarding financial audit reporting. Additionally, CMS did not clearly define its risk assessment processes, as suggested by Standards for Internal Control.

As previously noted, PPACA required state marketplaces to be self-sustaining as of January 1, 2015, and, in turn, CMS developed marketplace blueprint requirements to assist states with meeting the act’s requirements. As part of the blueprint for approval of a state-based marketplace, the agency required states to submit an operational budget and management plan for its oversight, and to include proposed budget information for the upcoming 5 years from the initial year of operations, as well as long-term strategies for financial sustainability. According to CMS officials within CCIIO, the states’ plans are intended to inform CMS regarding the state-based marketplaces’ sustainability. According to agency guidance, the plans are also used by CMS to assess and respond to marketplace sustainability risks. To aid in the reporting of the sustainability plans, CMS created a 5-year budget forecast template for states to complete and submit. The template called for high-level reporting of sustainability factors, including marketplace enrollment, revenue, expenditures, reserves, and marketplace status over 5 years. In addition, contractual spending and questions related to ongoing IT costs were to be included in the plans. The 5-year span of budgets and enrollments to be reported in the plans included current-year projections and actuals, as well as forecast projections for the forthcoming 4 years. Sustainability guidance, dated October 2016, further required states to update and submit their sustainability plans to CMS twice a year.

While CMS received complete 2016 sustainability plans from New York and Oregon, it did not ensure that it had complete sustainability plans with the full 5-year budgets from Hawaii and Minnesota. Specifically, Hawaii and Minnesota did not include the entire 4 years of budget forecasts in their plans. Hawaii’s sustainability plan dated May 2016 contained only the 2016 budget and enrollment amounts; forecasts for the forthcoming 4 years were missing. Instead of forecasted amounts, Minnesota’s 2016 sustainability plan used duplicated budget and enrollment amounts, in which certain budget numbers, such as equipment and supply costs, were the same from 2016 to 2019. Hawaii, Minnesota, and New York state officials said that CMS did not provide them policies or procedures to help guide state marketplaces on aspects of sustainability planning, such as budget and enrollment forecasts.
Minnesota and Oregon officials also reported that completing 5-year budgets was difficult due to problems with forecasting, and that budgets beyond the regular 3- and 2-year state budgetary cycles, respectively, were difficult to formulate. CMS officials further stated that Hawaii’s transition to a federally facilitated marketplace for plan year 2017 had precluded any future budgets or enrollment forecasts. CMS officials within CCIIO and CMCS stated that they communicate with states to resolve issues in providing complete budget forecasts. The officials acknowledged that states face uncertainties because the marketplaces are new programs, and stated that they are considering whether they should ask for 3-year budgets versus 5-year budgets. While asking states for a 3-year budget instead of a 5-year budget may make it less burdensome for states to provide complete budgets, the shorter time frame may not fully inform CMS oversight of the long-term financial sustainability of marketplaces, which are new systems that face multiple uncertainties. Further, if CMS does not take steps to ensure that states provide sustainability plans with complete 5-year budget forecasts, per its guidance, then CMS may not be fully informed of the state-based marketplaces’ sustainability factors. Incomplete sustainability plans may also limit the agency’s ability to assess and respond to state marketplace sustainability risks.

PPACA and HHS regulations require state-based marketplaces to provide an annual independent financial audit report, to include activities, receipts, and expenditures, for CMS’s oversight. Also, submission of the independent financial audit report is a requirement of CMS’s annual reporting SMART process. HHS regulations stipulate that audit reports, in addition to accounting for receipts and expenditures, should follow generally accepted government auditing standards (GAGAS). According to GAGAS, audit reports should include information such as a review of compliance, internal controls, and related financial policies and procedures. Further, according to CMS guidance, the independent financial audit reports are to provide CMS with insight into marketplace IT self-sustainability efforts and compliance, since IT costs are a large part of the marketplace budgets. This guidance calls for the agency to use the reports to inform sustainability risk assessments and responses provided to the states.

While CMS took steps to collect and review marketplace financial audit reports, it did not ensure that the four selected states always provided audit reports or that the reports were complete. CMS reviewed New York’s and Oregon’s relevant financial audit reports for 2015, which included the information that GAGAS required, such as information on compliance and internal controls. However, CMS did not ensure that Hawaii provided a financial audit report for 2015. In addition, although Minnesota submitted a financial audit report in 2015, CMS did not ensure that the report included all necessary information. For example, the Minnesota report was not specific to the marketplace business operation or its IT platform and, instead, included financial activities for all state programs and activities. While marketplace receipts and expenditures were included, a review of marketplace compliance, internal controls, and financial policies and procedures was not.
Since these reviews were not included in Minnesota’s submission of its financial audit report to CMS that year, CMS did not have visibility into these aspects of the state’s marketplace-specific activities or complete insight into its sustainability efforts. In discussing their reporting, Hawaii officials in the Office of Community Services and Department of Human Services said they are not required to develop a financial audit report now that the state has switched over to being a federally facilitated marketplace. Further, according to the state’s officials, a court ruled that an independent financial audit of the defunct Hawaii Health Connector would be unfeasible and impractical since the Connector’s records are not amenable to audit, nor is there a Connector official available to sign off on the audit.

A Minnesota marketplace official responsible for compliance and program integrity stated that MNsure was in compliance with PPACA and HHS regulations requiring the submission of financial audit reports, since its report included marketplace financial statements. In addition, the official stated that the cost to complete an independent financial audit specific to the marketplace would be an undue burden. The official added that CMS had accepted the overall state financial audit report as sufficient. CMS officials stated that the agency accepted the overall Minnesota state audit report given Minnesota’s challenges with reporting and said they were working with the state to mitigate challenges in providing the required audit report. Further, these officials stated that relevant audit information may be gathered through alternate channels and that they do not plan to enforce Minnesota’s compliance in providing an independent financial audit report specific to the marketplace in the short term. CCIIO officials further stated that, while they can communicate a lack of compliance to the states, they are statutorily limited in regard to enforcement mechanisms for state marketplaces. The officials added that, in the long term, they hope to address reporting limitations through providing additional guidance to the states.

Nevertheless, although CMS took steps to ensure that the states submitted annual financial audit reports, the agency did not ensure the states followed regulations and guidance, which decreased CMS’s visibility into state marketplace sustainability and could increase sustainability risks. While individual states may have unique situations that preclude the submission of complete independent financial audit reports, the law and guidance are clear that states must submit annual financial audit reports that follow GAGAS requirements, including a review of compliance, internal controls, and related financial policies and procedures. Further, although CMS officials stated that alternate channels may be used to gather audit information, the agency’s guidance specifically refers to the annual financial audit reports as one of the primary sources for evaluating state marketplace sustainability. If CMS does not ensure that states provide complete annual financial audit reports, it may not have visibility into marketplace IT-related financial activities such as receipts, expenditures, internal controls, and financial policies and procedures. A lack of relevant financial audit reports can lead to uninformed CMS sustainability risk assessments and responses, which could increase state marketplace sustainability risks.
Standards for Internal Control calls for agencies to assess and respond to risk using clearly defined measurable terms, objectives, and risk tolerances, to include a clear categorization process, reliable information, and a clear response to risks. Clearly defined objectives state what is to be achieved in specific, measurable terms, as well as how the objectives will be achieved, who will achieve them, and in what time frames. In addition, clearly defined risk tolerances set the acceptable level of variation in performance relative to the objective. These risks should be assessed using relevant data from reliable sources in a timely manner based on identified information requirements. According to CMS policy, these required information sources include state sustainability plans and independent financial audit reports. Additionally, agencies should respond to risk based on the significance of the risk. According to CMS guidance, these processes address issues that may affect the required financial self-sustainability of state marketplace IT.

In order to assess and respond to risk, CMS had established processes for the following activities for all state-based marketplaces, including those that use the federal platform:

Collects sustainability information. CMS uses financial audit reports and sustainability plans to create sustainability risk assessments and conduct consults for each state-based marketplace.

Assesses state marketplace risk. The agency conducts risk assessments by reviewing factors such as marketplace IT costs, functionality, and operations stability. It then scores states’ marketplace self-sustainability risk factors and categorizes marketplaces based on a risk rating of low, medium, medium-high, or high. Low risks indicate a stable IT infrastructure and budget, while higher risks indicate challenges to marketplace sustainability, such as insufficient projected revenues, insurance issuer volatility, or IT system challenges.

Responds to assessed risk. CMS responds to identified marketplace sustainability risk, including IT system sustainability risks, through sustainability consults. The agency discusses marketplace sustainability with the state, addresses specific issues that may impact sustainability, and identifies areas for technical assistance. Agency officials said they also provide assistance as needed through phone calls with state officials.

Follows up with the state. CMS provides the results of consults to the states in a site visit report. This report conveys action items concerning sustainability compliance and/or recommendations for self-sustainability based on industry best practices. In addition, according to CCIIO officials, marketplaces assessed at medium-high or high risk receive more intensive assistance from the agency.

While CMS took steps to establish a sustainability risk assessment process, there were numerous shortcomings in the agency’s implementation of that process for the four selected states. Specifically, the agency’s sustainability risk assessment and consult activities were not based on a fully defined risk process, to include having fully defined measurable terms, a clear categorization process, reliable information, and a clear response to risk. CMS’s procedures outlined the steps in the agency’s sustainability risk assessment process, but the procedures did not always define risk factors in clear and measurable terms. For example, the procedures did not define the risk factor for enrollment target tolerances in quantifiable terms.
Instead, the agency used the terminology “close to expected” or “lower than expected.” Additionally, while the agency assigned the four selected state marketplaces to categories of sustainability risk, there was no clear categorization process defining how risk assessment scores of low, medium, medium-high, or high risk were obtained. Specifically, CMS did not provide documentation of defined score thresholds, such as what score out of the maximum determined the sustainability risk categorization. It also did not define a consistent baseline for risk scores in its documentation or assessments; we identified three different baselines in CMS’s risk assessments of the four selected states. As an example, we found that CMS assessed one state based on a scale of 59 points of weighted risk factors, while other states were assessed on scales of 67 or 76 points. In addition, as mentioned previously, the sustainability plans and financial audits from the selected states that were used for the risk assessments were not always complete and, thus, were not always reliable sources of information.

Further, CMS did not provide documentation defining a process for responding to assessed risks, and its response to risk as documented in the site visit reports was not based on the significance of the risk. The site visit reports addressed no more than one risk factor for each of the four states we reviewed, despite the agency categorizing some of the selected states at a medium-high or high risk level, with multiple assessed risk factors. For example, in one state’s site visit report, CMS provided a recommendation for a lower weighted risk factor (marketplace reserves). However, the agency did not address other factors that it designated in the state’s associated risk assessment as being of highest risk, such as a limited revenue source or IT functionality shortcomings. Moreover, the agency’s policy and procedure for risk response did not vary based on risk, so the internal guidance for sustainability assistance was the same for a high-risk marketplace as for a low-risk marketplace.

CCIIO officials said that risk assessments were never meant to be quantitative and that they used their best judgment after looking at a number of areas to rate marketplace risks. Further, the officials said they did not tailor responses to different levels of assessed risk ratings because the assessments were considered an internal guide to provide CMS leadership with a general idea of a marketplace’s sustainability risk and were used as a tool to determine what technical assistance states needed. However, if CMS does not take steps to define sustainability objectives in measurable terms, to include a clear marketplace risk categorization process, use of relevant data from reliable sources, and responses based on risk significance, then the agency may not correctly assess and respond to risks. Accordingly, if marketplace sustainability is not correctly assessed and responded to, CMS may not be able to assist states in achieving their required financial self-sustainability.

Among other things, leading practices emphasize the importance of having performance metrics and developing performance plans to identify the most important metrics to guide decisions and measure IT performance.
In addition, CMS’s guidance calls for it to ensure that states have documented performance measurement plans and to conduct operational analysis reviews to examine the operating status of state marketplace IT systems using key performance indicators. Also, during the open enrollment period, CMS requires states to submit a report of weekly performance indicators, which includes some metrics related to the operational performance of marketplace systems, such as the number of applications completed electronically, the total number of website visits, and website offline time, among others. However, CMS did not consistently monitor the performance of IT systems for Minnesota and New York—the two selected states that operated state-based marketplace systems. Specifically, CMS did not ensure that the two states had developed, updated, and followed performance measurement plans. In addition, it did not conduct reviews to analyze the operational performance of the selected states’ marketplace IT systems against an established set of performance parameters to evaluate whether the states were performing in an efficient and effective manner. As for IT metrics that were collected from the states, CMS did not link these metrics to performance measurement goals or establish targets for performance. In applying its Enterprise Life Cycle framework to monitor states’ marketplace IT systems, CMS is to ensure, as part of its operational readiness review, that states have documented performance measurement plans. These plans are to be used to assess the business value of states’ marketplace IT. Further, the Enterprise Life Cycle framework requires that states evaluate performance metrics and share their results with responsible parties, such as federal officials and state project and business managers. CMS also provides states a template that they can use, which includes a section on project measurement objectives, performance metrics, and thresholds, which set parameters for target performance. In addition, our previous work has emphasized the importance of performance metrics to assess the actual results, effects, or impact of a program or activity compared to its intended purpose. We also emphasized that leading practices include the development of plans to identify the most important metrics to guide decisions, and document goals and metrics to measure IT performance. Our previous work noted that the performance measurement approach should be holistic, or seen in terms of the operation as a whole, in order to identify a comprehensive suite of metrics. Further, our work has stated that the performance measurement approach should be continuously assessed and followed by regularly reviewing metrics, goals, and targets; and adjusting these as necessary. CMS reviewed the two selected states’ progress on marketplace IT projects, but had not ensured that these states documented, updated, and followed their performance measurement plans to demonstrate that they had identified and selected the most important metrics to guide decisions. In addition, CMS had not ensured the selected states continuously assessed and adjusted performance metrics and targets as appropriate. Minnesota: CMS did not ensure that Minnesota updated and followed its performance measurement plan to show that the state had continuously assessed its performance measurement approach and adjusted metrics and targets as necessary. 
Specifically, in the September 2013 operational readiness review, CMS noted that Minnesota officials had partially identified performance metrics in their project planning documentation, and had partially evaluated performance metrics and shared results with responsible parties. CMS also noted that the state had not developed or identified its operational metrics. Subsequently, to address CMS’s observations, in December 2013, Minnesota drafted a performance measurement plan to identify performance measurement goals and targets. The December 2013 plan included technology goals such as ensuring that at least 90 percent of users have real-time, online access to the marketplace website and decreasing code defects, or software errors, per release by at least 60 percent. Information in the plan stated that it was to be reviewed quarterly and updated as needed. However, in June 2016, Minnesota officials from the state’s IT Services organization said that the 2013 performance measurement plan was likely developed by a contractor and had not since been updated or followed. According to the officials, while the state did not follow its performance measurement plan, the state was monitoring marketplace IT metrics related to its technology goals, such as system availability and unexpected down time. In addition, Minnesota developed a service-level agreement that included target metrics for system availability, although it did not require the reporting of defects. Neither CMS nor state officials provided evidence that the state was actively monitoring these metrics related to their technology goals. New York: CMS had not ensured that New York developed a performance measurement plan. In August 2013, CMS provided New York a satisfactory rating for evaluating performance metrics and sharing results with responsible parties; however, it noted that the state had only partially documented performance metrics in a plan. During this review, CMS also stated that New York had not developed or identified operational metrics. Although New York did not have a performance measurement plan, according to state officials, performance metrics were documented in its June 2016 oversight and monitoring plan. This plan included metrics for the contractor responsible for New York’s marketplace IT operations. Information in the oversight and monitoring plan referred to IT metrics such as system downtime, timeliness of file processing, real-time transaction processing, backup and recovery implementation, failover and fallback capability, and disaster recovery infrastructure. In addition, New York developed a service-level agreement that included targets. For example, the service-level agreement required that New York’s marketplace system be available at least 98.5 percent of each month. However, the oversight and monitoring plan and service-level agreement did not include IT performance goals or related metrics. For example, the oversight and monitoring plan did not include metrics that New York IT officials said they use to monitor the performance of marketplace IT systems, such as metrics related to defect creation and remediation, the number of batch jobs, electronic data interface files that were rejected, and notices and related backlogs. These metrics were not documented or tied to performance goals or targets in a plan. 
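As a minimal, hypothetical sketch rather than New York's or CMS's actual tooling, the following shows how documented metrics can be tied to stated goals and explicit targets so that reported values can be evaluated rather than only collected. The 98.5 percent availability target echoes the service-level figure noted above; the other goals, metric names, and numbers are invented for illustration.

```python
# Hypothetical sketch: the goals, metrics, and targets below are for illustration only.

# Each documented metric carries the goal it supports, an explicit target,
# and the direction in which better performance lies.
PERFORMANCE_PLAN = [
    ("monthly_system_availability_pct", "consumers can readily access enrollment systems", 98.5, True),
    ("open_defect_backlog_count", "software quality supports reliable eligibility processing", 50, False),
    ("batch_file_rejection_rate_pct", "timely and accurate file processing", 2.0, False),
]

def evaluate(reported: dict) -> list:
    """Compare reported metric values against their documented targets."""
    results = []
    for metric, goal, target, higher_is_better in PERFORMANCE_PLAN:
        value = reported.get(metric)
        if value is None:
            results.append((metric, goal, "not reported"))
            continue
        met = value >= target if higher_is_better else value <= target
        results.append((metric, goal, "meets target" if met else "misses target"))
    return results

# Prints one tuple per metric: meets target, misses target, or not reported.
for row in evaluate({"monthly_system_availability_pct": 99.1, "open_defect_backlog_count": 73}):
    print(row)
```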
In addition, according to New York marketplace officials, the goal of the IT service-level agreement metrics for its contractor is to ensure that consumers can readily and easily access the health insurance application and systems needed to enroll in coverage. However, these stated goals were not documented in the oversight and monitoring plan or clearly linked to the IT performance metrics. CMS officials from OTS said states were expected to address the issues that it identified in its reviews, such as partially completed performance measurement plans, and that the agency monitors marketplace system performance through daily calls with states. The officials said that Minnesota had not updated its performance measurement plan as requested and that it falls upon the state to ensure that service level agreement metrics are met, and if not met, that corrective actions will be taken. In addition, the officials stated that, instead of a performance measurement plan, the agency reviewed New York’s IT contractor service level agreements, which represented the state’s operational metrics. However, CMS did not ensure that these metrics were tied to performance goals in a plan in accordance with leading practices. Further, according to CMS officials, the Enterprise Life Cycle required project measurement plans in the planning and design phase, but not in the operations and maintenance phase, and operational states were not required to submit updated performance measurement plans. However, while CMS’s Enterprise Life Cycle guidance did not explicitly require states to update performance measurement plans during operations and maintenance, leading practices stress that it is important to have updated performance measurement plans to properly assess effectiveness in meeting stated operational goals. Because CMS had not ensured that states documented, updated, and followed performance measurement plans for their state marketplace systems, the agency did not have the assurance that states had taken a holistic approach to developing performance metrics to assess actual results as compared to intended goals. Further, without reviewing states’ plans, CMS could not ensure that states had carefully identified and selected the most important metrics to guide decision making and organizational operations, or that states continuously assessed and adjusted performance metrics and targets as appropriate. As part of the Enterprise Life Cycle framework, to monitor system performance, CMS and states are required to conduct an operational analysis review to examine the operating status of the marketplace system through a variety of key performance indicators and determine whether the system is performing in an efficient and effective manner. In addition, according to leading practices, operational analysis is a key management tool to examine the performance of an operational initiative and measure that performance against an established set of performance parameters. The operational analysis should consider how objectives could be better met and how costs could be saved. CMS had not conducted operational analysis reviews for states, including Minnesota and New York. Instead of conducting operational analysis reviews, the agency developed OTS reports in November 2015 regarding selected states’ systems to prepare for the 2016 open enrollment period. The reports included discussions of these states’ system performance. 
For example, the OTS report for Minnesota included a summary of the state’s system functionality and performance. Specifically, the report noted that Minnesota discovered quality issues with the software used for eligibility determinations and put a plan in place to remedy quality issues. In addition, the report noted that Minnesota was using scenarios to test site performance and assessing whether the system had sufficient capacity to meet business needs. For New York, the OTS report included a summary of the state’s system functionality, system performance, and stability of application bandwidth to handle peak volumes. It discussed business value, such as the number of eligibility determinations (2.7 million eligibility determinations, and over 78.9 percent enrolled in a health care plan). The report also identified risks for the open enrollment and estimated operating costs for the system. Nevertheless, while these reports discussed metrics on New York’s performance, such as the number of eligibility determinations, and Minnesota’s consideration of performance issues, they did not include key performance indicators to show whether the states were performing in an efficient and effective manner. Specifically, the reports did not include metrics or targets that might define performance success. For example, the reports did not discuss metrics related to application processing time or application backlogs in the marketplace IT system. CMS officials from OTS and CCIIO said that they had not conducted operational analysis reviews because they conduct open enrollment readiness reviews instead. The officials also said that states were still developing their systems for the 2017 enrollment period. Additionally, CMS officials said that they had not established key performance indicators because states are responsible for developing such measures and adjusting performance, and that the key performance indicators would be in the states’ service-level agreements. The officials added that throughout the year, OTS meets with states biweekly to track their progress on their software releases, and discuss and assist with resolution of any issues they may be encountering. However, the open enrollment readiness review included operational areas such as IT system and business functions, but did not note discussion of performance in terms of key performance indicators or other elements of operational analysis, such as how objectives could be better met or costs could be saved. In addition, while officials from the states in our review said they were still developing or updating certain aspects of their marketplace systems, their marketplaces were operational. Further, while service-level agreements and other metrics can be useful indicators, conducting an operational analysis is an opportunity to perform qualitative analysis of the utilization of technology in a holistic and strategic way in order to see where the states are relative to their performance indicators. Because CMS had not conducted operational analysis reviews to monitor the performance of the selected states’ marketplace IT systems in a systematic way, it had limited assurance that these states’ systems were performing in an effective and efficient manner. During the open enrollment period, CMS requires states to submit a report of weekly performance indicators. This report is to include metrics related to the operational performance of marketplace systems. 
For example, CMS requests that states include metrics such as the number of applications completed electronically and on paper. In addition, according to our prior work, performance metrics should be linked to the program's IT performance measurement goals and define what is important to the organization and what the baseline and target performance should be in order to determine how efficiently and effectively the systems are performing. Metrics can be used by an organization to define success, structure improvement efforts, and identify early warning indicators of problems. Using the open enrollment weekly indicator template, CMS collected IT metrics from the two selected states that operated marketplace systems. The template included metrics related to marketplace systems' operational performance, such as the number of applications completed electronically and the total number of website visits, among other metrics. However, the metrics were not clearly linked to performance measurement goals and did not include baselines and targets to indicate how effectively the systems were performing. Specifically, CMS did not define performance measurement goals, such as timely processing of applications, and therefore was unable to link metrics to those goals. In addition, CMS did not set, nor require states to set, performance baselines and targets for metrics such as the number of applications completed by electronic means or on paper, or the duration of times when the website was offline. According to CCIIO officials, the agency collects data to effectively and consistently monitor state-based marketplace performance and to identify any barriers to eligibility and enrollment in the marketplaces. The data are used to inform programmatic understanding of operations, but are not collected to diagnose or interpret IT system performance. According to these officials, because states have different internal goals, CMS assists states in overseeing their own marketplace IT systems based on each state's needs, goals, and resources. Because CMS and the states did not clearly link metrics to their own performance measurement goals and did not include baselines or targets for the marketplaces, the agency is limited in its ability to monitor whether states' systems are performing efficiently and effectively. In addition, without baselines or targets, the agency may not have the data to accurately monitor progress for continual improvement. Through periodic oversight and guidance, CMS offered assistance to the states we selected for review that sought to transition their health insurance marketplaces to the federal IT platform. This oversight included reviews of transition plans and milestones and weekly calls between the agency and state officials to discuss transition progress. However, CMS guidance for states transitioning to the federal platform was not documented and finalized until after two states had already initiated their transitions. Those two states primarily utilized federal Medicaid funding to make the associated changes to their Medicaid systems to connect with the federal IT platform. The states encountered challenges in their transitions in part because CMS had not issued its transition-related guidance until after these states had transitioned. CMS assisted selected states with their efforts to financially sustain the development and operations of their marketplaces (including supporting IT systems) by reviewing sustainability plans, reviewing annual independent financial audit reports, and conducting sustainability risk assessments.
However, CMS did not provide consistent oversight of the four selected states' programs because the agency did not take steps to collect complete sustainability plans or financial audit reports from all of these selected states. In addition, CMS did not clearly define its sustainability risk assessment process to assist states. Until CMS addresses these issues, the agency's assistance with, and assessments of, states' marketplace sustainability may not fully account for risks that could impact or interrupt state marketplace IT operations. CMS's guidance includes steps to monitor the performance of state-based marketplace IT system operations, including the collection of related IT performance metrics such as electronic enrollments and website traffic volume. However, its oversight did not ensure that selected states' system performance was monitored in a way that was consistent with its guidance to states and leading practices. Specifically, CMS did not ensure that two selected states (Minnesota and New York) had developed, updated, and followed performance measurement plans. In addition, the agency did not conduct operational reviews to determine if marketplace IT systems for the two selected states were operating in an efficient and effective manner; it also did not establish performance measurement goals or targets for certain metrics it collected from states. As a result, CMS has been limited in its ability to determine whether state marketplace IT systems are performing efficiently and effectively and to provide early warning of potential problems for the overall state marketplace IT systems' service delivery to consumers. We are making the following six recommendations to the Secretary of Health and Human Services to direct the Administrator of the Centers for Medicare & Medicaid Services to take action.

1. The Administrator of CMS should take steps to ensure that state-based marketplace annual sustainability plans, to the extent possible, have complete 5-year budget forecasts. (Recommendation 1)

2. The Administrator of CMS should take steps to ensure that all state-based marketplaces provide required annual financial audit reports which are in accordance with generally accepted government auditing standards. (Recommendation 2)

3. The Administrator of CMS should take steps to ensure that marketplace IT self-sustainability risk assessments are based on fully defined measurable terms, a clear categorization process, and a defined response to high risks. (Recommendation 3)

4. The Administrator of CMS should take steps to ensure that states develop, update, and follow performance measurement plans that allow the states to continuously identify and assess the most important IT metrics for their state marketplaces. (Recommendation 4)

5. The Administrator of CMS should take steps to conduct operational analysis reviews and systematically monitor the performance of states' marketplace IT systems using key performance indicators. (Recommendation 5)

6. The Administrator of CMS should take steps to ensure that metrics collected from states to monitor marketplaces' operational performance link to performance goals and include baselines and targets to monitor progress. (Recommendation 6)

We provided a draft of this report to HHS for comment. In its written comments (reproduced in appendix II), the department concurred with two of our recommendations, partially concurred with two recommendations, and did not concur with two recommendations.
HHS concurred with our second and third recommendations which, respectively, called for CMS to ensure that all state-based marketplaces provide required annual financial audit reports that are in accordance with generally accepted government auditing standards, and ensure that the marketplace IT self-sustainability risk assessments are based on fully defined measurable terms, a clear categorization process, and a defined response to high risks. HHS stated that it will continue to provide technical assistance to state marketplaces regarding independent financial audits. The department also stated that it will refine its marketplace self-sustainability risk assessment processes to provide greater insight into the state marketplace sustainability efforts and to identify areas where states may need assistance. Taking steps to provide technical assistance to the states is important and CMS’s efforts to refine the risk assessment processes can provide greater insight into the state marketplace sustainability efforts and areas of needed assistance. HHS partially concurred with our fourth recommendation that CMS ensure that states develop, update, and follow performance measurement plans that allow the states to continuously identify and assess the most important IT metrics for their state marketplaces. While HHS did not specifically identify which aspects of our recommendation it concurred with and which it did not concur with, the department stated that, as part of its Enterprise Life Cycle framework, state marketplaces were required to submit performance measurement plans during the planning and design phases. The department also stated that it will continue to monitor the state marketplaces’ IT metrics in the implementation phase. For the state marketplaces that are in the operations and maintenance phase, it stated that each marketplace is accountable for managing and reporting its own IT metrics in accordance with federal and state law. The department also emphasized its consideration of states’ variations in marketplace systems and reporting capabilities and the associated burden of reporting IT metrics. However, as we noted in our report, CMS and certain states were not always able to provide evidence of performance measurement plans that were in accordance with the agency’s policy, nor was evidence always provided that these states updated and followed their performance measurement plans according to best practices. While CMS required submission of performance measurement plans in the planning and design phases, best practices state that performance measurement plans should be continuously updated and followed. Without ensuring that the states documented, updated, and followed performance measurement plans, CMS may not have reasonable assurance that the states established IT metrics to assess their results compared to their intended goals for their marketplace systems. HHS also partially concurred with our sixth recommendation that CMS ensure that metrics collected from the states to monitor marketplaces’ operational performance link to performance goals and include baselines and targets to monitor progress. HHS did not specifically identify which aspects of our recommendation it concurred with and did not concur with; however, the department stated that, while it requests performance measures from the state marketplaces, once the marketplaces are operational, states are responsible for monitoring their own performance measures. 
HHS also stated that it will continue to review IT metrics of state marketplaces in the implementation phase of their systems, but emphasized the burden on states and variations in state system reporting capabilities. However, as we noted in our report, CMS did not ensure that the metrics it is collecting from the states are linked to performance goals as suggested by best practices. Without this linkage, the agency may continue to be limited in its ability to monitor whether the state systems are performing efficiently and effectively. Additionally, CMS may miss the opportunity to refine its current IT metrics collection to better balance its need for visibility into states’ performance without unnecessarily burdening states. HHS did not concur with our first recommendation that CMS ensure that state-based marketplace annual sustainability plans, to the extent possible, have complete 5-year budget forecasts. The department stated that it has updated its requirements, and is now requesting 2-year budget forecasts instead of 5-year budget forecasts. It also stated that this is part of a new streamlined and simplified process to collect timely, accurate, and relevant data while taking into consideration the burden on states and the variations in state budget cycles. However, CMS did not provide documented evidence of this process or justification for stating that a 5- year budget is not a reasonable time frame for sustainability planning. This also contradicts previous CMS blueprint guidance for state marketplace approval. While asking states for a 2-year budget instead of a 5-year budget may streamline the process and be less of a burden on states to provide complete budgets, the shorter time frame may not fully inform CMS oversight of the long-term financial sustainability and associated risks for marketplaces, which are new systems that face multiple uncertainties. Further, if CMS does not take steps to ensure that states provide sustainability plans with complete 5-year budget forecasts, per its 2016 sustainability guidance, then it may not be fully informed of the state- based marketplaces’ sustainability factors. Incomplete sustainability plans may also limit the agency’s ability to assess and respond to state marketplace sustainability risks. Thus, we continue to believe that CMS should ensure that state-based marketplace sustainability plans have, to the extent possible, complete 5-year budget forecasts. Lastly, HHS did not concur with our fifth recommendation that CMS conduct operational analysis reviews and systematically monitor the performance of states’ marketplace IT systems using key performance indicators. The department stated that it conducts Open Enrollment Readiness Reviews to assess marketplace key performance indicators, which, according to CMS officials, are similar to operational analysis reviews. However, as we noted in our report, Open Enrollment Readiness Reviews did not systematically report the key performance indicators or include discussions of other elements of operational analysis reviews, such as how objectives could be better met or costs could be saved. In not ensuring that these reviews include clearly identified key performance indicators, CMS may miss an opportunity to perform strategic analysis of the states’ utilization of their marketplace systems and it may continue to have limited assurance that these states’ systems are performing in an effective and efficient manner. 
Therefore, we continue to believe that CMS should conduct operational analysis reviews, as required by its guidance, to systematically monitor the performance of states’ marketplace IT systems using key performance indicators. HHS also provided technical comments, which we incorporated in the report as appropriate. We also provided relevant excerpts of this product to each of the four states included in our review—Hawaii, Minnesota, New York, and Oregon—and received responses, via e-mail or in writing, from all four states. In written comments, the State of Hawaii’s Department of Labor and Industrial Relations noted that the Hawaii Health Connector’s IT provider would be turning over consumer data to the state by the end of June 2017. The department also indicated that the sustainability budget discussed in our findings was prepared by the Hawaii Health Connector and not the state. The department further commented that the state submitted a sustainability budget in April 2016 that included complete forecasts through 2019, and that this budget was updated in May 2016. Further, it said that the updated budget only reflected 2016 as Hawaii planned to operate as a federally facilitated marketplace and no longer needed to plan for financial sustainability. We noted in our report that the Hawaii Health Connector’s IT provider plans to turn over consumer data to the state by the end of June 2017. We also recognize that the April 2016 sustainability budget had the required budget forecast. However, our discussion of the Hawaii sustainability budget referred to the more recent and updated sustainability budget that the state prepared in May 2016. With regard to Hawaii’s April 2016 budget, the budget was based on revenue assumptions, such as state funding from the legislature that had not been approved at the time of its submission and, thus, required revision once those uncertainties were resolved in May 2016. With regard to the applicability of our findings for the May 2016 budget, Hawaii did not officially begin the transition to a federally facilitated marketplace until June 2016, and was still operating as a state-based marketplace at the time the May 2016 budget was revised. In addition, as indicated by CMS’s 5-year budget template and sustainability guidance, the sustainability budget should include forecasted years. According to CMS officials, they use this data to inform its sustainability risk assessments. By not showing the total effect of certain assumptions or outcomes across all forecasted years, such as the decision of the state legislature to not fund marketplace operations, the marketplace missed an opportunity to demonstrate the total negative impact on the marketplace sustainably and associated budget numbers. In that way, a fully forecasted budget could have served as evidence of Hawaii’s justification for transitioning to a federally facilitated marketplace. The State of Hawaii Department of Labor and Industrial Relations’ comments are reprinted in appendix III. In written comments, the state of Minnesota’s MNsure marketplace noted that the organization continues to focus on making improvements related to accountability and transparency, including in areas such as federal and state audit reporting for the marketplace. Minnesota provided additional comments in which the marketplace noted that its 3-year budget sustainability plans show that the marketplace is and will be sustainable. 
While Minnesota’s marketplace may have policies that do not align with CMS’s requirements for projected budget time frames, it is nonetheless important that the states submit information as required to the agency so that CMS officials can perform oversight of states’ marketplace sustainability in a consistent manner. Further, the shorter time frame may not fully inform CMS oversight, including risk assessments and responses, of the long-term financial sustainability for marketplaces. MNsure also disagreed with our characterization of Minnesota’s independent financial audit as not specific to the state’s marketplace. The marketplace provided details, including statements of the sufficiency of its state financial audit report because it is in adherence with Minnesota’s state statutes and financial policies. Additionally, MNsure officials stated that alternate audits may detail relevant audit information. However, CMS’s guidance specifically requires states to develop an annual financial audit report specific to the marketplace as one of the primary sources for evaluating a state marketplace’s financial sustainability. As we stated in our report, if MNsure does not provide complete financial audits specific to the marketplace, CMS may not have the necessary transparency into marketplace IT-related financial activities such as receipts, expenditures, internal controls, and financial policies and procedures. MNsure’s comments are reprinted in appendix IV. In addition, technical comments provided by marketplace officials were incorporated into our final report as appropriate. In e-mail comments, the Executive Director of the New York State of Health marketplace disagreed with our conclusion that the New York marketplace did not have a performance measurement plan that directly tied to goals. The Executive Director stated that the state’s overall goals were to enroll New Yorkers in coverage and reduce the rate of uninsured persons in the state. However, these stated goals were not documented nor were they tied to any specific metrics, as noted in our report. Until New York’s marketplace has documented these goals in a performance measurement plan with clear ties to its metrics, CMS may not have visibility into New York’s marketplace performance metrics which state officials said guide their decision making and operations, or be able to ascertain that the state continuously assessed and adjusted performance metrics and targets as appropriate. The Executive Director also noted that the New York marketplace could not reconcile the amounts provided by CMS for their state. According to their records, the New York marketplace had spent or planned to spend $487.2 million in marketplace grants that included $182.8 million for IT costs. Additionally, the Executive Director stated that CMS had deobligated $64.8 million. However, the amounts provided by New York were as of April 2016. Our report included more recent data obtained from CMS in October 2016 that provided a consistent view of spending for the four states in our analysis. Other technical comments provided by marketplace officials were incorporated into our final report as appropriate. In e-mail comments, the Oregon Interim Administrator provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 28 days from the report date. 
At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Should you or your staffs have questions about this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to: (1) describe what actions the Centers for Medicare & Medicaid Services (CMS) has taken, if any, to assist states that have chosen to transition to a marketplace IT platform different from the one they originally used and identify the costs and challenges for states in making this transition; (2) assess what actions CMS has taken to assist selected states' plans to ensure that the development and operations of marketplace IT systems can be financially self-sustained; and (3) assess the steps that CMS has taken to monitor the performance of the states' marketplace IT systems. To address the objectives, we reviewed marketplace activities conducted by CMS and four selected states: Hawaii, Minnesota, New York, and Oregon. We selected these four states from the 17 states that operated their own marketplaces as of March 2016. To make the state selections, we considered four selection factors for the plan year 2016 enrollment period: total enrollment, total federal marketplace grant dollars, a previous GAO review, and whether or not the state transitioned its marketplace to the federal platform. Specifically, we first sorted states operating marketplaces by enrollment levels, from highest to lowest, based on plan year 2016 numbers reported by the Department of Health and Human Services (HHS). We then divided the states into four groups. Within each group, we sorted the states from highest to lowest by the total amount of federal marketplace grant funds awarded and selected the state with the highest amount of awarded grant funding. In two cases, we selected the state with the second highest federal marketplace grant award in the group because the states with the highest award levels had been included in a recent GAO review of state health insurance marketplace IT security and privacy. We also reviewed selected states to verify that the states used different systems integrator contractors. The selection resulted in two states that transitioned to using the federal platform (Hawaii and Oregon) and two that remained the same (Minnesota and New York). The four selected states constitute a nongeneralizable sample and, thus, findings from our assessments of these states cannot be used to make inferences about the full population of all state marketplaces. To assess the reliability of CMS's data on state marketplace enrollment figures, we reviewed the agency's data, interviewed state marketplace officials for the four selected states, and asked how the state-reported enrollment figures were utilized in their sustainability plans. We determined that the data were sufficiently reliable for our purposes. To assess the reliability of CMS's data on grant funds awarded and state-reported IT spending to establish, support, and connect to marketplaces, we assessed the reliability of the systems used to collect the information.
We asked officials responsible for entering and reviewing the grants information in these systems a series of questions about the accuracy and reliability of the data. Among the sources of data used for our study, we reviewed a spreadsheet compiled by CMS's Center for Consumer Information and Insurance Oversight (CCIIO) officials that contained state-reported grant funding data and marketplace IT project status information drawn from two separate information systems: CMS's Grant Solutions and the Payment Management System. The spreadsheet was a consistent source of information that reflected the same cost factors for all states as of October 2016. Specifically, the spreadsheet tracked, among other things, the type and total amount of grant funding provided and available to each state, deobligated grant funding, and the time period for expending those funds. We also reviewed the data to determine if there were any outliers or other obvious errors. For any anomalies in the data, we followed up with CMS officials to either understand or correct those anomalies. We determined that the data were sufficiently reliable for our purposes and noted any limitations in our report. While our report discusses state-reported IT spending based on CMS data, we did not verify the accuracy of the data states reported to CMS. To address the first objective, we obtained and analyzed CMS's transition guidance that was distributed to assist all states. We also reviewed the actions CMS and states performed, such as communications and transition planning for the two selected states—Hawaii and Oregon—that transitioned from state-based to federal marketplace IT systems. To identify transition guidance and transition costs for the states, we also observed CMS's and the selected states' management tools, such as the Collaborative Application Lifecycle Tool, State-based Marketplace Annual Reporting Tool (SMART), Payment Management System, and Grant Solutions, for reporting and tracking of grant funding. In addition, we reviewed relevant CMS and state budget and grant documentation to determine associated transition costs. We also interviewed state marketplace officials within the two selected states to further identify transition costs and challenges faced during their transitions. We also interviewed CMS officials regarding identified challenges and their actions to assist the states in addressing them. To address the second objective, we reviewed the four selected states' sustainability plans and the CMS sustainability guidance provided to the states. To identify states' plans for self-sustainability, we reviewed development plans, financial audits, and grant documentation. To identify CMS sustainability guidance, we reviewed the agency's procedures for financial audit and sustainability plan collection, risk assessments, and sustainability consults. We compared CMS's financial audit collection against applicable laws, regulations, and agency guidance. We also compared sustainability plan collection and risk assessments against leading practices. To understand how CMS monitors state sustainability, we observed its and the four selected states' web-based management tools, such as SMART, the Payment Management System, and Grant Solutions, for reporting and tracking of state marketplace self-sustainability. To determine the reliability of state sustainability plans, we reviewed the plans and relevant source data for anomalies and outliers, as well as related documentation including state audits and budgets.
We also interviewed CMS and selected state officials regarding collection and processing of the data. We determined that the data in the sustainability plans were sufficiently reliable except where noted in our report. To address the third objective, we reviewed CMS guidance provided to the state-based marketplaces, which called for the monitoring and tracking of the performance of states' marketplace IT systems. We identified steps CMS established for monitoring the performance of states' IT systems. We compared the steps established by CMS to leading practices identified in our prior work and by the Office of Management and Budget. In addition, where available, we reviewed the two selected states' system performance measurement plans and reports. This included Minnesota and New York—which operated state-based marketplace IT systems—and did not include Hawaii and Oregon—which relied on the federal marketplace IT platform operated by CMS and did not collect system performance metrics. We reviewed the use of tools such as CMS's Collaborative Application Lifecycle Tool and Open Enrollment Weekly Indicators reports to facilitate the monitoring of state marketplace operations and performance. We analyzed whether CMS ensured that states followed leading practices for IT performance measurement by assessing evidence of CMS's and the two selected states' marketplace performance measurement plans and reporting. To determine the reliability of state performance metrics reports, we reviewed the selected states' reports for anomalies or missing data and conducted interviews with CMS and selected state officials regarding the collection and processing of the data. We determined that the data in the performance metrics reports were sufficiently reliable. We also assessed whether CMS had conducted operational analysis reviews of the two selected states' marketplace IT systems using key performance indicators to determine whether states' systems were performing in an efficient and effective manner. For all three objectives, we supplemented the information and knowledge obtained from our assessments of the program, project, and technical documentation by holding discussions with relevant CMS officials and interviews with state officials at selected state sites regarding their marketplaces. We conducted this performance audit from February 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Tammi Kalugdan (Assistant Director), David Hong (Analyst in Charge), Alexander Anderegg, Christopher Businsky, Debra Conner, Sandra George, Conor McPolin, Brian Palmer, Monica Perez-Nelson, Priscilla Smith, Merry Woo, and Elizabeth Wood made key contributions to this report.

The Patient Protection and Affordable Care Act required the establishment of health insurance exchanges—or marketplaces—to allow consumers to compare, select, and purchase health insurance plans. States can elect to establish a state-based marketplace, or cede this authority to CMS to establish a federally facilitated marketplace.
Some states had difficulties with the rollout and operation of their marketplaces, and some states that struggled with IT implementation are now using the federal marketplace IT platform. GAO was requested to review CMS's and states' actions to implement the marketplaces. This report (1) describes CMS's actions to assist states that have chosen to transition to a different marketplace IT platform and identify costs and challenges those states incurred in making this transition; (2) assesses CMS's actions taken to assist selected states to ensure that the development and operations of marketplace IT systems can be financially self-sustained; and (3) assesses CMS's steps to monitor the performance of the states' marketplace IT systems. GAO reviewed documentation from CMS and four states selected based on different types of marketplaces, federal grants provided, and enrollment numbers, and interviewed CMS and the states' officials. The Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services (CMS) has offered assistance through providing periodic oversight and issuing regulation and guidance to states transitioning from state-based marketplaces to the federally based marketplace IT platform, including two states that GAO reviewed—Hawaii and Oregon—that had made that transition. While CMS provided these states with assistance, documented CMS transition guidance was not finalized until after the two states had completed their transition. The two states incurred costs of approximately $84.3 million, collectively, to transition to the federal platform. The two states' transition efforts included making changes to their Medicaid systems, with these states mainly relying on Medicaid matching funds from CMS to do this. While the selected states successfully transitioned, they encountered challenges during their transitions, due to accelerated transition time frames, difficulties reassigning marketplace responsibilities, delays in receiving approvals from CMS, and trouble accessing historical consumer data in previous vendor-developed marketplace IT systems. CMS took steps to assist Hawaii and Oregon, as well as two states that GAO selected for review that operated state-based marketplaces, Minnesota and New York, in developing plans for marketplace IT system sustainability. CMS assisted these four states by consulting with the states' officials and providing oversight of their sustainability plans, financial audit reports, and risk assessments. However, CMS did not fully ensure the states provided complete sustainability plans and financial audit reports. Further, CMS did not base its risk assessments on fully defined processes. These weaknesses limit CMS's oversight and assurance that it can be informed of the state marketplaces' sustainability efforts. Although CMS established a process to monitor the performance of state-based marketplaces, CMS did not consistently follow its processes. For example, CMS did not ensure that the two selected states, Minnesota and New York, had developed, updated, and followed their performance measurement plans. Also, CMS did not conduct reviews to analyze the operational performance of these states' marketplace IT systems against an established set of parameters. Further, while CMS collected IT performance metrics from the two states, such as the number of electronic enrollments and website traffic volume, it did not link state metrics to goals or establish targets for performance. 
These weaknesses limit CMS's ability to determine whether states' marketplace systems are performing efficiently and effectively and to provide early warnings of potential problems (see table). GAO recommends that CMS take six actions: ensure that states provide complete sustainability plans; complete financial audit reports; fully define its risk assessment process; complete updated performance measurement plans; align metrics with goals; and conduct operational analysis reviews. HHS concurred with two, partially concurred with two, and did not concur with two of GAO's recommendations, which GAO continues to believe are valid.
Most Medicare beneficiaries participate in Medicare Part B, which helps pay for certain DME and other equipment and supplies. This includes, for example, wheelchairs, walkers, oxygen, and hospital beds. In 2007, Medicare spent a total of $430.3 billion. Of that, $8.3 billion was spent on DME and other medical equipment and supplies covered under Part B. Since 1989, Medicare has paid for DME through fee schedules. These fee schedules are based on the average amount that suppliers charged on Medicare claims in 1986 and 1987 for individual DME items adjusted for inflation. Medicare uses a fee schedule for each state to reflect geographical price differences. The applicable state fee schedule is determined by the beneficiary’s residence, not the DME supplier’s location. Medicare generally pays the lesser of either the supplier’s actual charge or the Medicare fee schedule amount for the item or service. For suppliers, Medicare assignment—accepting Medicare’s reimbursement amount for an item as payment in full and limiting the amount the beneficiary can be billed for that item—is optional. If a supplier agrees to assignment, then Medicare generally pays 80 percent of the amount to the supplier and the Medicare beneficiary is responsible for paying the supplier the remaining 20 percent (referred to as the coinsurance payment), once the beneficiary’s annual deductible has been met. If the supplier does not accept assignment, the supplier is not limited to charging the beneficiary 20 percent of the Medicare reimbursement for that item or service and the beneficiary can be billed for whatever balance is due. The Balanced Budget Act of 1997 required CMS to test competitive bidding as a new way to set payment rates for Part B services and supplies selected by CMS. CMS conducted three CBP demonstration projects, two in Florida (1999–2002) and one in Texas (2000–2002). Evaluations of the demonstration projects estimated that they saved nearly $9.4 million. About a year after the demonstrations ended, the MMA was enacted, requiring CMS to implement a broader CBP in 2007. Changing the long- standing policy that any qualified provider be allowed to participate in Medicare, the MMA provided that generally only suppliers who were awarded contracts could be reimbursed by Medicare for providing covered Part B items and services in the selected areas. The MMA imposed certain criteria that CMS was required to follow—for example, eligible suppliers had to meet quality and financial standards, the total amount to be paid to contractors was expected to be less than would be paid otherwise, access of beneficiaries to multiple suppliers in their area must be maintained, and CMS must consider the ability of suppliers to meet the anticipated needs of beneficiaries in the covered geographic area. CMS was also required to ensure that small suppliers would be considered. The MMA required the establishment of the PAOC to advise CMS on various aspects of the CBP, including financial and quality standards. The MMA also prohibited any administrative or judicial review of the designation of CBP’s CBAs, the selection of items and services, the establishment of payment amounts, the bidding structure and number of contract suppliers selected, the awarding of contracts, and the phase-in of the CBP. CMS published a final rule, effective on June 11, 2007, governing implementation of the CBP. CMS contracted with Palmetto GBA to implement the CBP bidding and contract award process and with Maricom to develop the Web-based CBSS. 
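As a minimal sketch of the fee schedule and assignment rules described above, using hypothetical dollar amounts, the split between Medicare's payment and the beneficiary's coinsurance for an assigned claim can be computed as follows.

```python
# Minimal sketch of the payment rules described above; dollar amounts are hypothetical.

def assigned_claim_split(supplier_charge: float, fee_schedule_amount: float) -> dict:
    """For an assigned DME claim (after the annual deductible is met), Medicare
    generally allows the lesser of the supplier's actual charge or the fee
    schedule amount, pays 80 percent of that allowed amount, and leaves the
    remaining 20 percent coinsurance to the beneficiary."""
    allowed = min(supplier_charge, fee_schedule_amount)
    return {
        "allowed": allowed,
        "medicare_pays": round(0.80 * allowed, 2),
        "beneficiary_owes": round(0.20 * allowed, 2),
    }

# Example: a supplier charges $120 for an item with a $100 fee schedule amount.
print(assigned_claim_split(120.00, 100.00))
# {'allowed': 100.0, 'medicare_pays': 80.0, 'beneficiary_owes': 20.0}
```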
CMS established the bidding process and approved policies and procedures developed by Palmetto GBA. CMS implemented the CBP in 10 CBAs which were among the largest statistical metropolitan areas. CMS chose the items and services to include in round 1 by focusing on the highest cost and highest volume items and services with the largest potential for savings. It selected 10 product categories, with 371 unique items and services; spending for those product categories in the 10 CBP CBAs accounted for about 9 percent of total Medicare spending on those product categories in 2006. Within the 10 CBAs, the product categories chosen accounted for 48 percent of Medicare's spending for DME, prosthetics, orthotics, and related supplies that year. On July 15, 2008, MIPPA was enacted, which terminated the CBP contracts awarded during round 1. MIPPA reinstated Medicare reimbursement based on the Medicare fee schedule for all items and services included in CBP round 1, subject to a 9.5 percent reduction nationally for 2009. MIPPA also required that CMS implement the CBP round 1 rebid in 2009, and imposed additional criteria for this rebid and later rounds. CMS issued an interim final rule implementing these MIPPA provisions. The competitive bidding process had several steps: bidder registration, bid submission, bid review, winner selection, setting Medicare payment amounts, and contract offers (see fig. 1). To participate in CBP round 1, DME suppliers must have met enrollment, quality, and financial standards, obtained all the state and local licenses required to provide the relevant services, and been accredited by a CMS-approved accrediting organization. In addition, bids for each item had to be bona fide—that is, not higher than the Medicare fee schedule but not lower than the supplier’s cost. Bidder registration. The first step in the CBP bidding process was bidder registration. Suppliers had to register with a CMS identity management and authentication system to gain access to the CBSS. Suppliers registered in the CBSS as one of three types of bidding entities: a supplier with a single location, multiple suppliers sharing common ownership or control, or 2-20 small suppliers forming a network. Bid submission. For purposes of the CBP, a bid was an offer by a supplier to furnish all items within a product category throughout the entire CBA. The bid had to include a proposed price for each item in the product category. The number of items in a product category ranged from 3 to 142. A bid package consisted of two electronic forms, A and B, and documents specified in the request-for-bid instructions and in other communication with suppliers. Hard copies of the documents had to be submitted to Palmetto GBA. Form A requested information about suppliers, including Medicare billing numbers, addresses, ownership, current or prior sanctions, and accreditation status. Each Form B required suppliers to disclose annual revenues for the product category in each CBA; estimates of the number of item units currently provided and that could be provided in the future for that product category in that CBA; expansion plans; and item prices, models, and manufacturers. Each Form B constituted one bid—that is, suppliers had to submit a separate form for each product category in each CBA. Suppliers could submit a Form B for any product category up for bid in any CBA. 
Hard-copy documents required to complete the bid package included financial documents, proof of accreditation status, letters of intent to enter into agreements with subcontractors, network agreements, and statements certifying the accuracy of the submissions. Financial documentation requirements included 3 years of annual financial statements, selected forms from the last three annual tax returns, and credit reports and credit scores for a 90- day period ending close to the date of the bid’s submission. (See table 1.) Bid review. After the bid window closed, Palmetto GBA began to review the bids. It determined whether each bid package was complete, compliant with bidding requirements, and whether the submitting supplier’s financial score satisfied a minimum threshold to qualify to compete on price. The financial score was determined using criteria developed by CMS for this purpose including suppliers’ credit scores and 10 financial measures—described by CMS as standard accounting measures. (See table 2.) If the bid package was complete, compliant with bidding requirements, and the submitting supplier had a financial score that was equal to or greater than the minimum threshold, the bid qualified to compete on price. But before comparing prices, Palmetto GBA also reviewed each qualified bid’s capacity projections—the supplier’s ability to provide the volume of items claimed in the bid in light of the supplier’s historical capacity, expansion plans, and financial score. It adjusted some bids’ capacity projections according to certain guidelines. Winner Selection. Palmetto GBA used several steps to identify the winning bids based on price. Item prices submitted by competing suppliers were compared using a composite pricing methodology. A bid’s composite price was calculated as each item’s price multiplied by an item weight summed across all items in the product category. Table 3 illustrates the calculation for three hypothetical bids’ composite prices in a product category containing three items. Each weight is based on the item’s share of units billed to Medicare in 2006 as a percentage of all of the units for the product category billed to Medicare nationwide that same year. For each auction—a competition by qualified suppliers to deliver all items within a single product category in a single CBA—Palmetto GBA ordered the bids by composite price from lowest to highest. Starting with the bid with the lowest composite price, Palmetto GBA calculated the cumulative projected capacity of the competing bids. Palmetto GBA identified the bid where cumulative projected capacity met or exceeded CMS’s estimated beneficiary demand as the pivotal bid (see table 4). In table 4, the pivotal bid was submitted by Supplier 9 with a composite price of $7.64, since cumulative supply (1,765 units) reached CMS’s estimated demand (1,500 units) at that bid. If projected beneficiary demand could not be met by qualified suppliers, a pivotal bid could not be established and the auction was considered nonviable. Otherwise, bids with composite prices equal to or less than the pivotal bid were winners on the basis of price. Setting Medicare single payment amounts. Bids that won on price were used to establish Medicare’s single payment amounts for each item in the auction. For each item, Palmetto GBA ordered these winning bids’ price offers for each item from lowest to highest. The median price offered for that item would be Medicare’s payment for that auction item in that CBA. 
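The winner selection arithmetic described above can be sketched as follows. The bids, item weights, capacities, and demand estimate are hypothetical and are not the actual round 1 data; the sketch simply walks through composite prices, the pivotal bid, and median-based single payment amounts.

```python
# Illustrative sketch of the selection arithmetic described above.
# Bids, item weights, capacities, and the demand estimate are hypothetical.

from statistics import median

ITEM_WEIGHTS = {"item_1": 0.60, "item_2": 0.30, "item_3": 0.10}  # shares of units billed
ESTIMATED_DEMAND = 1_500                                         # projected beneficiary demand (units)

# Each qualified bid: offered item prices and the supplier's projected capacity in units.
bids = [
    {"supplier": "A", "prices": {"item_1": 10.00, "item_2": 5.00, "item_3": 20.00}, "capacity": 400},
    {"supplier": "B", "prices": {"item_1": 9.00,  "item_2": 6.00, "item_3": 25.00}, "capacity": 700},
    {"supplier": "C", "prices": {"item_1": 12.00, "item_2": 4.00, "item_3": 18.00}, "capacity": 600},
    {"supplier": "D", "prices": {"item_1": 14.00, "item_2": 7.00, "item_3": 30.00}, "capacity": 900},
]

def composite_price(prices: dict) -> float:
    """Weighted sum of item prices, weighting each item by its share of units billed."""
    return sum(prices[item] * weight for item, weight in ITEM_WEIGHTS.items())

# Order bids by composite price and accumulate capacity until projected demand is met.
# The bid at which demand is met is the pivotal bid; it and all cheaper bids win on price.
ranked = sorted(bids, key=lambda b: composite_price(b["prices"]))
winners, cumulative = [], 0
for bid in ranked:
    winners.append(bid)
    cumulative += bid["capacity"]
    if cumulative >= ESTIMATED_DEMAND:
        break
else:
    raise RuntimeError("auction nonviable: qualified capacity cannot meet projected demand")

# The single payment amount for each item is the median of the winning bids' prices for it.
single_payment = {item: median(b["prices"][item] for b in winners) for item in ITEM_WEIGHTS}

print("winners on price:", [b["supplier"] for b in winners])  # ['A', 'B', 'C']
print("single payment amounts:", single_payment)              # {'item_1': 10.0, 'item_2': 5.0, 'item_3': 20.0}
```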
The use of the median in setting the item's single payment amount meant that Medicare's payment amount could be less than or more than a particular winning supplier's actual bid for an item. Because CBP payments could be made only on assignment, Medicare would pay the supplier 80 percent of the single payment amount for an item and the beneficiary would be responsible for the remaining 20 percent. Contract offers. Small suppliers' bids could also be selected even if they did not win on price, if an insufficient number of small suppliers won on price alone. Before the initial set of contract offers, Palmetto GBA determined whether CMS's target—that 30 percent of the qualified suppliers be small suppliers—had been met by small suppliers winning on price. In the auctions where the goal had not been met, Palmetto GBA moved up the composite pricing order, above the pivotal bid, for small suppliers only as a means to include additional small suppliers. These additional small suppliers would then be offered contracts, in addition to those suppliers whose bids won on price alone. In March 2008, CMS and Palmetto GBA notified suppliers of the auction results and CMS extended contract offers to winning suppliers. In May 2008, CMS announced the suppliers that had accepted contracts for the 3-year CBP contract period from July 1, 2008, through June 30, 2011. However, MIPPA, which was enacted on July 15, 2008, terminated the CBP round 1 contracts. About one-quarter of the bids submitted during CBP round 1 resulted in awarded contracts. Of the 6,374 bids submitted by 1,010 suppliers, half were disqualified before competing on price—most often for missing financial documentation or noncompliance with accreditation requirements. Nearly two-thirds of the 85 auctions saw the number of suppliers decrease by 50 percent or more compared to the number of suppliers billing Medicare for the product category in 2006. CMS estimated that the volume-weighted reduction in Medicare's payment amounts for round 1 would have averaged 26 percent. Once the contract award process was completed, 22 percent of the bids submitted (1,372 of 6,374) resulted in contracts between CMS and suppliers to provide DME and other items to Medicare beneficiaries. (See table 5 for step-by-step results.) CMS initially extended contract offers for 1,335 bids. Contracts were offered to additional suppliers when some winners rejected the contract offers associated with 86 bids, as well as after CMS reversed Palmetto GBA's determinations to disqualify 27 bids. Winning suppliers may have rejected contracts because the CBP single payment amounts were less than the item prices the suppliers had bid. By the end of initial bid review, almost half of the bids submitted were disqualified (3,143 of 6,374 submitted). A bid could be disqualified for more than one reason. (See table 6.) Nearly 9 of every 10 disqualified bids (86 percent of the 3,143) lacked complete financial documentation. Twenty-two percent of the bids were disqualified for noncompliance with accreditation requirements; that is, the bidding suppliers failed to receive accreditation by the deadline established by CMS. Two percent of the bids were disqualified because the bidding suppliers did not meet supplier financial standards; that is, in CMS's judgment, they were unlikely for financial reasons to be able to fulfill their contract obligations. Disqualified bids were ineligible to compete on price and were not considered for a contract award.
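Returning to the contract offer step described above, the following Python sketch illustrates one reading of the small supplier adjustment: if small suppliers winning on price account for less than 30 percent of an auction's qualified suppliers, additional small suppliers are considered in composite-price order above the pivotal bid. The function name, data fields, and the precise interpretation of the 30 percent target are assumptions made for illustration; they are not CMS's or Palmetto GBA's actual rules or code.

# Illustrative sketch of the small supplier adjustment; all names and fields are hypothetical.
TARGET_SHARE = 0.30  # target described above: 30 percent of qualified suppliers should be small

def contract_offers(qualified_bids, price_winners):
    # qualified_bids: all qualified bids in one auction, sorted by composite price (lowest first).
    # price_winners: the bids at or below the pivotal bid, i.e., the lowest-priced prefix of
    # qualified_bids. Each bid is a dict with at least "supplier" and "small" (True/False) keys;
    # in a single auction each supplier submits one bid, so bids and suppliers correspond one to one.
    offers = list(price_winners)
    small_needed = TARGET_SHARE * len(qualified_bids)
    small_count = sum(1 for b in offers if b["small"])

    # Move up the composite-price order above the pivotal bid, considering small suppliers only,
    # until the target is met or no qualified bids remain.
    for bid in qualified_bids[len(price_winners):]:
        if small_count >= small_needed:
            break
        if bid["small"]:
            offers.append(bid)
            small_count += 1
    return offers

For example, in an auction with 10 qualified suppliers the target would be three small suppliers; if only one small supplier won on price, the sketch would add up to two more small suppliers, taken in order of ascending composite price from above the pivotal bid.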
In the preamble to the CBP final rule, CMS acknowledged that the number of suppliers would decrease as the result of competitive bidding. In 2006, the median number of suppliers per CBA for a product category was 31. For the 2 weeks the CBP contracts were effective, the median number of suppliers fell to 14, or 55 percent less than the number in 2006. Nearly two-thirds of the auctions conducted during CBP round 1 had decreases in the number of suppliers of 50 percent or more. (See app. II for auction-specific detail.) The mail-order diabetic supplies category had the largest decrease in suppliers (88 percent), while the walkers and related accessories category had the smallest decrease. One of the 10 product categories, negative pressure wound therapy pumps, had an increase in the number of suppliers as the result of CBP. Compared to the other nine CBAs, the Miami CBA had the largest number of suppliers in eight of nine product categories in 2006 and had the greatest decreases in suppliers after CBP round 1. The median number of suppliers across the 10 product categories decreased 87 percent in the Miami CBA. In 76 of the 85 auctions, at least 30 percent of the suppliers that were awarded contracts were small. Small suppliers represented at least 57 percent of all suppliers registered on CBSS and 63 percent of the winning suppliers (see table 7). Because small suppliers submitted fewer bids on average, slightly less than half (48 percent) of all bids resulting in contracts were from small suppliers. CMS estimated that, compared to the 2008 Medicare fee schedule, the volume-weighted reduction in Medicare's payment amounts for items acquired under CBP round 1 would have averaged 26 percent. (See app. III for specifics by CBA and product category.) The items in the mail-order diabetic supply category had the largest reductions, with differences between the CBP single payment amounts and the Medicare fee schedule averaging 43 percent. CBP single payment amounts were reduced the least for items in the complex rehabilitative power mobility devices and negative pressure wound therapy pumps categories—on average, 15 and 16 percent lower than the 2008 Medicare fee schedule. CMS's implementation of CBP round 1 presented several challenges to suppliers. Some bid submission information was poorly timed and unclear, confusing suppliers about bidding requirements and compelling some to revise and resubmit their bids. In addition, the CBSS experienced several problems that made submitting bids difficult. CMS did not notify all suppliers of its postbidding review process, which reinstated some bids that CMS found to have been incorrectly disqualified. While the PAOC alerted CMS to potential challenges for round 1, some were not resolved before the bid window opened. CMS clarified CBP bidding information after the bid window opened and extended the bid window deadlines three times—actions that made it more difficult for suppliers to submit correct bids. (See fig. 2.) While the CBP request-for-bid instructions, posted the day that the bid window opened, were revised only once, CMS and Palmetto GBA provided additional information explaining the instructions throughout the bid window. Although suppliers could revise their submissions throughout the bid window, when additional information was provided, those that believed they had submitted complete bids had to review them to ensure they were still correct.
For example, if a supplier revised any of its financial documentation, it had to resubmit the entire financial documentation package and certification statement in hard copy. CMS's bid window extensions resulted in a 4-month bid window, open May 15, 2007, through September 25, 2007—about 2½ months longer than originally planned. The first 1-week extension was announced on June 29, 2007—about a week after the open bidder conference call to respond to suppliers' questions. Palmetto GBA and CMS then conducted a special 30-minute bidder conference call on July 9, 2007, to address suppliers' concerns about CBSS data losses from an automated logout security feature that caused suppliers to lose unsaved information. CMS announced the second 1-week extension from July 20, 2007, to July 27, 2007. On July 27, 2007, CMS announced a third, 2-month deadline extension to September 25, 2007; explained that there would be a targeted period to address suppliers' remaining questions; and requested that suppliers e-mail their questions to Palmetto GBA by August 10, 2007. CMS allowed suppliers to submit CBP bids while their DME accreditation was pending, and when the final bid window extension was made, the accreditation deadline was also extended. Although CMS had encouraged suppliers to begin the accreditation process before the bid window opened, some suppliers were submitting bids while completing their accreditation process. A CMS official told us that some suppliers did not appreciate or understand the amount of information needed before the accrediting organizations could conduct an accreditation site visit. Whether suppliers had the required DME state licenses was to be determined as part of the accreditation process. However, CMS acknowledged that it checked supplier licenses after contract offers were made, and Palmetto GBA officials acknowledged that some suppliers were awarded CBP contracts even though they did not have the necessary state licenses at the time contracts were awarded. CMS and Palmetto GBA acknowledged that suppliers did not always understand the request-for-bid instructions. CMS provided guidance to suppliers through the CBP final rule and the request-for-bid instructions, and CMS and Palmetto GBA provided additional information throughout the bid window through multiple sources. These sources included the Palmetto GBA Web site and its frequently asked questions section, bidder conference calls, CMS and Palmetto GBA listservs, and the Palmetto GBA customer service center. We found that these sources sometimes had unclear or inconsistent information about the bidding instructions, including the specialty supplier definition, how to estimate supplier capacity, and how to complete bid application Forms A and B. (See app. IV for examples.) Some suppliers told us that Palmetto GBA service center employees could not answer their questions, and one supplier told us it was uncomfortable using the center because it was unsure the information provided was correct. CMS also acknowledged that many suppliers had particular difficulty complying with the financial documentation requirement. A supplier told us, for example, that it was a wholly owned subsidiary of a parent company and did not understand which financial documentation requirements in the request-for-bid instructions applied to it.
A CMS official told us that some suppliers did not understand that they had to provide all of the required financial documents, and that the statement of cash flow—described as a statement of changes in financial position—was the document most often missing. We also found that CMS’s financial documentation instructions did not clearly address differences among supplier business types—for example, a sole proprietorship business versus a publicly traded national corporation—and among the financial documents needed to submit a bid for each supplier type. Because business types did not easily link to the request-for-bid instructions, suppliers were at risk of submitting incomplete or inaccurate financial documentation. We found that CMS’s request-for-bid instructions had inconsistent information about the requirements for a credit report and credit score. The Form A bid instructions for financial information discussed different types of suppliers and their financial documents in six paragraphs. In two paragraphs—for suppliers that submit individual tax returns that include business taxes and for suppliers that submit corporate tax returns—the instructions stated that those supplier types had to submit a current credit report but stated nothing about a credit score. In the remaining four paragraphs—for limited partnerships, publicly traded suppliers, new suppliers, and networks—nothing was stated about either a credit report or a credit score. The bid submission Form A stated that a credit rating and score—rather than using the term credit report—had to be submitted. Near the end of the bid window on September 13, 2007, Palmetto GBA issued a “required document reminder” that stated that all bidders, regardless of their business structure, had to submit both a credit report and a credit score. The feedback that CMS provided to suppliers that had bids disqualified because of bid submission deficiencies was vague. CMS provided suppliers that had bids disqualified with seven general reason codes to explain the grounds for the disqualifications. (See table 8.) The suppliers with disqualified bids received letters dated March 20, 2008, from Palmetto GBA with attachments that indicated which reason code or codes applied for each CBA and each product category for which the supplier submitted a bid. The reason codes provided as feedback may not help a supplier understand how to resolve its bid issues for future CBP rounds. For example, if a supplier’s bid did not provide all required financial documentation, it was disqualified under the BSE-4 reason code. (See table 9.) The BSE-4 reason code does not inform the supplier which financial document or documents were not submitted. Likewise, if the supplier did not meet the financial standards, the bid was disqualified under the FS-1 reason, and the supplier would not know the standard or standards it had not met. In addition, CMS did not always provide a supplier with all reasons why a bid was disqualified. Palmetto GBA officials told us that suppliers were informed of an accreditation disqualification reason (BSE-3) if it was the bid’s only disqualifying reason. If a supplier was disqualified both for a reason code other than BSE-3 and for not being accredited, the supplier would not have been informed about the accreditation reason. CMS conducted a postbidding review process through which the agency reversed Palmetto GBA’s decision to disqualify the bids of certain suppliers. 
Specifically, Palmetto GBA and CMS reviewed a total of 1,935 bids from 357 suppliers from March 21, 2008, through July 9, 2008. They only reviewed the disqualified bids of suppliers who contacted them with questions or requested a review. As a result of this review, CMS determined that 10 suppliers had 58 bids incorrectly disqualified; the agency subsequently offered CBP contracts to 7 of these suppliers for 27 bids. CMS did not effectively communicate to suppliers that they had an opportunity to have disqualified round 1 bids reviewed. CMS officials informed us that the agency made a decision on or about March 5, 2008, as part of a quality assurance process, to permit Palmetto GBA to review disqualified bids after suppliers received their March 20, 2008, letters notifying them of their disqualifications. After the letters were sent to suppliers on March 20, 2008, CMS officials told us that suppliers learned about the bid review opportunity if they contacted Palmetto GBA with questions about their bids, participated in an April 2008 CMS Open Door Forum about the CBP program, or attended the June 16, 2008 PAOC meeting. CMS and Palmetto GBA, however, did not provide any written notification explaining this review process to suppliers prior to or after they were informed of their bid disqualifications, and some suppliers were not aware of this opportunity for review. For example, two suppliers informed us that they were unaware that a postbidding review was an option. Another supplier informed us that the company’s bids were disqualified and when he called Palmetto GBA to follow up, he was informed that there would be a review and response in 30 days, but he had not received a response as of March 25, 2009. An additional supplier informed us that in response to his inquiries, CMS stated that there was no formal appeal process. Moreover, the postbidding review was inconsistent with CMS’s earlier interpretation of its authority to conduct such reviews. Before soliciting bids for round 1, the agency determined that it would not have the authority to review the results of bid evaluations. The MMA prohibited administrative and judicial review of certain round 1 determinations, including the awarding of contracts, the bidding structure, and number of contractors selected. Neither the MMA nor its legislative history defined the phrase “administrative review.” In the preamble to the CBP final rule, however, CMS interpreted this provision as prohibiting review of the results of bid evaluations. CMS did not explicitly address such a review or any reversals of bid disqualifications elsewhere in its regulations or other policy guidance. In the preamble, CMS also recounted that commenters requested that it establish a grievance and review process for suppliers. Among other things, commenters also expressed concern about the potential for errors in disqualifying suppliers and requested that CMS provide an opportunity for review to confirm the accuracy of these disqualifications. In response to these comments, CMS indicated that it did not have the authority to review the outcome of bid evaluations. Specifically, it cited the prohibition on administrative or judicial review, explaining that Congress enacted this prohibition to avoid any delay or disruption in the implementation of the program as a result of challenges brought by bidders. 
In response to our inquiries during this evaluation, CMS officials informed us that the postbidding review process was not an administrative review prohibited by statute, but rather a quality assurance measure. In our view, CMS's characterization of the postbidding review process as a quality assurance measure does not fully address the inconsistency with the agency's earlier position that it did not have the authority to conduct such a review. In the preamble to the CBP final rule, CMS advised that it would notify losing bidders but would not provide debriefings due to logistics, volume of bidders, and time constraints. As an alternative, CMS explained that the agency would conduct an extensive education and outreach program for suppliers and was developing a quality assurance program. But the postbidding review process was distinct from the specific quality assurance steps that CMS described it would take in the preamble to the CBP final rule. In addition to its own quality assurance system, CMS indicated that Palmetto GBA would implement a quality assurance program, but did not elaborate on the form this program would take. However, the agency's response to commenters rejected any suggestion of a postbidding review, citing prohibitions under federal law. CMS officials have since informed us that the language in the CBP final rule was ambiguous and therefore did not preclude the agency from conducting the postbidding review as a quality assurance measure. Even if that were the case, CMS did not provide any clarifying guidance to suppliers that explicitly informed disqualified suppliers of the opportunity for a postbidding review. Instead, CMS made its March 5, 2008, decision to conduct these reviews about 2 weeks before suppliers were mailed notice of their bid disqualifications. The notification simply stated that suppliers could call customer service with questions, and CMS and Palmetto GBA conducted these reviews only for suppliers who contacted them or requested a review. After the CBP round 1 bid window closed, CMS acknowledged that the CBSS had information technology (IT) operational problems that affected suppliers' ability to submit their bids. CMS also acknowledged that loss of bid submission data was a major problem for suppliers. During the early part of the bid window, a CBSS security feature automatically logged a supplier out of the system after 2 hours, which caused some suppliers to lose data. Another security feature timed suppliers out of CBSS if there was no activity for 30 minutes. To address suppliers' concerns with the CBSS's bid submission data losses, CMS and Palmetto GBA conducted a special bidder conference call on July 9, 2007. Some suppliers stated that the CBSS was difficult to use, which impeded their ability to submit a bid. CMS officials acknowledged that the CBSS user guide was not very detailed or user friendly. Some error messages also used technical language that suppliers did not understand. In addition, CBSS required data to be manually reentered for the same product category in multiple CBAs because the CBSS did not have a "cut and paste" function. The data reentry was time-consuming and increased the risk of suppliers' inputting incorrect data that could disqualify a bid. CMS officials stated that there were cases when the CBSS was unavailable to suppliers to submit their bids. CMS explained that CBSS had unscheduled downtimes that inconvenienced the suppliers, particularly those working in CBSS at the time.
According to CMS, privacy and security rules required that each user ID and password allow only one user to access the CBSS at a time. However, the system did not have the controls to prevent multiple users from attempting to do so. When this scenario did occur, the system became inaccessible for all user IDs and passwords. A supplier told us that it had to wait until nonworkday hours to access the CBSS to submit its bids. On the last day of the bid window, CBSS was unavailable for several hours. Although a CMS official said that the original PAOC was generally helpful to CMS in developing and implementing CBP round 1 and that it provided CMS with assistance in the overall design of the program, two members of the original PAOC and three DME trade association representatives told us that CMS did not always use the PAOC effectively. Though the PAOC provided input to CMS to address potential supplier challenges during the development and implementation of CBP round 1, some issues raised were not fully resolved, such as concerns about missing or lost financial documentation, the absence of a formal CMS bid review process, the concern that small suppliers would be disadvantaged, and that the supplier quality standards were not finalized before the CBP round 1 bid window opened. One PAOC member stated that although the PAOC’s role was to advise and to oversee the CBP, members were not provided enough information and opportunities to provide feedback to fulfill these responsibilities. One PAOC member also reported having insufficient time to discuss and react to the CMS and Palmetto GBA presentations and expressed dissatisfaction at not being able to formulate or vote on recommendations. A CMS official stated that the PAOC had cochairs—one CMS official and one industry representative—to encourage mutual collaboration. However, the two PAOC members said this approach was not effective because the CMS cochair had a greater role on the committee than the industry cochair. CMS has taken several steps to improve future rounds of the CBP. It issued an interim final rule in 2009 to implement certain provisions of MIPPA that affect the round 1 rebid. It has taken several additional actions to make the round 1 rebid bidding process easier for suppliers to navigate and the bidding information easier to understand. CMS’s new bid submission system, DBidS, may address the IT operational deficiencies that occurred during round 1. Finally, though MIPPA extended the termination date of the PAOC, CMS disbanded the original PAOC and appointed new members to the current PAOC to provide new expertise and input for the round 1 rebid. CMS’s interim final rule, effective April 18, 2009, implemented certain MIPPA provisions, including changes that CMS is required to make for the CBP round 1 rebid and future rounds. Notification of missing financial documentation. CMS will notify and provide feedback about any missing financial documentation to bidding suppliers that submit their required financial documentation within a time period known as the covered document review date. Once notified, suppliers will have 10 business days to submit the missing documentation. Subcontractor information. Suppliers that enter into CBP contracts with CMS must disclose (1) each subcontracting arrangement the supplier enters into to provide items and services covered under its CBP contract and (2) whether the subcontractor meets accreditation requirements, if applicable. 
The supplier must provide this information to CMS within 10 days of entering into a CBP contract and within 10 days of entering any subcontracting arrangement subsequent to the award of the contract. In addition to the changes specifically required under the interim final rule, MIPPA also included other changes to the CBP. Accreditation deadline. Suppliers, including subcontractors, providing items or services on or after October 1, 2009, must have submitted evidence of accreditation prior to this date. CBP ombudsman. A competitive acquisition ombudsman, within CMS, must be appointed by the HHS Secretary to respond to CBP questions and complaints made by suppliers and individuals. The ombudsman must submit an annual report detailing CBP-related activities to Congress. PAOC extension. The termination date for the PAOC is extended from December 31, 2009, to December 31, 2011. CMS has made several additional changes for the CBP round 1 rebid in response to problems that occurred during CBP round 1. First, to reduce the burden on bidding suppliers of providing financial documentation, CMS, as stated in the preamble to the interim final rule, will require suppliers to submit 1 year of documentation instead of 3 years, which CMS now believes is adequate to determine a supplier's financial soundness. The request-for-bid instructions now provide a chart that lists the required financial documents by supplier type. For example, the chart distinguishes the financial documentation required for a sole proprietorship versus a corporation. In addition to the chart, the rebid's request-for-bid instructions also include a sample of a completed income statement, balance sheet, statement of cash flow, and corporate tax return. Second, CMS announced the timeline for the round 1 rebid bid window in advance, and to improve the quality and availability of information to bidding suppliers, CMS launched an intensive bidder education campaign to provide suppliers with all the information necessary to submit a complete bid during the round 1 rebid bid window. According to CMS, the request-for-bid instructions have been made clearer and more understandable. A Palmetto GBA official said that, if necessary, the request-for-bid instructions will be updated until the bid window closes, although CMS will notify suppliers if the bidding instructions are revised or clarified during the bid window. Furthermore, to ensure that suppliers can easily locate the most current CBP information, CMS will date every page, article, and frequently asked question so that suppliers know when new information has been posted. As in CBP round 1, suppliers may enter into subcontracting arrangements with other suppliers to provide items and services covered under their CBP contract to eligible Medicare beneficiaries. However, CMS clarified that subcontractors may be used only to purchase inventory, deliver and instruct on the use of Medicare-covered items, and repair rental equipment. Contract suppliers are responsible for furnishing items and services in compliance with physicians' orders and Medicare rules and guidelines. These responsibilities include coordinating care with physicians, submitting claims on behalf of beneficiaries, assuming ownership of and responsibility for equipment furnished to beneficiaries, and ensuring product safety. In addition, the original PAOC was concerned that suppliers new to a product category were given the same consideration as experienced suppliers during CBP round 1.
For this reason, a CMS official announced at the June 4, 2009, PAOC meeting that the agency is now considering whether to apply a different standard to evaluate the capacity of suppliers new to a DME product category. CMS later explained the new proposal to us. For the CBP, all suppliers, both new and experienced, estimate the number of items they can provide to meet the projected demand of beneficiaries for a product category in a CBA. Currently, a supplier must meet a minimum threshold based on CMS's determination of its financial strength in order for CMS to continue to evaluate its bid. If a supplier meets that threshold, it is then evaluated against a second threshold to determine whether CMS will accept the supplier's estimate of its ability to expand its current capacity. CMS is proposing that the second threshold be higher for suppliers new to a product category than for experienced suppliers. According to a CMS official, new suppliers that did not meet the second, higher threshold could still be offered a contract, although the proposal would generally result in awarding more contracts to suppliers with experience. Suppliers participating in the round 1 rebid must have all local and state licenses for a product category in a CBA at the time of bid submission in order to be considered for a CBP contract. According to CMS, this is not a change from CBP round 1. However, there were issues during the first round that complicated licensure verification. CMS and Palmetto GBA acknowledged, and some trade association representatives told us, that some suppliers were offered CBP contracts during CBP round 1 for product categories for which they were not properly licensed. Therefore, for the round 1 rebid, CMS has further clarified the licensure requirement, stating that suppliers must be licensed for the product category in the CBA in which they are bidding and that, if a CBA covers more than one state, the supplier needs to obtain applicable licensure in all states. To ensure that the licensure requirement is met, CMS is improving quality assurance checks to confirm that suppliers are properly licensed prior to accepting suppliers' bids in the CBP round 1 rebid. On January 2, 2009, CMS published a final rule, effective March 3, 2009, to implement a statutory requirement that certain DME suppliers post a $50,000 surety bond. In responding to comments on the rule, CMS stated that the surety bond is designed to reduce the amount of money that is lost due to fraudulent or abusive billing schemes by suppliers. Existing Medicare suppliers had until October 2, 2009, to comply, and as of May 4, 2009, new suppliers were required to post the bond as a condition of their enrollment in Medicare. Suppliers that participate in the rebid will have to comply with the surety bond requirement. According to CMS and Palmetto GBA personnel, the agency developed a new IT system to replace the CBSS and correct the operational problems that were identified. This system, DBidS, was developed in accordance with the agency's defined system development process and was designed to address the operational deficiencies identified with CBSS. DBidS software testing, including user testing, was completed in August 2009, and CMS management has accepted and approved the system for operation. CMS system development is guided by its Integrated IT Investment and System Life Cycle Framework, which prescribes the steps, activities, and documents required to develop CMS IT systems.
For example, the framework describes processes to be followed in developing, validating, and agreeing on requirements for system features and capabilities. It also describes required testing, including user acceptance testing, which validates that business requirements are met, as well as performance and stress testing, in which large volumes of input data or simulated concurrent users are introduced to determine the levels beyond which the system will fail. Finally, it describes the operational review that the agency must perform to determine whether to accept and approve the system for operation. According to experts in the software development field, having a defined process increases the likelihood of a successful system development, although it does not guarantee it. In accordance with the framework, CMS officials assessed CBSS business requirements and reviewed these with the contractors to establish a new set of baseline requirements for DBidS. The agency used these requirements to develop a design for the system, which was reviewed by CMS in 2008. Based on this design, the system was developed and testing began. On May 29, 2009, CMS began advising all DME suppliers to update their National Supplier Clearinghouse files to ensure that they contained correct and current information. CMS stated that this was especially important for suppliers planning to bid in the round 1 rebid because it would enable them to avoid the registration issues that occurred during CBP round 1, when some of the information in the suppliers' National Supplier Clearinghouse files did not match the information that was submitted into the Individuals Authorized Access to CMS Computer Systems. In May 2009, a CMS official stated that DBidS was designed to address specific deficiencies identified in CBSS; it is designed to be more user friendly and easier for suppliers to navigate, and it is to provide a logical flow of the data that are requested, as well as detailed bidding instructions in user-friendly language. It is to have status indicators showing whether the bidding forms are "complete," "incomplete," or "pending approval," and links in the system to direct suppliers to the incomplete data. In addition, CMS said that DBidS will have a "copy and paste" function for the transfer of certain data and many data-saving points to minimize loss of data. DBidS is also expected to allow a supplier to have more than one employee access DBidS at the same time but, to control data input, the system will not allow more than one employee to input the same data at the same time. In addition to the DBidS changes to address specific deficiencies identified in CBSS, CMS also recognized that more thorough testing of CBSS might have prevented certain system deficiencies. As of August 2009, CMS had completed testing DBidS, including testing of two changes that corrected a critical defect and addressed the policy requirement that all suppliers be accredited. As of September 2009, CMS has accepted DBidS for operation, and agency officials indicated that previous deficiencies have been satisfactorily addressed. However, until DBidS is put into operation, its effectiveness in correcting these deficiencies is unknown. On October 2, 2008, CMS formally announced that because of the length of the MIPPA extension, and because the PAOC was to perform additional duties, the agency had ended the terms of service for the original PAOC members and was soliciting nominations for new individuals to serve on the PAOC.
On January 15, 2009, CMS announced the 17 new members of the current PAOC, who were chosen because of their expertise in a broad range of issues, including quality standards, accreditation, and Medicare beneficiary issues. Although CMS stated that this PAOC was to review the bidding process for the round 1 rebid and consider all of the MIPPA changes, CMS did not schedule the first meeting until June 4, 2009, 4½ months after CMS had issued its interim final rule for public comment to implement the MIPPA provisions on January 16, 2009. Like the original PAOC, the current PAOC is cochaired by a CMS official and a DME industry representative. Similar to the meetings of the original PAOC, the June 4, 2009, PAOC meeting included several presentations by CMS officials with limited time allowed at the end of each for PAOC member discussion. The presentations included information concerning DBidS; CBP requirements and bidder responsibilities; suppliers' financial documentation, licensure, accreditation, and subcontracting requirements; new supplier issues; mail-order diabetic supplies; and the tentative timeline for the CBP round 1 rebid implementation. Although a CMS official told PAOC members that they were encouraged to continue to provide individual feedback, advice, and suggestions during the meeting and additionally by e-mail for CMS's consideration, as with the original PAOC meetings, CMS did not ask the PAOC to provide recommendations that would reflect input from the committee as a whole. Although CMS had not conducted PAOC meetings by teleconference previously, the agency held a 3-hour teleconference on July 21, 2009, to solicit the current PAOC members' feedback and suggestions on (1) determining beneficiary demand, (2) assessing bidding suppliers' ability to meet the demand, and (3) reviewing regulations for change of ownership and the sale of contracts. A four-page meeting summary was posted on CMS's Web site in August 2009, but a transcript has not been posted. We cannot determine at this time the degree to which the PAOC members' input will be reflected in CMS's implementation of the round 1 rebid. If wholly adopted, competitive bidding could reduce Medicare payments for DME, help close the disparity with prices paid by others for the same items and services, and also help reduce improper payments. It also represents a change from Medicare's long-standing policy that any qualified provider can participate in Medicare, because it authorizes CMS to select the suppliers that participate, based in part on CMS's scrutiny of their financial documents and other bid submission materials. CBP round 1 was the first time that both CMS and DME suppliers participated in a large-scale DME competitive bidding process. Some challenges may be expected for a new program, but problems occurred, in part because of poor communication by CMS and an inadequate electronic bid submission system. CMS was aware of these problems as the bidding unfolded and extended the original bid window as it attempted to correct them. The agency worked to address these problems before the round 1 rebid began. DBidS, the new electronic bid submission system developed by CMS and Maricom, could be an improvement if it successfully addresses the deficiencies identified in the system used for round 1, as CMS claims it will.
CMS’s implementation of the MIPPA requirement that the agency provide feedback on the status of suppliers’ financial documentation may help reduce the number of bids disqualified for inadequate financial documentation. And the agency’s implementation of the statutory requirements that all suppliers, including subcontractors, provide evidence of accreditation by October 1, 2009, and that suppliers generally must post surety bonds may help ensure that only legitimate suppliers are enrolled in Medicare and therefore are eligible to bid. To address the concerns that suppliers have the experience to provide the DME items they win contracts for, before they can submit bids, suppliers will also have to be accredited and licensed for each DME product category and CBA in which they bid. In addition, CMS’s early announcement of the timeline for the rebid and the revised request-for-bid instructions gave suppliers more time to decide whether to participate in the rebid and to begin preparing their bids before the window opens. Despite CMS’s actions to improve the program, difficulties may still arise in the round 1 rebid and future rounds. Because CMS did not effectively notify suppliers of the postbidding review conducted in round 1, some suppliers missed the opportunity to have their disqualified bids reviewed. Unless CMS commits to effectively notifying all bidders of any review of disqualified bids, if it decides to allow such a process in future rounds of the CBP, CMS will not be able to ensure that all bidding suppliers have an equal opportunity to request a postbid review. To improve future rounds of the competitive bidding program for DME, we recommend that the Administrator of CMS take the following action: If CMS decides to conduct a review of disqualification decisions during the round 1 rebid and future rounds, CMS should notify all suppliers of any such process, give suppliers equal opportunity for such reviews, and clearly indicate how they can request a review. In written comments on a draft of this report, HHS agreed with our recommendation that it effectively notify all suppliers of all aspects of the CBP. This would include any process to review bid disqualifications. CMS said it believes that suppliers should have the opportunity to raise questions or concerns about the competitive bidding process, including disqualification decisions. We found that CMS did not effectively notify suppliers about its postbid review of disqualified bids which resulted in some bid disqualifications being overturned in round 1 of the CBP. HHS also commented that we had not identified concerns with the overall structure and design of the CBP. However, such an analysis was beyond the scope of this report. HHS noted that it had a different perspective on some aspects of our report. The agency commented that the number of suppliers with CBP contracts did not account for the number of locations where DME items and services might be available in the CBAs. Our work focused on the number of suppliers participating in the CBP process, the number that were disqualified, and the number that were awarded contracts. We used the same contract supplier definition as CMS, which did not include the number of locations. We did not analyze whether there were enough locations to provide adequate Medicare beneficiary access during the CBP’s 2-week operation. 
HHS suggested that our statement that about half of the submitted bids were disqualified before competing on price creates the impression that additional suppliers would have won if they submitted bids that complied with the terms and conditions of the request-for-bid instructions. However, we believe our characterization is accurate because bids were first reviewed for completeness, compliance with bidding requirements, and financial score. The agency also argued that it relied heavily on the PAOC for the design and implementation of the CBP. But as we stated in the report, two original PAOC members and three trade association representatives told us that CMS did not always use the PAOC effectively. Our review of PAOC meeting transcripts also found members who were dissatisfied with how the PAOC was used. Finally, we revised the report according to HHS’s comment that a reduction in the number of suppliers was an expected result of the CBP, but not a goal of the program. As we noted in the report draft, the CBP was structured to allow only suppliers with winning bids that accepted contracts to provide DME items and services, in contrast to Medicare’s long-standing policy that any qualified provider can participate in Medicare. HHS provided additional technical comments which we incorporated as appropriate. HHS’s written comments are reprinted in appendix V. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. We are sending copies of this report to the Secretary of Health and Human Services. The report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To assess the Centers for Medicare & Medicaid Services’s (CMS) implementation of round 1 of the competitive bidding program (CBP), we reviewed federal laws and regulations. We also interviewed officials from CMS and Palmetto GBA—the contractor CMS selected to implement the CBP bidding and contract award process—about the results of the bid submission and review processes, CMS’s major challenges in implementing CBP round 1, and the actions taken to improve future CBP rounds. To determine the results of the CBP round 1, we reviewed data from CMS and Palmetto GBA about the number and characteristics of suppliers participating in the CBP process, number and characteristics of bids submitted, and the bids’ outcomes. We reviewed the Competitive Bid Submission System (CBSS) User Guide, and instructions for entering data. We interviewed and obtained information from officials from CMS and Palmetto GBA about the CBSS, including system testing and data processing. We asked Palmetto GBA officials about data transfers from the CBSS to the Competitive Bidding Evaluation System (CBES), an application designed by Palmetto GBA to automate specific portions of the bid evaluation process that contained bid data, financial data entered by Palmetto GBA personnel, and documentation of Palmetto GBA actions. We asked them about CBES data checks, quality control, data entry procedures, and security. 
We interviewed CMS officials about the criteria and procedures for disqualifying bids, identifying winning bids, and calculating single payment amounts. We reviewed information CMS provided to the Program Advisory and Oversight Committee (PAOC) about this process and its results. We compared data published by CMS with the data provided to us and followed up with the appropriate officials to resolve discrepancies. We assessed the reliability of round 1 data by reviewing information from or interviewing CMS and Palmetto GBA officials and determined that the data were sufficiently reliable for the purposes of this report. We did not evaluate the reliability of CMS estimates of beneficiary demand for durable medical equipment (DME) which relied on 2005 and 2006 DME claims data, the most recent data available to them at the time, nor did we evaluate CMS’s estimates of projected savings as the result of round 1. To determine the major challenges CMS had in conducting CBP round 1, we interviewed CMS and Palmetto GBA officials and reviewed information provided to suppliers, including CBP bid submission instructions and related materials, bidder conference call transcripts, and CMS’s and Palmetto GBA’s CBP Web sites. We reviewed these materials for inconsistencies. We also reviewed an internal document provided by Palmetto GBA about its implementation of the CBP round 1. We interviewed two PAOC members concerning whether CMS used the PAOC effectively and to gain insight about the committee’s role in advising CMS about the implementation of the CBP and establishing standards for suppliers that bid in round 1. We reviewed transcripts and meeting summaries of the seven PAOC meetings to assess the concerns and feedback that the members provided about potential supplier issues and challenges. We also interviewed CMS and Palmetto GBA officials and reviewed documentation about CBSS’s operational problems. We interviewed 12 suppliers about their experiences with CBP. We interviewed 4 suppliers that were not offered a contract, 4 suppliers that accepted a CBP contract, and 4 suppliers that rejected their CBP contract offer. The suppliers were randomly selected from CMS’s list of suppliers that bid in CBP round 1. Because we interviewed a small number of suppliers, our findings from these interviews are not generalizable to all suppliers. In addition, we interviewed representatives from national and state industry trade associations representing DME suppliers—the American Association for Homecare, the National Association of Independent Medical Equipment Suppliers, the Florida Association of Medical Equipment Services, and the Ohio Association of Medical Equipment Services. We also reviewed testimony from three congressional hearings including two 2008 hearings about the CBP implementation and a 2009 congressional hearing on the CBP’s impact on small business, in which a CMS official discussed the results of CBP round 1, and six representatives of various DME associations and interest groups discussed the effect that the CBP had on their businesses and professions. To analyze the postbidding review authorized by CMS and conducted by Palmetto GBA, we interviewed CMS and Palmetto GBA officials about the development and implementation of the review process and reviewed its results. We also reviewed relevant federal laws and regulations and interviewed CMS officials and attorneys representing the Department of Health and Human Services (HHS), CMS division. 
To determine the steps that CMS has taken to improve the bidding process for future CBP rounds, we reviewed relevant federal laws and regulations, PAOC Federal Register notices, and CMS press releases related to the PAOC. We interviewed CMS and Palmetto GBA officials about the actions they have taken and intend to take to improve the CBP bidding process during the CBP round 1 rebid. We also attended the June 4, 2009, PAOC meeting at which CMS provided updates of the process changes and modifications that it made for the round 1 rebid. In addition, we interviewed Maricom officials and reviewed available documentation related to the development, testing, and proposed implementation of the new electronic bid submission system—Durable Medical Equipment, Prosthetics, Orthotics, and Supplies bidding system (DBidS)—that will be used during the CBP round 1 rebid. We did not assess the reliability or functionality of DBidS, but we reviewed the processes established by CMS and its contractors for testing and accepting such systems. We conducted this performance audit from June 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Change in Numbers of Suppliers by CBP Product Category and CBA: 2006-2008. This table identifies the total number of suppliers that provided services in each competitive bidding area (CBA) for each product category in CY 2006 with allowed charges for items in the product category greater than $10,000 and the number of suppliers awarded CBP contracts in round 1 as of June 11, 2008. Enteral nutrients, equipment, and supplies are used to provide food through a tube placed in the nose, the stomach, or the small intestine. Appendix III: Percentage Differences between 2008 Medicare Fee Schedule and CBP Round 1 Single Payment Amounts. INS means that the estimated capacity of suppliers submitting qualified bids or accepting contracts was insufficient to meet projected demand. NA means not applicable. No auctions were conducted for these product category and CBA combinations. Except for the last column, the data reflect volume-weighted average savings within an auction. According to CMS, the savings rate was derived by multiplying the difference between the 2008 Medicare fee schedule for each item in a product category in a CBA and the item's CBP-derived single payment amount by the same weights used to calculate composite prices for the product category. CMS projected the overall savings for round 1 at approximately 26 percent annually to the Medicare program and Medicare beneficiaries. The averages in the last column are unweighted. GAO did not make a determination as to whether or not this methodology is an accurate measure of true savings to the Medicare program. Enteral nutrients, equipment, and supplies are used to provide food through a tube placed in the nose, the stomach, or the small intestine.
Group 2 mattresses and overlays of the support surfaces product category are pressure-reducing support surfaces for persons with large or multiple pressure ulcers. The examples below are taken from two competitive bidding program (CBP) documents that provided written information for suppliers about how to submit a bid and information on bidding requirements. The documents are from a Web-based seminar, or webinar, posted on the Palmetto GBA Web site on April 30, 2007, and the request-for-bid instructions posted on the same Web site on May 15, 2007. In addition to the contact named above, key contributors to this report were Martin T. Gahart, Assistant Director; Carrie Davidson; Neil Doherty; JoAnn Martinez; Christie Motley; Michelle Paluga; Hemi Tewarson; Keo Vongvanith; Timothy Walker; Opal Winebrenner; Suzanne Worth; and Charles Youman. Medicare: Covert Testing Exposes Weaknesses in the Durable Medical Equipment Supplier Screening Process. GAO-08-955. Washington, D.C.: July 3, 2008. Medicare: Competitive Bidding for Medical Equipment and Supplies Could Reduce Program Payments, but Adequate Oversight Is Critical. GAO-08-767T. Washington, D.C.: May 6, 2008. Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007. Medicare Payment: CMS Methodology Adequate to Estimate National Error Rate. GAO-06-300. Washington, D.C.: March 24, 2006. Medicare Durable Medical Equipment: Class III Devices Do Not Warrant a Distinct Annual Payment Update. GAO-06-62. Washington, D.C.: March 1, 2006. Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: September 22, 2005. Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004. Medicare: Past Experience Can Guide Future Competitive Bidding for Medical Equipment and Supplies. GAO-04-765. Washington, D.C.: September 7, 2004. Medicare: CMS Did Not Control Rising Power Wheelchair Spending. GAO-04-716T. Washington, D.C.: April 28, 2004. | In 2007, Medicare spent $8.3 billion for durable medical equipment (DME) and related supplies. To reduce spending, the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) required that the Centers for Medicare & Medicaid Services (CMS) phase in, with several rounds of bidding, a large-scale competitive bidding program (CBP) for certain DME and other items. DME suppliers began bidding in round 1 of the CBP on May 15, 2007. After contracts were awarded, the Medicare Improvements for Patients and Providers Act of 2008 (MIPPA), was enacted on July 15, 2008. Because of numerous concerns MIPPA delayed the program, terminated supplier contracts, and required CMS to begin the CBP round 1 rebid in 2009. GAO was asked to report on (1) the results of CBP round 1, (2) the major challenges CMS had in conducting CBP round 1, and (3) the steps CMS has taken to improve future CBP rounds. GAO reviewed CMS data and relevant laws and regulations, and interviewed officials from CMS and its contractors, and DME suppliers and professional associations. About a quarter of the bids submitted during CBP round 1 resulted in awarded contracts. The contracts were in effect until terminated by MIPPA on July 15, 2008. Of the 6,374 bids submitted by 1,010 suppliers, half were disqualified before competing on price. 
Bids were most often disqualified for missing financial documentation or noncompliance with accreditation requirements. In nearly two-thirds of CBP round 1's price competitions--in which suppliers submitted bids to deliver items for a specific product category within a specific competitive bidding area (CBA)--the number of suppliers decreased by at least half. The largest decreases in suppliers were in the Miami CBA. CMS estimated that the reduction in Medicare payments for items acquired as a result of CBP round 1 would have averaged 26 percent when compared to payments under the Medicare fee schedule. CBP's round 1 presented several challenges to suppliers, including poor timing and lack of clarity in bid submission information, a failure to inform all suppliers that losing bids could be reviewed, and an inadequate electronic bid submission system. CMS provided some clarifying information about bidding after the bid window opened, repeatedly extended the bid window deadlines, and provided updated guidance to bidders throughout the bid window. The information CMS provided to suppliers about bidding requirements was sometimes unclear and inconsistent, particularly regarding financial documentation. CMS did not effectively notify suppliers of its postbidding review process. Because some suppliers were not aware of the review process, they missed the opportunity to have their disqualified bids reviewed. CMS found that some bids had been incorrectly disqualified. Finally, several problems with the electronic bid submission system, including data losses from automated logouts and unscheduled downtimes, made it difficult for some suppliers to submit bids. CMS has taken several steps to improve the bidding process for the round 1 rebid and subsequent rounds of the CBP. CMS is implementing MIPPA provisions to notify suppliers of missing financial documentation and create a CBP ombudsman. It has reduced financial documentation requirements and revised the request for bid instructions to make it clearer and more understandable. It is also developing a new electronic bidding submission system, the Durable Medical Equipment, Prosthetics, Orthotics, and Supplies bidding system (DBidS), which the agency claims will address the deficiencies of the system used for round 1. Bidding for the round 1 rebid began in late October 2009. The CBP has the potential to produce considerable benefits, including reducing overall Medicare spending for DME and limiting potential fraud through increased scrutiny of suppliers. Although challenges may be expected for any new program, problems occurred in round 1 because of poor communication by CMS and an inadequate bid submission system. |
DOD offers medical services to 8.3 million eligible people through the MHSS—1.7 million active duty members and another 6.6 million non-active duty members, such as dependents of active duty personnel and military retirees and their dependents. The bulk of the health care is provided at more than 600 military hospitals and clinics worldwide; through CHAMPUS; and, to a comparatively minor extent, at USTFs. The USTF managed care program involves the formation of provider networks to deliver a full spectrum of inpatient and outpatient care and preventive services; beneficiary enrollment; and a monthly capitated reimbursement system. DOD’s capitation payment rates cover all the medical care a member would need in a year. Subject to annual appropriations, USTFs are permitted to enroll any person eligible for MHSS benefits except for active duty members, who receive their care at military hospitals and clinics. But unlike those under CHAMPUS, USTF members do not lose their participation rights when they reach age 65 and become eligible for Medicare. At the beginning of fiscal year 1996, the USTFs had 124,012 members, including about 27,000 Medicare-eligibles, and an appropriated funding level of $339 million (see table 1). By September 1997, DOD plans to complete its implementation of TRICARE—a nationwide managed care program. TRICARE is aimed at improving access to high-quality care while containing costs. TRICARE involves coordinating and managing beneficiary care on a regional basis using all available military hospitals and clinics supplemented by competitively contracted civilian services. TRICARE offers beneficiaries three plans: (1) TRICARE Standard, a fee-for-service arrangement to replace the present CHAMPUS program; (2) TRICARE Extra, a preferred provider plan; and (3) TRICARE Prime, an HMO that provides comprehensive medical care to beneficiaries through an integrated network of military and contracted civilian providers. (App. I compares the cost-sharing provisions of the three TRICARE plans.) As required by P.L. 104-106, DOD is to develop a plan to integrate the USTFs into TRICARE. We will soon report on several issues regarding the USTFs’ integration into TRICARE, including whether the USTFs should retain their special, noncompetitive relationship with DOD. The managed care support contractors under TRICARE compete on a cost-effectiveness basis rather than through a noncompetitive negotiation of rates as is done with the USTFs. Our analysis of the potential effects on the USTFs of adopting the TRICARE cost shares showed that less than 10 percent of the members will disenroll, causing less than a 2-percent increase in operating costs. But DOD’s reimbursement approach takes into account and otherwise adjusts the USTFs’ capitation payments for higher costs that may result from changes in the population’s age and gender. It also allows for negotiated adjustments in reimbursement rates for the effects of benefit and cost-sharing revisions, which may result in adverse selection. In contrast, the USTFs estimated that the new cost shares will cause about a 40-percent USTF disenrollment rate and cost increases of about 11 percent. However, the USTFs’ estimates are overstated because of weaknesses in their survey and health claims data, and the absence of out-of-pocket cost differences among the key plans that are significant enough to cause disenrollment of more than 10 percent. 
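The monthly capitated reimbursement approach described above can be illustrated with a short sketch. The Python below is a hypothetical illustration only: the rate table, age bands, function names, and enrollee list are invented for this example and do not reflect actual DOD capitation rates (DOD's actual capitation bands by age and gender appear in app. V).

# Hypothetical sketch of a monthly capitated payment: DOD pays each USTF a fixed
# monthly rate per enrolled member, with rates varying by age and gender band, so
# total payments adjust automatically as the enrolled population's mix changes.
# The rates below are invented for illustration.

MONTHLY_RATES = {  # (gender, age band) -> hypothetical monthly rate in dollars
    ("M", "45-54"): 210, ("M", "55-64"): 285,
    ("F", "45-54"): 230, ("F", "55-64"): 270,
}

def age_band(age):
    # Only the two illustrative bands are modeled here.
    return "45-54" if age < 55 else "55-64"

def monthly_capitation_payment(members):
    """members: list of (gender, age) tuples for all enrolled members."""
    return sum(MONTHLY_RATES[(gender, age_band(age))] for gender, age in members)

enrollees = [("M", 54), ("F", 62), ("M", 55)]
print(monthly_capitation_payment(enrollees))  # 210 + 270 + 285 = 765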
The USTFs’ estimates of the effects of the new cost shares were based largely on the results of a telephone survey of USTF households and an analysis of health claims data. In February 1996, the USTFs conducted a survey of 2,100 member households (300 from each USTF) to determine whether, with the new cost shares, members would disenroll and choose TRICARE Standard. Retirees under age 65 and their households were surveyed because only this group—not the Medicare-eligible or active duty dependent members—will be subject to the new enrollment fee. Also, USTF health claims data for the surveyed households covering the 12 months ending September 30, 1995, were analyzed to determine the costs for members who said they would remain and those who would disenroll. Our review of the USTFs’ survey approach and data analysis raised several concerns about the reliability of their estimates. First, the survey questions focused solely on the households’ out-of-pocket cost increases and did not probe respondents’ views about the quality of or access to care at the USTF versus other options available to them. (See app. III for the questionnaires used in the USTF survey.) Since households may base their health plan decisions on factors other than out-of-pocket costs, such as access and quality, questions on these other factors would have added needed perspective to the survey responses. Second, the wording of several questions could have misled respondents and produced incorrect responses. For example, one question asked: “If you have to choose between CHAMPUS [or TRICARE Standard, TRICARE Extra, or TRICARE Prime at the Pacific Medical USTF] and the USTF with higher copays and enrollment fees for your household in the future, which would you select?” The question’s phrasing could have led respondents to believe that with the new cost shares the USTFs will have higher copayments than the other choices. This is not the case. Furthermore, survey choices were categorized as “would stay,” “would leave and choose TRICARE Standard,” “neither, or would choose different plan,” and “don’t know.” To reduce the number of “don’t know” responses, interviewers were instructed to probe respondents and try to force them to make a decision. One probe was “We’re not asking you to make a firm commitment right now, but we are interested in knowing which one you would be most likely to choose on the basis of the information I just read to you.” Because interviewers tried to force respondents to change “don’t know” answers, the responses in these cases may not reliably predict the respondent’s answer. Finally, the average length of time individuals took to respond to the survey was about 4 minutes. This short period probably did not allow most individuals to weigh and respond thoughtfully about the medical plan they would choose. When analyzing cost differences among respondents, the USTF actuaries combined the “don’t know” responses with the group who responded they would disenroll. This caused an overstatement of the number of respondents the USTFs estimated will leave. Also, in analyzing potential cost differences, the USTF actuaries did not verify the claims data the USTFs reported. In addition, four of the USTFs provided incomplete data for the surveyed households. They provided less than 12 months of claims data and/or omitted such services as outpatient prescription drugs and care provided under subcontract with non-USTF providers. 
For Bayley Seton and Johns Hopkins, 172 and 106, respectively, of the 300 surveyed households for each facility were dropped because no claims data were available for these households. Furthermore, the USTF report stated that the USTFs had to perform some adjustments to produce theoretical billed charges. In effect, a percentage of the claims costs the USTFs provided is incomplete, or estimated; thus, such data cannot be validated and are of questionable use for estimating the potential cost effects of adverse selection. According to actuarial research, any time a health plan increases a member’s out-of-pocket costs relative to competing plan choices, some adverse selection can occur. But for the USTFs to experience the 40-percent disenrollment rate they estimated, the cost differences would have to be significantly higher than what would exist between the USTFs’ new cost shares and TRICARE Standard. As table 2 shows, the USTF households that face the greatest out-of-pocket increase—$460—relative to TRICARE Standard are those incurring no medical expenditures. Most USTF households, or those incurring some medical expenditures, will have even lower relative cost differences. According to actuarial experience, such relative cost difference levels will not result in major enrollment shifts. Moreover, a Congressional Research Service study of the 1987 FEHBP open season found that out-of-pocket cost differences among plans had to be at least $1,000—$2,000 in 1996 dollars—to result in more than a 10-percent plan disenrollment rate. But as table 2 shows, no USTF household will reach an out-of-pocket cost difference that high when compared with TRICARE Standard. The disenrollment rate that will likely result from the USTFs’ adopting the new cost shares will be less than 10 percent. But to be actuarially conservative, we allowed for a 20-percent outcome and reestimated the USTFs’ disenrollment and cost increases. Table 3 shows the comparative results of these adjustments. As shown, the 20-percent disenrollment estimate reduces the USTFs’ estimated 11-percent cost increase to 4 percent. Reductions in the USTFs’ estimated cost increases are greater for the USTFs that may experience the most adverse selection, such as Pacific Medical. (App. IV provides a breakdown on the effects for each USTF.) Also, although active duty and Medicare-eligible family members are not subject to the new enrollment fee, the USTFs estimated that some of these family members will also disenroll. The USTFs estimated up to 5-percent cost increases for each group as a result of adverse selection. We found, however, that there would be negligible or no adverse selection of such members and thus no cost increase would occur with the new cost shares. Family members of active duty personnel would incur the same out-of-pocket costs at the USTFs as elsewhere in the TRICARE system and thus would not have a relative cost difference incentive to disenroll.Medicare-eligible family members would incur the same costs but have better benefits and better access to care at the USTFs than in TRICARE. We believe, moreover, that individuals from these two categories would replace those retirees under age 65 and their dependents who disenroll because of adverse selection. In estimating an 11-percent cost increase resulting from the new cost shares, the USTFs assumed that each affected member would have the same claims costs in the year after adverse selection occurred as they had in the year before. 
Also, they concluded that members with the most costly claims would be most likely to stay with the USTF, and new enrollees would have the same claims costs as those respondents who said they would stay. According to actuarial research, however, individuals that incur a large claim in one year will not necessarily do so the following year. This is because large claims may be for one-time high-cost events. A recent study of year-to-year health care expenditures for a large manufacturing firm showed that most large claims incurred in a given year are from individuals incurring much lower claims the prior year. Conversely, most of the future large claims will come from individuals with low claims in the current year. According to the USTFs’ estimates of adverse selection, members with the least costly claims will be most likely to disenroll. Also, in any given year, a small number of enrollees will have large claims. If enrollees could predict such claims—and some can—when faced with choosing between competing plans, they would join the plan most cost-beneficial to them (the USTFs, in this case). According to actuarial research, however, many such claims cannot be predicted, so many of the USTF members with high claims in the year after adverse selection would have had no reason to have selected the USTF plan before adverse selection occurred. To illustrate the sensitivity of the USTF analysis to the inclusion of all high-cost claims, we recomputed the USTFs’ cost estimates by removing the two largest claimants from each facility. The largest claimants’ costs ranged from about $49,000 to $337,000. The comparative results are shown in table 4. As shown, removing such high claims costs from the USTFs’ estimating base reduces their 11-percent cost increase estimate to 4 percent.Coincidentally, this is the same effect produced by reducing their estimated disenrollment rate from 40 percent to our conservatively applied 20 percent rate. Because less than 1 percent of the surveyed households had high claims that accounted for almost 20 percent of the total claims costs, including or removing such claimants from the estimating base significantly affects the estimating outcome. Moreover, on the basis of the actuarial assumption that individuals who have large cost claims in one year are likely to have lower claims the following year, the USTFs’ 11-percent cost increase estimate appears to be unnecessarily high. Finally, actuarial studies focusing on adverse selection and ways to predict the effects of beneficiary choice have concluded that future-year costs resulting from adverse selection cannot be accurately predicted by any set of known characteristics and circumstances from past years. According to actuarial research, the most reliable way to gauge the effects of adverse selection is to examine actual experience under the benefit change in question. Our adjustments to the USTFs’ estimated 11-percent cost increase covering the adverse selection for retirees and their dependents under 65 years old resulted in a reduced estimate of 4 percent. Using the 4-percent cost increase, we estimated that the weighted average cost effect of adverse selection for the USTFs in 1996 would be less than 2 percent of their 1996 reimbursement level, or about $5.5 million dollars. This estimated increase, however, will likely have no lasting negative financial impact on USTFs because DOD’s current reimbursement approach automatically adjusts USTF payments to account for changes in members’ age and gender. 
For example, our analysis of the survey data available for Johns Hopkins beneficiaries who had a claims history shows that the facility may gain financially from adverse selection. The Johns Hopkins data indicate that respondents who said they would stay there are, on average, 1.7 years older than all respondents. Because USTF capitation rates generally increase as the members age, some of the older remaining members would cause substantial payment increases as they move to higher capitation bands. For example, DOD pays the Johns Hopkins USTF $885 more per year for a 55-year-old male than a 54-year-old male. For females, the difference between 55- and 54-year-olds is $483 per year (see DOD’s capitation bands by age and gender category in app. V). Thus, if its remaining members’ average age increases by 1.7 years, we estimate that capitation payments would automatically rise by 4.9 percent (1.7 years x 2.9 percent per year). The higher revenue would exceed the USTFs’ 4-percent estimate of Johns Hopkins’ cost increase resulting from adverse selection. Capitation rates are established for each USTF, and the process for periodically adjusting them is set forth in their participation agreements with DOD. The TRICARE cost shares are appropriate for the risks to be borne by the USTFs. The cost shares would create some problems for a managed care plan unable to adjust its capitation. But any initial USTF loss would be covered through automatic capitation adjustments based on members’ age and gender, and later losses could be offset by future negotiated capitation adjustments. As a result, the TRICARE cost shares will not create a financial burden on the USTFs. Furthermore, the TRICARE cost sharing is similar to that of HMO plans in the FEHBP. However, the new $230 to $460 USTF enrollment fees are lower than the employee shares of the typical private sector HMO and significantly less than those in the FEHBP (see table 5). The USTFs believe that adoption of the new cost shares will result in their enrolling an older, perhaps less healthy beneficiary population than is enrolled under TRICARE. This, in their opinion, will increase USTF costs. The USTF and DOD beneficiary populations are already dissimilar. The USTFs serve proportionately more retirees and their dependents. At issue, therefore, is to what degree this dissimilarity is likely to change as a result of the USTFs’ new cost shares. In 1994, the USTF population consisted of a larger proportion of retirees and dependents under age 65 than the DOD populations in the USTF regions. This disparity grew during the 1996 USTF enrollment period, as shown in table 6. Thus, the USTF population, already dissimilar to the DOD population, is becoming more so. But with the new USTF cost shares, the USTF population will actually move closer to the general DOD population as the healthy retirees under age 65 seek less costly medical coverage. Further, those who disenroll will likely be replaced by new enrollees who are dependents of active duty personnel or Medicare-eligible retirees and their families over age 64. But no matter how dissimilar the populations are, DOD’s reimbursement approach will account for USTF population changes and offset any resulting negative financial effect. The establishment of uniform benefits and cost sharing for DOD beneficiaries is a key component of the TRICARE program and something that we and others have long advocated.
Such uniformity would, in our view, eliminate inequities and confusion that now exist among beneficiaries of military health plans. While adopting the TRICARE cost shares may cause some minor adverse selection for the USTFs, our analysis indicates that there will be no lasting negative financial effect on USTF operations. Further, the new cost shares, which are similar to HMOs, are appropriate for the risks to be borne by the USTFs and will likely make the USTF population more similar to DOD’s general beneficiary population. More importantly, should there be a financial impact, DOD’s current USTF capitation methodology takes into account and allows for adjusted reimbursement levels for such higher costs that result from changes in the enrollee cost shares and population characteristics. We received comments on a draft of the report from DOD’s Principal Deputy Assistant Secretary for Health Affairs and other DOD officials, and on the USTFs’ behalf from officials of the Seattle and Texas facilities. DOD officials stated that they agreed with the report’s analysis and findings. They pointed out, however, that the draft report’s language discussing DOD’s reimbursement approach should clearly set forth that the capitation rates make automatic age and gender adjustments and also allow for negotiated rate adjustments to cover the possible adverse selection effects of benefit/cost-sharing revisions. We clarified the report’s language on this matter and incorporated the officials’ other suggested technical report changes as appropriate. USTF officials also took issue with the report’s discussion of factors for which the capitation rates automatically adjust. They stated that there is no provision in their participation agreements with DOD that allows for negotiated capitation rate adjustments for possible adverse selection due to cost-sharing changes. The officials stated that, while the agreements allow for negotiated rate changes due to benefit revisions, the USTFs do not consider the new cost shares to be benefit changes—although they stated they have not consulted DOD on the matter. We believe that because the new cost shares represent a change in the health care package offered to USTF beneficiaries and materially affect the actuarial value or cost of the package, the new cost shares constitute a benefit change. Also, as pointed out, DOD considers the effects of such changes to be subject to negotiated capitation rate adjustments. USTF officials stated that, contrary to our assertion that the USTFs’ survey should have included questions on quality and access along with the questions on higher cost shares, such additional questions were not relevant. They stated that their annual member surveys repeatedly show high member satisfaction with the USTFs, tending to affirm their historic 2-percent disenrollment rate. Adding questions on quality and access would have, in our view, added perspective for more fully understanding why survey respondents gave the answers they did. Moreover, the high levels of member satisfaction referred to by the USTF officials tend to raise further questions as to whether members would disenroll at the USTFs’ estimated 40 percent rate. The USTF officials stated that our removing the two highest claimants per USTF from their database and reestimating the potential cost increase is incorrect. They stated that there will be high claims in each year—or new enrollees with high claims—so that removing the two highest claimants would not reflect the USTFs’ actual costs. 
Also, the officials stated that while it is true that a member with high claims in one year will not necessarily have such claims the next year because the member may have died, the costs should be included in the estimating base to have a true picture of the total costs. We disagree. The USTFs’ cost-effect estimates assume that all claims, including the high claims, for all members whether they said they would stay or leave will be the same in the year after the choice as before the choice. According to actuarial research, however, many of the high claims in one year will not be for the same individuals as in the prior year, which, for example, as the USTFs point out, would occur if the member died. To illustrate the major impact that a few respondents with high claims costs had on the USTFs’ estimated cost increases, we removed the two largest claimants in each USTF. We agree that the USTFs will have some high claims each year. But the level of cost increase the USTFs estimated as a result of adverse selection will depend on the same members (with the highest claims in the year before the choice) staying and having the same high claims in the year after the choice. The USTFs’ assumption is actuarially questionable and greatly overstates the adverse selection effect. Even if we had included the two highest claimants per USTF in our illustration, but distributed them randomly among those who stay and those who leave, the net adverse selection effect would have only been approximately 4 percent. USTF officials also said we were incorrect in basing the estimated cost increase due to adverse selection on their total reimbursement. They said it should be based only on reimbursement for the segment of the USTF members most affected by adverse selection—the retirees and their dependents under age 65. We disagree. One purpose of our evaluation was to determine if the new cost shares would be inappropriate for fully at-risk managed care facilities. To do so, it is necessary to consider the financial impact of adverse selection on the facilities’ total income—in this case, DOD’s total capitation payments for all USTF members. Also, since the active duty dependents and retirees and their dependents aged 65 and over will not pay any enrollment fee, the impact of adverse selection on these two groups would be negligible. Thus, in our view it is appropriate to compare the potential adverse selection cost increase for the retirees and dependents under age 65 ($5.5 million) to the total income of the USTF facilities ($323.5 million) in determining the 1.7-percent increase in financial risk to the facilities. Finally, the USTF officials stated that their members cannot be compared to FEHBP or private plan enrollees; that their members are used to and believe they are entitled to free care such that the enrollment fees would be strongly resisted; and that our use of a 20-percent disenrollment rate is not substantiated nor valid. We disagree. Fewer than 10 percent of the USTFs’ members would disenroll, but to be actuarially conservative, we used a 20-percent rate to estimate the cost shares’ effects. We based our approach on actuarial research and experience with a wide range of private and public health plans. As the report states, there is very little disenrollment as a result of relative increases in out-of-pocket differences of $460 or less per family. 
While plans do vary widely in structure and demographics, the relative effect of changes in out-of-pocket costs on choice is similar, and one set of plans can safely be used to predict the results in another set. Also, neither the USTF officials nor their report cited any evidence or studies that showed that disenrollment had been higher than 10 percent for similar out-of-pocket changes in any other plan. We will send copies of this report to the Secretaries of Defense, Health and Human Services, Transportation, and Commerce; and the USTFs. We will make copies available to others upon request. If you have any questions about this report, please call me on (202) 512-7111. Other major contributors are listed in appendix VI.

(The appendix tables detail each plan's cost-sharing provisions for retirees, retiree family members, and active duty family members by rank: annual enrollment fees, such as $230 for single and $460 for family coverage; inpatient charges of $10.50 to $20 per day, subject to a $25 minimum; percentage cost shares for some services; and catastrophic limits. (S) = single; (F) = family.)

This appendix contains estimates of the cost impact on each USTF resulting from adverse selection (1) with and without the two largest claimants of each facility in the computation and (2) using disenrollment rates of 20 percent and 40 percent. The cost impact with and without the two largest claimants is derived by subtracting the cost for total respondents’ monthly claims from the monthly claims cost of the respondents who stay and dividing this increase by the total respondents’ monthly claims costs. For example, as shown in table IV.1, the cost for total respondents’ monthly claims (including all claimants) is $257. The monthly claims cost of the respondents who stay is $279. Subtracting $257 from $279 yields a cost increase of $22, which is about 8 percent of $257. The cost impact of using different disenrollment rates is derived by subtracting the total respondents’ monthly claims costs from those of the respondents who stay and dividing this increase by the total respondents’ costs. For example, as shown in table IV.2, the cost for the total respondents’ monthly claims is $257. For a 20-percent disenrollment rate, the monthly claims cost of the respondents who stay is $265. Subtracting $257 from $265 yields a cost increase of $8, which is about 3 percent of $257.

Major contributors to this report were Daniel M. Brier, Assistant Director, (202) 512-6803; Carolyn R. Kirby, Senior Evaluator, (202) 512-9843; and Jean N. Chase, Evaluator.
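The appendix IV cost-impact computation described above reduces to a simple ratio, which the following Python sketch illustrates. The figures are the $257, $279, and $265 monthly claims averages cited for tables IV.1 and IV.2; the function name and structure are ours, for illustration only.

def adverse_selection_cost_increase(avg_claims_all, avg_claims_stayers):
    """Percentage increase in average claims cost caused by adverse selection:
    (stayers' average monthly claims minus all respondents' average monthly claims)
    divided by all respondents' average monthly claims."""
    return (avg_claims_stayers - avg_claims_all) / avg_claims_all * 100

print(adverse_selection_cost_increase(257, 279))  # about 8.6 -- the "about 8 percent" example in table IV.1
print(adverse_selection_cost_increase(257, 265))  # about 3.1 -- the "about 3 percent" example in table IV.2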
Pursuant to a legislative requirement, GAO reviewed the potential effects of the Department of Defense's (DOD) new health care benefit and cost-sharing package, which is part of its TRICARE managed health care program, on the Uniformed Services Treatment Facilities (USTF). GAO found that: (1) the new cost-sharing arrangement might leave USTF at risk for higher costs by causing some healthy members to disenroll, an outcome known as adverse selection; (2) adverse selection probably will not have long-term negative financial effects on USTF, because less than 10 percent of current USTF members are expected to disenroll, and USTF costs would not increase by more than 2 percent; (3) DOD capitation payments will automatically adjust for higher USTF costs caused by changes in enrollment, and USTF may negotiate payment adjustments for the effects of the benefit and cost-sharing revisions; (4) USTF estimated that cost-sharing would cause about 40 percent of their members to disenroll and increase costs by about 11 percent; (5) the USTF estimates are unreliable because of data and methodological weaknesses; (6) the USTF estimates included data for large claims which are unpredictable and dramatically affect cost estimates; (7) the difference in members' out-of-pocket costs between USTF and TRICARE Standard is not expected to be great enough to cause more than an estimated 10-percent USTF disenrollment; and (8) USTF already serve proportionally more retirees and their dependents who are under age 65 than exist in the general DOD population, but cost-sharing may reduce the proportion of younger retirees and their dependents in the USTF population.
To encourage employers to establish and maintain retirement plans for their employees, the federal government provides preferential tax treatment under the Internal Revenue Code (IRC) for plans that meet certain requirements. In addition, the Employee Retirement Income Security Act of 1974 (ERISA), as amended, sets forth certain protections for participants in private-sector retirement plans and establishes standards of conduct for those that manage the plans and their assets, generally called fiduciaries. To the extent they qualify as fiduciaries under the law, plan sponsors assume certain responsibilities and potential liability under ERISA. For example, a fiduciary must act prudently and solely in the interest of plan participants and their beneficiaries, which may require documenting decisions relating to the plan, including hiring outside professionals or service providers that advise and help administer plans. Small employers may choose a plan for their employees from one of three categories: employer-sponsored IRA plans; defined contribution (DC) plans; and defined benefit (DB) plans (often referred to as traditional pension plans). Appendix II presents information provided by Labor and IRS about some of the various types of retirement savings plans available to small employers. Employer-sponsored IRA plans: Employer-sponsored IRA plans allow employers and, in some cases, employees to make contributions for deposit in separate IRA accounts for each participating employee. These plans generally have fewer administration and reporting requirements than other types of plans. Participating employees bear the full investment risk of their account assets. There are two types of employer-sponsored IRA plans. Savings Incentive Match Plans for Employees (SIMPLE) IRA plans require employers to either match their eligible employees’ voluntary salary reductions (typically up to 3 percent of compensation) or to contribute 2 percent of compensation for each eligible employee. The second type is the Simplified Employee Pension (SEP) IRA plan, which can be sponsored by an employer of any size, and has higher employer contribution limits than the SIMPLE IRA plan. In a SEP IRA plan, employer contributions are voluntary and employee salary reductions are not permitted. Defined contribution plans: DC plans allow employers, employees, or both to contribute to individual employee accounts that are grouped under a single plan. Employee salary reductions, if provided under the plan, may be pretax or after-tax, in some cases. As with employer-sponsored IRA plans, employees participating in DC plans bear the full risk of investment and will realize any returns (gains or losses) on those investments. DC plans tend to have higher limits for employee contributions but also more rules and reporting requirements than employer-sponsored IRA plans. For example, some DC plans may be required to conduct annual testing in order to ensure that the contributions or benefits provided under the plan do not discriminate against rank-and-file workers in favor of highly compensated employees. In addition to nondiscrimination testing, some DC plans may also be subject to top-heavy requirements and be required to conduct further testing to ensure that a minimum level of benefits is provided to rank-and-file workers in plans that are sponsored by owner-dominated firms, where the majority of benefits accrue to “key” employees, such as owners and top executives.
As we have previously reported, top-heavy requirements are intended to address a greater potential for tax-shelter abuses in such plans. Top-heavy requirements are most likely to affect smaller plans (fewer than 100 participants), according to the IRS. The most common type of DC plan is a 401(k) plan. In 401(k) plans, employees can defer a portion of their salary—pretax or after tax, if permitted by the plan—for deposit into a separate retirement account. Employers may also choose to make additional contributions (such as contributing a percentage of each eligible employee’s salary), match the amount contributed by the employee, or both. One type of 401(k) plan, the safe harbor 401(k) plan, is not subject to some of the requirements associated with traditional 401(k)s that generally require annual plan testing. However, under safe harbor 401(k) plans, employers are required to make certain contributions to each participant’s account. Another type of tax-qualified DC plan, the profit sharing plan, gives the employer the discretion to determine annually whether and how much to pay into the plan, within certain maximum limits. Employer contributions, if any, are allocated to each employee according to the terms of the plan. The assets held in DC plans and employer-sponsored IRA plans are not insured by the Pension Benefit Guaranty Corporation. Defined benefit plans: DB plans, which generally promise a specified benefit at retirement, are subject to a number of rules, including reporting requirements, nondiscrimination testing, and top-heavy requirements. Operating DB plans typically requires the expertise of an actuary. Over the years, Congress has responded to concerns about lack of access to workplace retirement plans for employees of small businesses with legislation to lower costs, simplify requirements, and ease administrative burden. For example, the Revenue Act of 1978 and the Small Business Job Protection Act of 1996 established the SEP IRA plan and the SIMPLE IRA plan, respectively, featuring fewer compliance requirements than other plan types. The Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA) also included a number of provisions that affected small businesses. For example, EGTRRA eliminated top-heavy testing requirements for safe harbor 401(k)s, increased contribution limits for employer-sponsored IRA plans and 401(k) plans, and created a tax credit for small employers to offset startup costs, including the cost of educating employees about a new plan. EGTRRA also created a tax credit for individuals within certain income limits who make eligible contributions to retirement plans. The Pension Protection Act of 2006, among other changes, made these EGTRRA provisions permanent and established additional provisions that support retirement plan participation by rank-and-file employees, such as automatic enrollment. To help encourage plan sponsorship, federal agencies conduct education and outreach activities and provide information about retirement plans for small employers. Labor, IRS, and the Small Business Administration (SBA)—which maintains an extensive network of field offices—have collaborated with each other and with national and local organizations to develop information on small employer retirement plans and conduct outreach with small employers. For example, Labor, IRS, SBA, and the U.S. Chamber of Commerce partnered to create the Choosing a Retirement Solution Campaign, which targets small employers and their employees. The campaign's educational materials, including web-based retirement plan guidance for small employers, highlight key aspects of and differences between various plans and features, including tax benefits for employers and employees.
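The SIMPLE IRA employer contribution options described earlier in this section—a dollar-for-dollar match of employee salary reductions of up to 3 percent of compensation, or a 2 percent contribution for each eligible employee—can be illustrated with a short sketch. The Python below is a simplified, hypothetical illustration; the function, the sample payroll figures, and the omission of annual dollar limits and eligibility rules are our assumptions, not a description of any actual plan.

# Simplified, hypothetical illustration of the SIMPLE IRA employer contribution
# options described above: a dollar-for-dollar match of employee deferrals capped
# at 3 percent of compensation, or a 2 percent contribution for every eligible
# employee. Annual dollar limits and eligibility rules are not modeled here.

def simple_ira_employer_cost(payroll, option="match"):
    """payroll: list of (compensation, employee_deferral) tuples."""
    if option == "match":
        return sum(min(deferral, 0.03 * comp) for comp, deferral in payroll)
    elif option == "nonelective":
        return sum(0.02 * comp for comp, _ in payroll)
    raise ValueError("option must be 'match' or 'nonelective'")

# Example payroll: three employees; only two defer part of their salary.
payroll = [(40000, 2000), (35000, 0), (50000, 1000)]
print(simple_ira_employer_cost(payroll, "match"))        # 1200 + 0 + 1000 = 2200
print(simple_ira_employer_cost(payroll, "nonelective"))  # 800 + 700 + 1000 = 2500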
Labor also worked with the Society for Human Resource Management and the American Institute of Certified Public Accountants (AICPA) on the Fiduciary Education Campaign to provide retirement plan fiduciaries with information about their fiduciary responsibilities under ERISA. In addition, various private-sector service providers, from individual accountants, investment advisers, recordkeepers, and actuaries to insurance companies and banks, assist sponsors with their retirement plans. Some sponsors hire a single provider that offers a range of plan services for one fee—sometimes referred to as a “bundled” services arrangement. Other sponsors hire different providers for individual services under an “unbundled” arrangement, paying a separate fee for each service. Plan services include legal, accounting, trustee/custodial, recordkeeping, investment management, and investment education or advice. Service providers can also assist with plan administration functions, including nondiscrimination testing, top-heavy testing, and filing of government reports. Some providers also include payroll services, which further centralize an employer’s administrative services through a single company. Labor provides some guidance for plan sponsors in selecting and monitoring plan service providers. Further, the American Society of Pension Professionals & Actuaries (ASPPA) publishes a list of certified firms that adhere to ASPPA’s standards and best practices concerning recordkeeping and administration services for retirement plans. GAO found that the number of employees and average wages greatly influence the likelihood that a small employer will sponsor a retirement plan. Further, the regression analysis using Labor and IRS data found that small employers with larger numbers of employees were the most likely of all small employers to sponsor a retirement plan, as were those paying average annual wages of $50,000 to $99,999. Conversely, employers with the fewest employees and the lowest average annual wages were very unlikely to sponsor a retirement plan. A separate GAO analysis using Labor and IRS data found an overall small employer sponsorship rate of 14 percent in 2009. However, the sponsorship rate does not include small employers that sponsor SEP IRA plans because IRS currently does not have a means to collect these data, which limits what is known about small employers that sponsor SEP plans. According to IRS, its Form 5498, “IRA Contribution Information,” includes some SEP information; however, the agency is unable to link this information to an employer’s employer identification number (EIN). As a result, IRS can identify participants in SEP plans but not sponsoring employers. While the IRS Tax Forms and Publication Committee proposed a change to the form to allow IRS to identify SEP IRA plan sponsors, officials said the proposal was not adopted. Further examination of sponsorship rates looking at small employer characteristics found that those with 26 to 100 employees had the highest sponsorship rate—31 percent—while small employers with 1 to 4 employees had the lowest rate—5 percent (see fig. 1). Additionally, even though small employers with 26 to 100 employees made up only 10 percent of the overall small employer population, they sponsored more retirement plans than employers with 1 to 4 employees.
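The sponsorship rates described above are the share of small employers within a group that sponsor a plan. The following Python sketch shows one way such rates might be computed by employer-size category; the sample records, column names, and the middle size grouping are invented and do not reproduce GAO's actual analysis of the Labor and IRS data.

# Hypothetical sketch of computing plan sponsorship rates by employer size,
# in the spirit of the analysis described above. The sample records and size
# bins are invented for illustration.
import pandas as pd

employers = pd.DataFrame({
    "num_employees": [2, 3, 8, 15, 40, 75, 4, 60],
    "sponsors_plan": [0, 0, 0, 1, 1, 1, 0, 0],
})

bins = [0, 4, 25, 100]             # 1-4, 5-25, and 26-100 employees
labels = ["1-4", "5-25", "26-100"]
employers["size_group"] = pd.cut(employers["num_employees"], bins=bins, labels=labels)

# Sponsorship rate = sponsoring employers / all employers within each size group.
rates = employers.groupby("size_group")["sponsors_plan"].mean() * 100
print(rates.round(1))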
Looking at the average annual wage characteristics, small employers with average annual wages of $50,000 to $99,999 had the highest rate of retirement plan sponsorship at 34 percent, while small employers with average wages of under $10,000 had the lowest sponsorship rate—3 percent (see fig. 2). Further, despite having a smaller overall population, small employers with average annual wages of $50,000 to $99,999 sponsor almost three times as many retirement plans as small employers paying average wages of under $10,000. As a point of comparison, the overall average annual wage for employees working for small employers was about $38,000. Analysis of the Labor and IRS data examining the interaction between both characteristics—number of employees and average annual wages—illustrates how sponsorship rates increase as numbers of employees and average annual wages increase. For example, the plan sponsorship rate for employers with 26 to 100 employees and average wages of $30,000 to $49,999 was more than nine times higher than employers with the same number of employees and wages below $10,000. Further, the sponsorship rate for small employers with 26 to 100 employees exceeded 75 percent when average wages were $50,000 or higher. In contrast, small employers with 1 to 4 employees reached their highest sponsorship rate of 13 percent when average annual wages were $50,000 or more; however, sponsorship rates were still about one-sixth the rate for small employers with 26 to 100 employees in the same wage category. Our analysis showed the sponsorship rate for employers with one to four employees lowered the overall sponsorship rate in the average annual wage categories. For example, the figure shows that small employers with average annual wages of $100,000 or more have an overall sponsorship rate of 26 percent, but this is much lower than the sponsorship rates for small employers with five or more employees. Figure 3 shows small employer sponsorship rates by size of employer and average annual wage paid. In examining the geographic distribution of sponsorship rates, small employers in the Midwest and Northeast were more likely to sponsor plans, while employers in the West and South were less likely. Further, in examining data on individual states, Connecticut, Wisconsin, and Washington, D.C., had the highest rates—with Washington, D.C., showing the top rate of 23 percent. Florida and Mississippi had the lowest sponsorship rates, at fewer than 10 percent. Figure 4 shows the percentage of small employers that sponsor plans by state. According to GAO analysis of Labor and IRS data, 401(k) and SIMPLE IRA plans were overwhelmingly the most common types of plans sponsored by small employers. Out of slightly more than 712,000 small employers that sponsored a single type of plan, about 86 percent sponsored either a 401(k) or a SIMPLE IRA plan. Additionally, non-401(k) DC plans, which include non-401(k) profit sharing plans, make up 11 percent of the plan type population; SARSEP IRAs are 3 percent, while DB plans make up only about 1 percent of the small employer sponsor population. Small employers and other stakeholders identified various plan options, administration requirements, fiduciary responsibilities, and top-heavy testing requirements as complex and burdensome, often citing these factors as barriers to sponsoring retirement plans or as reasons for terminating them.
Plan options and administration requirements: Small employers and other stakeholders said that plan options and administration requirements are frequently complex and burdensome and discourage some small employers from sponsoring a plan. For example, some small employers and retirement experts said that the number of plan types and features makes it difficult for small employers to compare and choose plans. Representatives of a plan service provider said that too many plan options overwhelmed small employers, making it more difficult for them to choose a plan and, ultimately, less likely that they will sponsor one. Some stakeholders also described the administrative burden of plan paperwork, such as reviewing complicated quarterly investment reports or complying with federal reporting requirements—like those associated with required annual statements—as particularly burdensome. For example, one small employer with a DB plan described a dense and highly technical quarterly investment report for his plan that ran 50 pages, making it difficult to glean summary financial information about the plan. Another small employer who previously sponsored a 401(k) with a company match said the amount of required plan paperwork, including generating annual reports, was a key reason he terminated it. Stakeholders also identified interim amendment requirements as burdensome for plan administration. Plan sponsors generally submit plan documentation to IRS periodically to ensure that plans are up to date and compliant with relevant federal statutes and regulations. However, when statutes and regulations change, some sponsors may be required to modify plan documentation and resubmit their plan documents to IRS. Some stakeholders, including small employers, a small business advocacy organization, and a plan service provider, said that complying with interim amendment requirements can be costly and time-consuming for small employers. IRS has recognized that interim amendment requirements pose a burden to plan sponsors. However, an IRS official noted that most small employer plans are likely based on plan designs that are preapproved by IRS, and interim amendment requirements are likely to entail little administrative burden for most small employer sponsors. Fiduciary responsibilities: A number of stakeholders indicated that understanding and carrying out a sponsor’s fiduciary responsibilities with respect to their qualified retirement plans presents significant challenges to some small employers. Plan sponsors may qualify as fiduciaries under ERISA, for example, if they have discretionary authority or control over the management of the plan or control the plan assets. Fiduciaries have a number of responsibilities, such as the duty to act prudently, in the sole interest of the participants and beneficiaries, and to diversify the investments of the plan. Some small employer sponsors found the selection of investment fund options for their plans particularly challenging. A small employer with a 401(k) plan described the difficulties of selecting appropriate investment options, with an appropriate balance of risk, for a workforce that includes younger and older workers. A number of small business advocates and retirement experts said that not all small employers have an adequate understanding of their fiduciary duties and are not always aware of all their responsibilities under the law.
For example, a retirement expert said that small employers that do not consult with plan professionals often lack the time and expertise to understand complicated fiduciary rules under ERISA. One service provider explained that some small employers mistakenly believe that all fiduciary responsibilities and liabilities are transferred to a service provider when they are hired. Another expert noted that some small employers have an exaggerated sense of the liabilities that being a fiduciary carries, and may avoid sponsoring a plan out of fear of being sued by their employees. Top-heavy requirements: Top-heavy requirements are most likely to affect smaller plans (fewer than 100 participants), according to IRS. A number of stakeholders said compliance with the requirements is often burdensome and poses a major barrier to plan sponsorship for small employers. Small employers with high employee turnover may face an even greater likelihood of becoming top-heavy. According to some experts, employee turnover alone can force some small employers out of compliance with top-heavy requirements as they replace departing employees. Over time, rank-and-file employees separate and take their plan assets with them, while long-term employees, such as business owners or executives, continue to contribute to the plan, eventually leading to a top-heavy imbalance of plan assets. For example, one small employer with a 401(k) plan stated that, because two of the four owners had worked for the company for about 25 years and their retirement accounts made up the majority of the total plan assets in the 401(k) plan, the plan had become top-heavy. To comply with the top-heavy requirements, sponsors of certain plans are required to test their plans annually. An employer’s failure to make certain adjustments to a plan deemed top-heavy can result in it losing its tax-qualified status and the associated tax advantages for the employer and employees. A number of stakeholders stated that top-heavy compliance is confusing and can pose significant burdens on some small employers. For example, some retirement experts said that small employers whose plans are found to be top-heavy may encounter a number of additional costs in the effort to make their plans compliant, such as hiring a plan professional to make corrections to the plan document and instituting a minimum top-heavy employer contribution for all participating rank-and-file employees. According to one expert, in some cases, the costs of mandatory contributions to employees’ accounts may prevent owners from making contributions to their own retirement accounts, and may make some small employers reluctant to sponsor a plan, or may drive those that sponsor a plan to terminate it. Sponsors can avoid top-heavy testing by adopting a safe harbor 401(k) plan with no additional contributions, which is not subject to top-heavy requirements. However, safe harbor 401(k) plans require the employer to make either specified matching contributions or a minimum 3 percent contribution to each participant’s account. According to representatives of the accounting profession, the additional cost to the employer of required contributions under a 401(k) safe harbor plan may offset the advantages of sponsoring such a plan.
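The top-heavy condition described above can be illustrated with a simplified check. Under the IRC, a defined contribution plan is generally considered top-heavy when key employees' account balances exceed 60 percent of total plan assets; the Python below is a hypothetical sketch—the account balances and the helper function are invented, and real top-heavy testing involves additional adjustments (for example, for certain distributions) that are not modeled here.

# Simplified, hypothetical illustration of a top-heavy check: a defined
# contribution plan is generally top-heavy when key employees' account balances
# exceed 60 percent of total plan assets (IRC section 416). Real testing
# involves additional adjustments not modeled here.

def is_top_heavy(balances, key_employees, threshold=0.60):
    """balances: dict of employee name -> account balance.
    key_employees: set of names treated as key employees (e.g., owners)."""
    total = sum(balances.values())
    key_total = sum(v for name, v in balances.items() if name in key_employees)
    return key_total / total > threshold

# Example echoing the scenario above: two long-tenured owners hold most plan assets.
balances = {"owner_a": 400000, "owner_b": 350000, "staff_1": 60000,
            "staff_2": 45000, "staff_3": 30000}
print(is_top_heavy(balances, {"owner_a", "owner_b"}))  # True (about 85 percent of assets)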
Labor and IRS, often in collaboration with SBA, have produced publications, conducted workshops, and developed online resources, among other efforts, to assist small employers in understanding options, requirements, and responsibilities of running a plan. For example, Labor and IRS jointly published a guide that compares various features of different plan types, including IRA, DC, and DB plans. Both agencies have also developed websites and online tools to help small employers navigate plan information and make informed decisions about plan options. For example, IRS’s Retirement Plans Navigator is a key component of its education efforts for small employers and is designed for employers that are less likely to hire a service provider. According to IRS, the navigator is intended to lead a novice through basic information on retirement plans and compliance. Similarly, Labor, in collaboration with the American Institute of Certified Public Accountants (AICPA), developed an interactive website highlighting small employer retirement options. The website introduces employers to a number of plan options from simpler IRA plans to more complex automatic enrollment 401(k) plans, and describes the advantages and features of various plan types. According to Labor, employers with as few as two employees can find options using the tool. However, a number of stakeholders suggested that many small employers are unaware of federal resources on retirement plans. For example, the Advisory Committee on Tax Exempt and Government Entities (ACT) recognized that, despite the numerous IRS retirement plan resources available, many small employers and other stakeholders in the small business community are unaware of these resources. ACT indicated these resources could go a long way in addressing the needs of the small employers were it not for their lack of visibility. The lack of visibility of federal guidance on small employer plan options may be due, in part, to difficulties in finding useful, relevant information across federal websites. For example, while Labor’s webpage on small employer retirement plan options contains links to relevant topics, such as compliance assistance, participants’ rights and fiduciary responsibilities, it is easy to navigate away from but difficult to return to the content developed for small employers because there is no consistent page navigation menu for small employer information. Furthermore, while the Labor website includes guidance on selecting and monitoring plan service providers, there is no link to the guidance on the small employer plan options page. IRS’s Retirement Plans Navigator is located on a separate website from the rest of the agency’s online plan resources for small employers. When navigating from the page on small employer retirement plan resources on IRS’s main portal to the agency’s Retirement Plans Navigator, a message alerts users that they are leaving the IRS website and entering another government website. IRS officials noted that small employers who participated in focus groups on IRS plan resources reported challenges to understanding plan-based information when navigating these resources. Furthermore, Labor and IRS present their online content separately, which makes it necessary for an employer to navigate both agencies’ websites to gather complete information about starting and maintaining a retirement plan. 
For example, to review information on fiduciary responsibilities, users must visit Labor’s website, and to review information on nondiscrimination and top-heavy testing, users must visit IRS’s site. Neither agency maintains a central web portal for all information relevant to small employer plan sponsorship, though such portals exist for federal information resources in other areas such as healthcare. Consolidating Internet-based services and information is also consistent with one of the purposes of the E-Government Act of 2002 to promote interagency collaboration in providing electronic government services. Small employers that lack sufficient financial resources, time, and personnel may be unwilling or unable to sponsor retirement plans. In particular, stakeholders stated that plan sponsorship may be impractical for smaller or newer firms that are unable to undertake the commitment to sponsor a plan. For example, one expert noted that the first priority of a small employer is remaining in business, and this focus may preclude sponsoring a retirement plan as a benefit to employees until the firm becomes more established. Financial resources: Small employers, especially those with lower profit margins or an unstable cash flow, could be less willing or less able to sponsor a retirement plan because of the one-time costs to start a plan and the ongoing costs involved with maintaining the plan. These costs can result from start-up activities, complying with reporting and testing requirements, and fees paid to an outside party for administration tasks. Stakeholders stated that these expenses can make sponsoring a plan unappealing. For example, one small employer stated that as a new business owner, she thinks it is better for her business to proceed cautiously and avoid adding to her fixed cost structure. Additionally, any requirement for small employers to match employee contributions or to make mandatory contributions to an employee’s account can also increase costs. Further, small employers stated that general economic uncertainty makes them reluctant to commit to such long-term expenses and explained that they needed to reach a certain level of profitability before they would consider sponsoring a plan. For example, one small employer stated that he wanted to be able to expect consistent profits over several years before he would consider investing in a plan. Another small employer stated that she wanted to triple her business revenue to a little less than $1 million before she would consider sponsoring a retirement plan. Time and personnel: Some small employers stated they may not have sufficient time to administer a retirement plan themselves or lacked the personnel to take on those responsibilities. For example, one small employer said that he was not prepared to assume the burden of managing a plan as he thought it would require almost daily attention and did not have the staff to devote to it. Further, a plan service provider described how the focus of the small employer would not be on absorbing the additional time that starting and maintaining a plan would require. Additionally, a plan sponsor employer stated that, since her business did not have a dedicated human resources person or accountant, she performed these duties herself, as she would ultimately be responsible for any mistakes. Further, small employers may not have time to develop the expertise to investigate or choose financial products, select the best investment options, or track their performance. 
For example, one small employer described how business owners without the financial expertise to compare and select from among different plan options would likely find the experience intimidating. Some small employers stated that they may be less likely to sponsor a retirement plan if they do not perceive sufficient benefits to the business or themselves. For example, several small employers stated that their firms sponsored retirement plans in order to provide the business owners with a tax-deferred savings vehicle. One small employer stated that his firm evaluated the plan annually in order to determine whether it continues to benefit the owners. A service provider observed that the cost of mandatory contributions—such as those associated with safe harbor 401(k) plans—can discourage small employers, since the cost of the contributions can outweigh the benefit to the owners. Low employee demand for an employer-sponsored retirement plan may also be a challenge for small employers. For example, a number of small employers stated that employees prioritized health care benefits over retirement benefits. One small employer thought that, given the limited funds available to contribute towards benefits, his employees would prefer those resources be applied toward lowering the employees’ share of health insurance premiums. Small employers emphasized that offering health care benefits was necessary to attract quality employees. Further, one small employer stated that his employees perceived a more immediate need for health care benefits, while perceiving retirement benefits as a future concern. Additionally, some small employers, such as those who described having younger workforces, stated that their employees were less concerned about saving for retirement and, as a result, were not demanding retirement benefits. Other small employers told us that employees, particularly those with low pay, do not have any interest in retirement benefits because they live paycheck to paycheck and are less likely to have funds left over to contribute to a plan. For example, one small employer discontinued his plan when too few of his employees—most of whom he described as low wage—participated in the plan. Another small employer noted that even senior-level managers in his business did not participate in the plan. However, a retirement expert stated that, while some employees might not be interested in participating in a retirement plan, he believed the perceived lack of demand to be exaggerated. He added that he believed some businesses may use lack of employee demand as an excuse when the small employer was not interested in sponsoring a plan. A number of small employers indicated that they use plan service providers to address various aspects of plan administration, which enabled them to overcome some challenges of starting and maintaining a plan. For example, one small employer said his service provider addresses his plan testing requirements and educates employees about the plan. Another employer noted that her business would not have the time or the expertise to administer their plan without a service provider. A third employer stated that he would not be able to administer a plan without the assistance of a service provider to help navigate the complexity of plan administration. Some stakeholders said that service providers offer small employers plan administration solutions by providing basic, affordable plan options. 
For example, one service provider said a small employer could sponsor a plan for an administrative fee as low as $1,200 annually. This provider and other retirement industry representatives said they are able to provide plan options at affordable rates because they market and administer IRS pre-approved standard plans in high volume, thereby reducing the costs of administration. Even so, while some small employers said the fees service providers charge were affordable, others said they were too high. Further, some stakeholders pointed to other limitations of using service providers, such as the difficulties of choosing a provider, setting up a new plan through a provider, and switching to a new provider, as well as the significant plan responsibilities that remain with the sponsor. For example, a small employer described the process of finding a service provider and setting up a plan as particularly difficult, especially for an employer with little knowledge of retirement plans or experience in working with a service provider. Another small employer said she was not satisfied with the services of her current service provider but would not consider switching to a new one because of the administrative hardships that would entail. Finally, as representatives of the accounting profession noted, even with the assistance of a service provider, small employer sponsors often continue to have significant plan responsibilities, such as managing plan enrollments and separations, and carrying out their fiduciary duties.

Stakeholders provided several suggestions targeted at addressing some of the administrative and financial challenges they believed inhibited plan sponsorship. These proposals, which they said could reduce complexity and ease administrative and financial burdens for small employer plan sponsors, included simplifying plan administration rules, revising or eliminating top-heavy testing, and increasing tax credits.

Simplify plan administration requirements: Several stakeholders suggested proposals that could simplify plan administration requirements and ease administrative burdens for small employers. For example, representatives of a large service provider stated that there is a need for simplification of existing rules and processes for retirement plans and proposed easing nondiscrimination and top-heavy testing requirements as an example. Similarly, several small employers said that federal regulators should strive for simplicity in requirements governing plan administration. A small employer who sponsored a 401(k) plan suggested reducing the amount of paperwork, and another small employer who sponsored a 401(k) plan said federal regulators should “just keep it simple.” One proposal from a national small business association would simplify plan requirements by reducing the frequency of statements sent to certain plan participants, from quarterly to once per year, and allowing some required disclosures to be made available solely online. Another proposal, advocated by IRS, would simplify plan requirements by streamlining interim amendment requirements—an aspect of plan administration that stakeholders identified as particularly burdensome for some small employers. Each year since 2004, IRS has published a cumulative list of changes in plan qualification requirements that must be incorporated by plan sponsors. An IRS official stated that IRS is proposing to replace a requirement for some interim amendments with a requirement for notices to be sent directly to employees.
These notices would explain how a plan intends to comply with changes to relevant laws and regulations and could reduce the burden for plan sponsors by reducing the number of times plan documents must be amended. The amendments that would be subject to the less-stringent requirement would be those triggered by changes to laws and regulations but that do not affect plan benefits.

Revise or eliminate top-heavy testing: A number of stakeholders proposed revising or eliminating top-heavy testing to ease administrative and financial burdens. For example, representatives of the accounting profession told us that top-heavy testing is duplicative because there are other plan testing requirements intended to detect and prevent plan discrimination against rank-and-file employees. The representatives and officials of a large service provider told us that lack of plan participation or high turnover among a business’s rank-and-file employees frequently causes plans sponsored by small employers to become top-heavy. As a result, the representatives said top-heavy testing should be revised or eliminated.

Increase tax credits: Some stakeholders believed that tax credits, in general, are effective in encouraging plan sponsorship and that larger tax credits could encourage more small employers to sponsor plans. However, a stakeholder cautioned that the credits must be sufficient to offset the costs of plan sponsorship, which a service provider said can amount to $2,000 or more per year. Currently, small employers may claim an annual tax credit of up to $500 based on plan startup costs for each of the first 3 years of starting a qualified plan. A national organization representing small employers cited tax credits as a top factor in an employer’s decision to sponsor a plan; however, an organization official said the likelihood of an employer doing so often depends on whether the tax credit offsets a significant portion of the administrative and startup costs of sponsoring plans. Some small employers stated that larger tax credits could ease the financial burden of starting a plan by offsetting plan-related costs, thus creating greater incentives for an employer to sponsor a plan. Other stakeholders said that existing plan startup tax credits are insufficient to encourage plan sponsorship. Officials at another national small business association cautioned that short-term tax credits do not provide sufficient incentives for a small employer to make the long-term commitment of sponsoring a plan. Similarly, one small employer who sponsored both 401(k) and DB plans said there needs to be a larger incentive for the small employer to sponsor a plan because starting and maintaining plans can be expensive. (See the illustrative calculation following this discussion.)

Numerous stakeholders agreed that the federal government could conduct more education and outreach to inform small employers about plan options and requirements; however, opinions varied on the appropriate role for the federal government in this area. A retirement expert said that the federal government can do more to educate consumers about retirement plans and improve general financial literacy. Officials of a service provider to small businesses stated that, because clients are generally not aware of the retirement plan options available to them, the federal government should provide more education and outreach to improve awareness of the plan types available and rules that apply to each.
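Returning to the tax-credit discussion above, the sketch below compares the current startup credit cited in this report (up to $500 per year for each of the first 3 years) with illustrative plan costs. Only the credit amount and the $2,000 annual cost figure cited by one service provider come from this report; the one-time startup cost is an assumption added for illustration.

```python
# Rough illustration of the cost-offset concern discussed above.
# The $500-per-year credit for 3 years and the $2,000 annual cost figure come from
# this report; the one-time startup cost is an assumption for illustration only.
YEARS_OF_CREDIT = 3
MAX_ANNUAL_CREDIT = 500            # current startup tax credit, per year
annual_admin_cost = 2_000          # ongoing cost figure cited by one service provider
one_time_startup_cost = 1_000      # assumed one-time cost to establish the plan

total_credit = YEARS_OF_CREDIT * MAX_ANNUAL_CREDIT                  # $1,500
three_year_cost = one_time_startup_cost + 3 * annual_admin_cost     # $7,000
share_offset = total_credit / three_year_cost
print(f"Credits offset about {share_offset:.0%} of assumed three-year plan costs")
```

Under these assumptions, the credit offsets roughly a fifth of the first 3 years of plan costs, which illustrates why some stakeholders view the existing credit as too small to change sponsorship decisions.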
Another large service provider mentioned that the federal government should provide educational materials that help small employers find quality service providers. In addition, in its 2011 report, ACT made numerous recommendations calling for better publicity of IRS resources. According to the report, the committee recommended, among other things, that IRS explore potential partnerships with community organizations and plan service providers to enhance the visibility of IRS resources for small employers. Although several small employers agreed on the need for more education and outreach about plan options and requirements, opinions varied on the extent to which the federal government should provide these services. For example, a representative of a small employer believed the federal government could provide more educational materials that are easy to understand. Another small employer said the federal government should focus education and outreach on service providers instead of on small employers. Conversely, some small employers said the federal government should have a limited role or no role in providing education and outreach efforts.

There are a number of domestic pension reform proposals from public policy organizations, as well as practices in other countries, that include features, such as asset pooling, that potentially reduce administrative and financial burdens and could boost retirement plan sponsorship among small employers. By pooling funds, small employers realize economies of scale because plan administration is simplified and administrative costs and asset management fees are reduced. Pooling also creates larger plans, which are more likely to attract service providers that previously may have found it uneconomical to service smaller individual plans. One proposal by the Economic Policy Institute, which incorporates the concept of asset pooling, would create a federally managed and federally guaranteed national savings plan. Generally, participation in the program would be mandatory for workers, and employers and employees would be required to make equal contributions totaling 5 percent of employees’ earnings. Funds would be pooled and professionally managed, and benefits would be paid out in the form of annuities to ensure that workers do not outlive their savings. In addition, the automatic IRA—an individual IRA rather than an employer-sponsored plan—is another proposal that draws from several elements of the current retirement system: payroll-deposit saving, automatic enrollment, and IRAs. The automatic IRA approach would provide employers that do not sponsor any retirement plans with a mechanism that allows their employees to save a portion of their pay in an IRA. For most employees, payroll deductions would be made by direct deposit, and enrollment would be automatic unless employees choose to opt out of participation. (See the simplified sketch of these mechanics following this discussion.) However, as we reported in 2009, some of these proposals that call for broader systemic reforms pose other trade-offs. For example, proposals that mandate participation would increase plan sponsorship and coverage for workers. However, mandatory participation may create burdens for some employers, and employers might compensate for the costs of contributing to workers’ retirement plans by reducing workers’ wages and other benefits.
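The contribution mechanics described above can be sketched in simplified form. The 5 percent combined contribution split equally between employer and employee follows the Economic Policy Institute proposal as summarized in this report; the 3 percent automatic IRA default deferral rate and the sample pay amounts are assumptions for illustration only, not features specified in the proposals.

```python
# Simplified sketch of the contribution mechanics described above.
# The 5 percent combined rate (split equally) follows the EPI proposal as summarized
# in this report; the 3 percent automatic IRA default rate and the sample pay amounts
# are assumptions for illustration only.

def epi_style_contributions(gross_pay: float) -> dict:
    """Mandatory employer and employee contributions totaling 5 percent of pay."""
    employee = 0.025 * gross_pay
    employer = 0.025 * gross_pay
    return {"employee": employee, "employer": employer, "total": employee + employer}

def auto_ira_deferral(gross_pay: float, opted_out: bool, default_rate: float = 0.03) -> float:
    """Automatic payroll deduction to an IRA unless the worker opts out."""
    return 0.0 if opted_out else default_rate * gross_pay

# Example biweekly payroll: (worker, gross pay, opted out of the automatic IRA?)
payroll = [("worker_a", 1_500.00, False), ("worker_b", 2_200.00, True)]
for name, pay, opted_out in payroll:
    print(name, epi_style_contributions(pay), auto_ira_deferral(pay, opted_out))
```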
Proposals that guarantee investment returns can protect workers from market fluctuations and can ensure a minimum level of benefits; however, significant costs to the government might result if the guarantee were unsustainable. In addition, proposals that simplify and centralize 401(k) plans may require new regulatory and oversight efforts, and compliance-related costs could be passed on to employers, workers, and taxpayers in general. Retirement systems in other countries also use asset pooling and other features that reduce administrative and financial burdens for small employers and could spur plan sponsorship. For example, the United Kingdom’s National Employment Savings Trust (NEST), launched in 2011, features low fees for participating employers and employees and default investment strategies for plan participants. NEST also permits plan participants to take their retirement accounts with them throughout their working life, which eliminates ongoing administration of those accounts by former employers when a worker leaves a company. As we previously reported, the predominant pension systems in the Netherlands and Switzerland pool plan assets into pension funds for economies of scale and for lower plan fees. Denmark’s pension system also pools plan assets, further lowering administrative costs for small employers.

Despite efforts by the federal government to develop new plan designs and to increase tax incentives to spur plan formation and retirement saving generally, sponsorship remains low among small employers. To some extent, it would be expected that sponsorship rates for small employers would be somewhat lower than for larger employers, partly because of the heavy “churn” of small business formation and dissolution. However, small employers’ sponsorship rates remain far below those of larger firms. If a complete picture of sponsorship by small employers were available—including information on small employers that sponsor SEP IRA plans, which is lacking because IRS currently does not have a means to collect these data—IRS and Labor would be better able to target their research and outreach efforts. Small employers continue to face a variety of challenges to starting and maintaining retirement plans, including obtaining useful information about the large menu of available plan options, managing administrative requirements that small employers reported as burdensome and overly complex, and drawing upon small employers’ often limited resources to administer and finance a plan. While increased competition among plan service providers may result in more affordable options and plans that are easier to start and maintain, options for many small employers may remain out of reach. Federal agencies have a key role to play in understanding and addressing the barriers to plan sponsorship and in spurring sponsorship among small employers by conducting research and providing education and outreach to small employers. Labor and IRS already provide small employers with a great deal of online information. However, much of the information is scattered among a variety of websites and portals in a largely uncoordinated fashion. A small employer with little knowledge of retirement plan options is forced to navigate multiple sources to retrieve relevant information and may be discouraged from doing so. Increased collaboration and more comprehensive strategic planning between these agencies could enhance outreach and education efforts to more small employers.
For example, Labor and IRS could reach out to small employers by utilizing SBA’s extensive network of field offices and by entering into partnerships with public and private organizations. More fundamentally, a coordinated review by the relevant agencies of existing plan designs and their effectiveness in spurring plan sponsorship and participation could help agencies evaluate and develop options that mitigate the barriers to small employer plan sponsorship.

To address the need to strengthen the retirement security of employees at small businesses and to build on interagency data-sharing agreements already in place, we recommend that the Secretary of Labor convene an interagency task force with representatives from Treasury, IRS, and SBA, and other agencies deemed appropriate, to review, analyze, and address the challenges facing small business retirement security in the United States. The aim of this task force would be to develop strategies and arrangements for the agencies to routinely and systematically coordinate their existing research, education, and outreach efforts to foster small employer plan sponsorship. Specifically, this body should focus on, but not be limited to, the following goals:

Conduct plan research on the characteristics associated with small businesses that are more or less likely to sponsor a retirement plan (including employer-sponsored IRA plans) to support agencies’ education and outreach efforts to small employers and provide Congress and the public with information about plan coverage among them.

Evaluate and develop proposals for mitigating barriers to small employer retirement plan sponsorship, including an assessment of the cost-effectiveness of existing plan designs—with regard to the expansion of coverage and the potential to provide an adequate retirement income, as necessary—and the appropriateness of alternative plan designs.

Create a single web portal to centralize federal agencies’ retirement plan information to enhance the visibility and usefulness of federal guidance on plans for small employers.

Considering the lack of information on the number and characteristics of sponsors of SEP IRA plans, as well as their performance in improving retirement security, the Secretary of the Treasury should direct the Commissioner of the Internal Revenue Service to consider modifications to tax forms, such as Forms W-2 or 5498, that would allow IRS to gather complete and reliable information about these plans.

We provided a draft of the report to Labor, Treasury, IRS, Commerce, and SBA for review and comment. Agencies generally agreed with our recommendations. Only Labor provided a written response (see app. VII). Labor, Treasury, IRS, and SBA also provided technical comments, which we incorporated as appropriate. Commerce did not provide comments. In its written response, Labor generally agreed with the findings and conclusions of the report. Labor also noted that, since 1995, the agency has developed various initiatives to provide education and outreach to the small business community—particularly in the context of retirement saving and financial literacy—by partnering with SBA, the U.S. Chamber of Commerce, and other entities to target small employers.
Labor cited these and other efforts as progress in response to our recommendation for a task force that would analyze and address the challenges facing small business retirement security, stating that Labor remains committed to continuing its existing coordination efforts with respect to plan research and developing proposals for mitigating barriers to small business plan sponsorship. However, Labor disagreed with our recommendation to create a unified web portal to centralize retirement plan information for small employers, expressing concerns about its necessity. Specifically, Labor noted that an SBA website, http://www.business.gov, currently serves as the central portal for information—including information about retirement plans—relevant to small employers. However, none of the stakeholders we interviewed during this report—including Labor and SBA officials—identified http://www.business.gov as a resource of retirement plan information for small employers. Further, in reviewing http://www.business.gov, we found the retirement plan information consisted primarily of links that send users to websites maintained by Labor. We did not find links to or information regarding any IRS retirement plan guidance, including the Retirement Plans Navigator—the agency’s key online retirement plan tool for small employers—or http://www.choosingretirementsolution.org, Labor’s online retirement plan tool for small employers. However, even if http://www.business.gov contained links to all available federal guidance on retirement plans for small employers, it is not clear how it would increase the visibility of the guidance among small employers because so few small employers and other stakeholders we spoke with appeared to be aware of its existence. Thus, while we commend Labor for its existing coordination efforts, we continue to believe that there are additional benefits to be gained by consolidating information on retirement plans for small employers into a single, easy-to-use source—an initiative that would also appear to be consistent with the administration’s interest in information technology consolidation and in encouraging agencies to conduct their missions more effectively.

Finally, in its written response, Labor cited BLS’s 2010 National Compensation Survey, which found that an estimated 45 percent of establishments employing fewer than 100 workers offered a retirement plan to their workers. This is not necessarily inconsistent with our estimate of 14 percent of small employers sponsoring some form of retirement plan, given the different units of analysis used. While the National Compensation Survey used “establishment” as its unit of analysis, we chose to use “firms” for the purposes of this study. There are important differences between an establishment and a firm. For example, according to BLS’s definition, an establishment is a single economic unit at a single physical location. Thus, an establishment can be a business at a single physical location or a branch of a larger company operating multiple branches, and the characteristics of each branch are measured as those of a separate business rather than in the aggregate. On the other hand, for this study, we defined a firm as a complete, for-profit, independent business with 1 to 100 employees. As a result, Labor’s estimate comprises a broader population of employers beyond the small employers we examined. Further discussion of our methodology can be found in appendix I.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, Secretary of the Treasury, the Secretary of Commerce, the SBA Administrator, and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact Charles Jeszeck at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. Key contributors are listed in appendix VIII. In the body of this report, we present a range for the rate of employee access to retirement plans. According to the Congressional Research Service (CRS), the differences in the estimates regarding employee access to retirement plans between information obtained from Bureau of Labor Statistics (BLS) and the Census Bureau may stem from the different populations used in the surveys. BLS’s National Compensation Survey (NCS) is conducted among a nationally representative sample of private-sector business establishments. The term establishment usually refers to a single place of business at a particular location. An establishment might be a branch or a small operating unit of a larger firm. The Census Bureau’s Current Population Survey (CPS) is conducted among a nationally representative sample of households. Employer characteristics are reported at the level of the firm, which may include more than one establishment. CRS has reported that, in any given year, the NCS can reasonably be expected to show a higher rate of retirement plan participation than the CPS because the business owners and benefits specialists who are interviewed for the NCS might have greater knowledge about the retirement benefits they sponsor than the household members who are interviewed for the CPS. However, CRS has noted that the gap in Census and BLS estimates has grown over time, further complicating the process of estimating both the proportion of workers without employer-sponsored retirement plans and the trend in retirement plan participation rates. To perform this work, we combined and analyzed 2009 data from the Department of Labor’s (Labor) Form 5500 database, the Internal Revenue Service’s (IRS) Information Returns Processing (IRP) database, and the IRS Compliance Data Warehouse database (CDW) to obtain information on what would make a small employer more or less likely to sponsor a retirement plan, descriptive statistics on small employer retirement plan sponsors and nonsponsors, and descriptive statistics on the types of retirement plans sponsored by small employers. The Form 5500 database provided information on defined benefit (DB) and defined contribution (DC) plans, and the publicly available data was downloaded directly from Labor’s website: http://www.dol.gov/ebsa/foia/foia-5500.html. The IRP database provided information on employer-sponsored SIMPLE IRA and SARSEP IRA plans and was provided by the IRS officials in the Tax-Exempt Governmental Entity Employment Plans division. The CDW database provided the characteristics regarding the universe of small employers with 100 or fewer employees and was provided by IRS officials with the Statistics of Income (SOI) division. 
We assessed the reliability of the Form 5500, the IRP, and the CDW data generally and of data elements that were critical to our analyses and determined that they were sufficiently reliable for our analyses. Our unit of analysis was the small employer, as identified by its employer identification number (EIN). For the purposes of this study, we defined a small employer as an independently owned and operated for-profit firm with at least 1 employee and no more than 100 employees. This definition excluded agricultural businesses, such as farms, as well as tax-exempt organizations, such as nonprofits and government entities. This definition also excluded subsidiary for-profit firms. To prepare the Form 5500 data in advance of combining the data with the other datasets, we screened out any plans that were not entered in the Form 5500 or Form 5500-SF as “single employer plans,” those that did not have a plan year beginning date in 2009, as well as screened out any plans that had entries in the Welfare Benefit Codes. Our analysis did not consider small employers that only participated in multiple employer plans, in which two or more employers maintain a single plan, or multiemployer plans, in which a joint plan is maintained under a collective bargaining agreement between at least one employee organization and more than one employer. As individual employers are not considered sponsors of multiple employer plans and multiemployer plans, including these plans was considered beyond the scope of this report. We then matched the Form 5500 data and the IRP data with the CDW data using the EIN in common. Any matches between a small employer in the CDW database and a plan in either the Form 5500 or IRP database classified the small employer as one that sponsored the plan while any small employers that did not match with a plan were classified as nonsponsors. We developed bivariate and multivariate regression models to estimate the likelihood that a small employer would sponsor a retirement plan using the following small employer characteristics: the number of employees, the annual average wage of the employees, the industry using the 2007 North American Industry Classification System (NAICS) with a depth of two digits, and the region in which the small employer resided as defined by the Census Bureau. For results of the regression model, see appendix VI. The regression model did not include the age of the business as a variable in the model. It is difficult to measure this variable because, over time, a small employer may change its EIN. For example, some small employers change their business structure, which may also require the business to obtain a new EIN. It would be challenging to track businesses over time with changes to the EIN. In addition to the regression model, we produced a descriptive statistical analysis of small employer characteristics using cross-tabulations of the following characteristics: the number of employees, the annual average salary of the employees, the industry using the NAICS with a depth of two digits, and the state in which the small employer is located. The ranges used for the characteristics identifying the number of employees and average annual wages were established using the statistical spreads identified by the regression model. Table 2 is based on guidance produced by Labor and the IRS to educate small employers about their retirement plan options. 
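The matching and modeling approach described above can be illustrated with a minimal sketch. This is not GAO’s actual code: the file names, column names, and the use of the pandas and statsmodels libraries are assumptions for illustration, and the model shown treats employee count and average wage as continuous variables rather than as the ranges discussed above.

```python
# Minimal sketch, under assumed file and column names, of the EIN matching and
# logistic regression approach described above; it is not GAO's actual code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

employers = pd.read_csv("cdw_small_employers.csv")   # universe of small employers, one row per EIN
form5500 = pd.read_csv("form5500_db_dc_plans.csv")   # single-employer DB and DC plans
irp = pd.read_csv("irp_simple_sarsep_plans.csv")     # SIMPLE IRA and SARSEP IRA plans

# An employer is classified as a sponsor if its EIN matches a plan in either source.
sponsor_eins = set(form5500["ein"]).union(irp["ein"])
employers["sponsor"] = employers["ein"].isin(sponsor_eins).astype(int)

# Multivariate logistic regression on the characteristics described above: number of
# employees, average annual wage, two-digit NAICS industry, and Census region.
model = smf.logit(
    "sponsor ~ employee_count + avg_annual_wage + C(naics2) + C(census_region)",
    data=employers,
).fit()

# Odds ratios with 95 percent Wald confidence limits, similar in form to appendix VI.
results = pd.concat(
    [np.exp(model.params), np.exp(model.conf_int(alpha=0.05))],
    axis=1, keys=["odds_ratio", "wald_95_ci"],
)
print(results)
```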
This guidance, titled “Choosing a Retirement Solution for Your Small Business,” can be found at http://www.dol.gov/ebsa/pdf/choosing.pdf. The content of this table is reproduced from the Labor/IRS publication without alteration, with the following exceptions: GAO updated some dollar amounts to reflect changes made for 2012, where applicable (specifically, the maximum annual contributions to the defined contribution plans and SEP IRA plans, and the maximum compensation upon which contributions to non-DB plans may be based), reordered the columns, and omitted information about payroll deduction IRA plans, which are beyond the scope of this review. GAO did not independently verify the legal accuracy of the information contained in the table.

In analyzing small employer retirement plan sponsorship by industry, we found that small employers in health care and manufacturing were most likely to sponsor a retirement plan, while small employers in the food and hospitality industry were least likely to sponsor a plan. See figure 8 for the sponsorship rate by industry and table 3 for a sample list of businesses contained within each industry type.

Appendix VI: Regression Results. The table in this appendix reports odds ratios from the multivariate model, with 95 percent Wald confidence limits (lower and upper) for each small employer characteristic, including industry classification (NAICS); reference categories are shown as “Ref.,” ** denotes a p-value < 0.01, and * denotes a p-value < 0.05.

In addition to the contact named above, individuals making key contributions to this report include David Lehrer, Assistant Director; Edward Bodine, Analyst-in-Charge; Curtis Agor; Kun-Fang Lee; and David Reed. Susan Aschoff, Susan Baker, James Bennett, Michael Brostek, Sarah Cornetto, Cynthia Grant, Catherine Hurley, Anna Kelley, Gene Kuehneman, Karen O’Conor, Dae Park, MaryLynn Sergent, Aron Szapiro, Frank Todisco, James Ungvarsky, and Walter Vance also provided valuable assistance. SOI (IRS) provided valuable assistance in extracting small employer data from the CDW.

Because about one-third of private-sector employees in the United States work for small employers, Congress and federal agencies have made efforts to encourage small employers to sponsor retirement plans for workers. However, federal data show workers’ access to plans remains limited, leaving many without a work-based plan to save for retirement. For this report, GAO examined (1) characteristics of small employers that are more or less likely to sponsor a plan for their employees, (2) challenges small employers face in establishing and maintaining a plan for their employees, and (3) options to address these challenges and attract more small employer plan sponsors. GAO defined small employers as for-profit firms that employ 100 or fewer employees. GAO analyzed Internal Revenue Service (IRS) and Department of Labor (Labor) data, interviewed agency officials and experts, held discussion groups with small employers, and reviewed relevant federal rules, literature, and retirement plan proposals. Based on available data, about 14 percent of small employers sponsor some type of retirement plan. Overall, GAO found that the likelihood that a small employer will sponsor a retirement plan largely depends on the size of the employer’s workforce and the workers’ average wages more than on the industry in which the employer operates and the geographic region in which the employer is located.
GAO found the greatest likelihood of plan sponsorship was among small employers with larger numbers of employees and those paying an average annual wage of $50,000 to $99,999. GAO also found that the most common plans sponsored by small employers are 401(k)s and Savings Incentive Match Plans for Employees (SIMPLE) Individual Retirement Arrangements (IRA), an employer-sponsored IRA designed for small employers, at 46 percent and 40 percent, respectively, of total plans. However, IRS currently does not have the means to collect information on employers that sponsor another type of IRA plan designed for small employers, the Simplified Employee Pension (SEP) IRA plan, which limits what is known about employers that sponsor these plans.

Small employers and retirement experts identified several challenges to starting and maintaining retirement plans. Many small employers said they feel overwhelmed by the number of retirement plan options, administration requirements, and fiduciary responsibilities. For example, many are concerned about the potential risks associated with sponsoring a plan. Although federal agencies conduct education and outreach on retirement plans, a number of small employers and other stakeholders said small employers were unaware of these initiatives. For example, Labor, IRS, and the Small Business Administration (SBA) collaborate to develop and disseminate information and guidance online but do so through separate websites and in a largely uncoordinated fashion. Small employers and other stakeholders also cited other challenges to plan sponsorship, including a lack of financial resources, time, and personnel. However, some small employers said their employees prioritized health benefits over retirement benefits. To address some of the challenges to plan sponsorship, some small employers said they use contracted service providers that perform plan administration tasks.

Small employers and other stakeholders offered options for addressing some challenges and reducing the complexity of plan sponsorship for small employers. Options included simplification of federal requirements for plan administration, such as easing or eliminating certain plan testing requirements. Some stakeholders said increasing the tax credit for plan startup costs could further defray costs and help boost plan sponsorship. Some stakeholders also said that the federal government could conduct more education and outreach efforts to inform small employers about plans. Pension reform proposals in the United States, along with certain features of pension systems in other countries, may provide additional options that could increase plan sponsorship and increase workers’ access to retirement plans. For example, asset pooling is a feature that allows small employers to pool resources for economies of scale, which can lower plan costs. In light of the variety of options, Labor, the Department of the Treasury, IRS, and SBA should jointly evaluate existing options and develop new proposals with the goal of mitigating barriers to small employer plan sponsorship.

GAO recommends that Labor convene an interagency task force with Treasury, IRS, and SBA to coordinate existing research, education, and outreach efforts to foster small employer plan sponsorship. GAO also recommends that IRS consider modifying tax forms to gather complete, reliable information about SEP IRAs. Agencies generally agreed with GAO’s recommendations; however, Labor disagreed with GAO’s recommendation to create a single web portal for federal guidance.
However, because federal resources are scattered across different sites, GAO believes consolidating plan information onto one web portal could benefit small employers.
FHWA plays a key role in funding and overseeing the completion of highway projects. In addition to providing financial assistance and establishing standards for state DOTs to build and improve highways and roads, FHWA—through its division office in each state—provides technical expertise and fulfills oversight functions. State and local governments execute the programs by matching and distributing federal funds; planning, selecting, and supervising projects; and complying with federal requirements. Funding for highway projects represents a large federal investment— about $39 billion in fiscal year 2011. Federally funded highway projects are typically developed in the following four phases: 1. Planning. State DOTs and metropolitan planning organizations (MPO) assess the need for a project in relation to other potential highway project needs. 2. Preliminary design and environmental review. State DOTs identify potential transportation solutions based on needs identified during planning, potential environmental and social effects of those solutions, project cost, and construction location; analyze the effect, if any, of the proposed project and potential alternatives on the environment; and select the preferred alternative. 3. Final design and right-of-way acquisition. State DOTs finalize design plans, acquire property, and relocate residents and businesses. 4. Construction. State DOTs award construction contracts, oversee construction, and accept the completed project. In the preliminary design and environmental review phase, many activities are carried out pursuant to the National Environmental Policy Act of 1969 (NEPA) and other federal laws. Under NEPA, federal agencies must assess the effects of major federal actions—those they propose to fund, carry out or permit—that significantly affect the environment. NEPA has two principal purposes: (1) to ensure that an agency carefully considers detailed information concerning significant environmental impacts and (2) to ensure that this information will be made available to the public. NEPA generally requires federal agencies to prepare documentation showing the extent of the project’s environmental impacts. Per NEPA, the lead agencies—usually a state DOT and FHWA—will determine which of the three documentation types is needed as follows: Projects referred to as ‘categorical exclusions’ (CE) are determined to not individually or cumulatively have a significant effect on the quality of the environment. These projects require no or limited environmental review or documentation under NEPA. Examples of highway projects that are generally processed as CEs include resurfacing, constructing bicycle lanes, installing noise barriers, and landscaping. The vast majority of highway projects are processed as CEs (see fig. 1). Based on data collected in 2009, FHWA estimates that approximately 96 percent of highway projects were processed as CEs. An environmental impact statement (EIS) is required for proposed projects that are determined to have a significant effect on the environment. In broad terms, FHWA starts the EIS process by publishing a notice of intent in the Federal Register. It then consults with resource agencies—such as USACE or FWS—and solicits comments from the public on a draft EIS, incorporates comments into a final EIS, and issues a record of decision. 
Among other things, the record of decision—which is the final step for agencies in the EIS process—identifies (1) the decision made; (2) the alternatives considered during the development of the EIS, including the environmentally preferred alternative; and (3) plans to mitigate environmental impacts. For the 32 projects in which FHWA was the lead agency and signed the EIS in fiscal year 2009, the average amount of time from signing the notice of intent to signing the record of decision was 83 months—almost 7 years. As noted, FHWA estimates that based on its 2009 data approximately 1 percent of all federal-aid highway projects in the United States were processed with an EIS. While projects requiring an EIS are a small portion of all highway projects, they are likely to be high-profile, complex, and expensive projects. For these reasons, many efforts to expedite highway projects and reports which study those efforts, including this report, tend to focus on highway projects requiring an EIS. Project sponsors prepare an environmental assessment (EA) when it is not clear whether a project will have significant environmental impacts. An EA is intended to be a concise document that, among other things, briefly provides sufficient evidence and analysis for determining whether to prepare an EIS. If during the development of an EA, the project sponsor determines that the project will cause significant environmental impacts, the project sponsor will stop producing the EA and, instead, produce an EIS. However, an EA typically results in a finding of no significant impact, a document that presents the reasons why the agency has concluded that there are no significant environmental impacts to occur when the project is implemented. FHWA estimates that, based on its 2009 data, about 3 percent of all federal-aid highway projects were processed using an EA. Numerous federal, state, and local laws determine the processes and tasks highway projects are to complete throughout the four phases. For example, SAFETEA-LU contains provisions that establish policies related to transportation planning and the environmental review process. Various environmental laws—including NEPA, the Endangered Species Act, the Clean Water Act, and the National Historic Preservation Act—establish processes and environmental requirements that projects must meet. Right-of-way acquisition must be accomplished according to the requirements of the Uniform Relocation Assistance and Real Property Acquisition Policies Act of 1970, as amended, a law designed to provide fair treatment of property owners and tenants when they are displaced by federally funded programs, including the construction of a federal-aid highway. Federal-aid highway projects are typically subject to a number of federally required contract provisions, such as nondiscrimination, payment of a predetermined minimum wage, and accident prevention. There are also numerous state and local laws—for example, several states, including California and North Carolina, have laws roughly equivalent to the federal NEPA—that projects must comply with and which help guide projects through various tasks in the process. In addition, a number of provisions created by SAFETEA-LU are intended to help expedite highway projects. We analyzed seven of those provisions that have been implemented (see table 1), focusing primarily on those in Title VI of SAFETEA-LU, which deals with transportation planning and project delivery. 
We surveyed officials from 52 state DOTs about the potential benefits and challenges associated with each of these SAFETEA-LU provisions and did not ask states to quantify these benefits or challenges. During survey pretesting, we learned that any number of variables could impact the time frames for completing a project, such as the SAFETEA-LU provisions we were asking about in our survey, the complexity of each highway project, or even the personalities of individuals working on tasks for the project. As such, our survey findings generally do not indicate specific values for the benefits and challenges (such as time savings) from implementing or using the SAFETEA-LU provisions, but rather represent state DOTs’ perspectives (i.e., the degree to which they agree or disagree that a particular factor could be a benefit or a challenge) on the potential benefits and challenges of those provisions. Completing a highway project can involve many stakeholders—including federal, state, and local government agencies; nongovernmental organizations (NGO); and private citizens—and, for major highway projects, as many as 200 steps from planning through construction (see fig. 2). A number of additional factors can also affect project time frames. A wide range of stakeholders can be involved in highway projects, from federal, state, and local agencies with varying missions and responsibilities to NGOs, contractors, and private citizens. Different factors, however, will help determine the extent to which stakeholders will become involved in the project. For example, if a highway project will not affect endangered or threatened species, it is likely that FWS—which is responsible for implementing the Endangered Species Act for freshwater and terrestrial species—will not become involved in the project. Additionally, some states have developed written agreements—known by a number of terms, including programmatic agreements or memoranda of agreement—with other state or federal agencies that can help to establish a process for consultation, review, and compliance with one or more federal laws, allowing for the project to be reviewed more quickly. Regardless, there are a host of stakeholders that could become involved in a highway project as follows: Transportation agencies. Federal and state transportation agencies are responsible for improving, maintaining, and planning highway systems with a focus on safety, reliability, effectiveness, and sustainability. Among other things, FHWA oversees planning and project completion by reviewing statewide long-range transportation plans, evaluating whether a project meets environmental protection requirements, and authorizing acquisition of property for highway projects it funds. State DOTs are typically the focal point for project planning and construction and are responsible for setting the relevant goals for the state, planning safe and efficient transportation, designing most projects, identifying and mitigating environmental impacts, acquiring property for highway projects, and awarding and overseeing construction contracts. Federal resource agencies. Federal resource agencies, such as those described below, are responsible for managing and protecting natural and cultural resources like wetlands, historic properties, forests, and wildlife: The Advisory Council on Historic Preservation, established by the National Historic Preservation Act of 1966, seeks to promote the preservation, enhancement, and sustainable use of the nation’s historic resources. 
The council advises the President and the Congress on national historic preservation policies and ensures federal agencies take such issues into account when developing and implementing federal projects. USACE issues permits for the dredging and filling of waters of the United States, including wetlands within the agency’s jurisdiction, under Section 404 of the Clean Water Act. The Environmental Protection Agency administers, among other things, the Clean Air and Clean Water Acts. FWS implements the Endangered Species Act with respect to freshwater and terrestrial species. The National Marine Fisheries Service implements, among other things, the Marine Mammal Protection Act and the Endangered Species Act with respect to most marine species and anadromous fishes (which spend portions of their life cycle in both fresh and salt water). The U.S. Forest Service transfers land for highway rights of way within the National Forest System to states through FHWA. State resource agencies. These state-level agencies are generally responsible for managing and protecting the state’s natural and cultural resources. State resource agencies, like their federal counterparts, participate in and review assessments of environmental impacts, in accordance with their responsibilities under federal or state laws. A state historic preservation office advises and consults with federal and other state agencies to identify historic properties and assess and resolve adverse effects to them under the National Historic Preservation Act. Local governments. Local governments involved in highway projects include MPOs and rural planning organizations. Every urbanized area with a population of 50,000 or more has an MPO, an organization made up of representatives of local governments—county, city, and town government officials—for the purpose of transportation planning and coordination of highway and transit projects. According to a nonprofit organization that represents MPOs, there are almost 400 MPOs in the United States. Rural planning organizations are typically voluntary planning organizations that serve as a forum for local officials to develop consensus on regional transportation priorities for an area with a population of less than 50,000. NGOs. NGOs advocate for a number of issues, including the environment and transportation. Examples of NGOs include the following: The Natural Resources Defense Council is an environmental organization that seeks to protect the environment by educating the public, lobbying government officials, and litigating, if necessary. AASHTO advocates for transportation-related policies and provides technical transportation-related support to states. Contractors. Contractors generally are private sector companies that bid on contracts from federal and state transportation agencies to conduct various activities, such as conducting environmental studies or constructing a highway. Private citizens. Private citizens have the opportunity to provide comments and opinions in venues like public hearings. In addition to the involvement of a large number of stakeholders, completing a major highway project takes a number of years because of the many tasks, requirements, and approvals involved throughout the four phases of a highway project. Major highway projects can involve as many as 200 steps from the initial planning phase through the construction phase that require actions, approvals, or input from a number of stakeholders. 
State DOTs and local planning organizations assess a project’s purpose and consider the need for the project in relation to the need for other potential highway projects. To receive federal transportation funding, any project in an urbanized area must emerge from the relevant MPO and state DOT planning processes. For nonmetropolitan areas not covered by an MPO, states must consult with and provide opportunities for local officials to participate in statewide planning. To meet federal planning requirements, states must develop (1) a long-range statewide transportation plan covering a 20-year period and (2) a state transportation improvement program—that is, the state program of transportation projects covering at least a 4-year period that are to be supported with federal surface transportation funds, as well as regionally significant projects requiring an action by FHWA, whether or not federally funded. During preliminary design, a project’s location and design are identified, along with the effect, if any, of the proposed project and of potential alternatives on the environment; eventually, a preferred alternative is selected. Among other tasks, state DOTs identify the preliminary engineering issues, proposed alignment of roadways, and costs, as well as create topographic surveys and conduct traffic studies. During environmental review, the proposed project alternatives are examined and may require review, input, or feedback from relevant resource agencies such as USACE, FWS, or the Environmental Protection Agency. Environmental reviews require state and FHWA officials to address and comply with many federal laws—FHWA has identified over 40 environmental laws—as well as applicable state laws. More complex projects require additional time for the completion of preliminary designs and environmental reviews. In addition, private citizens and local governments are asked to comment on the project and its potential effects. At the end of this phase, the preferred alternative is selected. State DOTs finalize design plans, acquire property, and relocate utilities in the final design and right-of-way acquisition phase. State DOTs develop detailed engineering plans consistent with environmental documents and updated environmental studies, and finalize cost estimates. If a significant amount of time has passed since the preliminary design work was performed, right-of-way maps and other information may need to be updated. Acquiring property for the project includes determining any restrictions to state ownership of the property, determining the identities of property owners, making offers to property owners based on just compensation, negotiating a purchase price, relocating property owners and tenants, and sometimes invoking eminent domain. Utilities must be located, marked, surveyed, and possibly relocated. If there are a significant number of underground utilities, professional engineers, geologists, and land surveyors may be needed to determine the exact location of the utilities. State DOTs award construction contracts, oversee construction, and accept the completed project. State DOTs request and evaluate bids on projects and then award the contract. The federal government is not directly involved in construction, but does have an oversight role. For example, projects that receive federal-aid highway funds require FHWA concurrence on the award. 
During construction, the contractor and the state resolve any unexpected problems that may arise, such as removal of hazardous waste at the construction site. Once satisfied that construction has been carried out as agreed to with the contractor, the state must approve the final completion of construction. In addition to the many stakeholders and tasks involved, a number of other factors can complicate the process and lead to longer highway project time frames such as the following: The availability of funding for large highway projects can affect how long it takes to complete a project. For example, one state DOT informed us it has completed a number of EISs for highway projects, but that these projects are stalled due to a lack of funds. In addition, a state DOT official stated that since the state did not have enough funding to complete major highway projects, they are choosing to focus more on completing smaller, less expensive highway projects such as bridge replacements and repaving. Of those responding to our survey, most state DOTs identified funding as a challenge for all project phases but found it to be more of a challenge in both the planning phase and the preliminary design and environmental review phase. Changes in a state’s transportation priorities during a project’s duration can complicate time frames and delay the project. For example, one administration may favor a highway project when it is first planned and may provide the necessary financial support; however, a new administration with different priorities may come in before the project is completed and withdraw or reduce support and funding. If a project that was shelved garners support again, in some cases, FHWA, the state DOT, or resource agencies might have to reevaluate, rework, and update environmental- or NEPA-related documents and information to ensure that the environmental impact information is current. This can lead to a longer project time frame. Public opposition and litigation can also lengthen highway project time frames or even lead to the cancellation of a project. For example, the Elizabeth Brady Road project in Orange County, North Carolina was canceled by FHWA due to public and local government opposition to the project. After the project began the preliminary design and environmental review phase, local community and government officials determined that there was insufficient need for the project because the potential costs outweighed the project’s potential benefits. As a result, local government officials withdrew their support for the project and it was canceled. Public controversy related to a highway project can sometimes lead to litigation, which can also lengthen highway project time frames. Litigants might settle their lawsuit if, for example, a state DOT agrees to change the design of a project to limit its impact on a species or increase noise abatement measures. Lawsuits can also lead to longer completion time frames. For example, plaintiffs filed suit in 2006 against FHWA and the U.S. Forest Service for a highway project in Alaska, alleging that these parties failed to comply with a number of federal laws, including NEPA. The U.S. District Court found that the final EIS issued for the project was not valid and issued an injunction stopping all work on the project. Upon appeal, the U.S. 
Court of Appeals for the Ninth Circuit upheld the District Court decision. In September 2011, nearly 5 years after the lawsuit was filed, the Alaska Department of Transportation and Public Facilities began work to prepare a supplemental EIS—that is, an updated EIS—for the project. The agency anticipates issuing a record of decision for this project in late 2013. States identified both benefits and challenges with each of the SAFETEA-LU provisions meant to help expedite highway projects but acknowledged alternative solutions for some of the provisions that better served their purposes. In our survey, state DOTs most frequently agreed that the Minor Impacts to Protected Public Land provision of SAFETEA-LU has the potential to save time (see table 2) and has relatively few challenges to implementation. Most respondents agreed that the Minor Impacts to Protected Public Land provision of SAFETEA-LU has potential time savings benefits, and nearly all have used this provision at least once. This provision authorizes an historic site or publicly owned land from a park, recreation area, or wildlife or waterfowl refuge, to be used for a transportation program or project if a DOT determines that such use would result in minor impacts (i.e., "de minimis impacts") to that resource. The Department of Transportation Act of 1966 includes a provision—known as Section 4(f)—which stipulates that FHWA and other DOT agencies cannot approve the use of land from publicly owned parks, recreational areas, wildlife and waterfowl refuges, or public and private historical sites unless (1) there is no feasible and prudent alternative to the use of such land and (2) the action includes all possible planning to minimize harm to the property resulting from use. Complying with Section 4(f) can result in additional time to receive project approval. One NGO we spoke with noted that use of the Minor Impacts to Protected Public Land provision of SAFETEA-LU is a more "common sense" approach that not only allows greater use of these protected properties when only very minor impacts are likely to occur, but also helps to expedite highway projects. Potential benefits. Of those responding to our survey, 92 percent of states (47 of 51 states) agreed that this SAFETEA-LU provision has the potential to save time. In addition, of those responding, 80 percent (41 of 51 states) identified the Minor Impacts to Protected Public Land provision as having the potential to create staffing or personnel savings, and 59 percent (29 of 49 states) identified the provision as having the potential to increase the number of projects completed. Potential challenges. Most states that responded to our survey did not indicate significant challenges to implementing this SAFETEA-LU provision. For example, 82 percent of states (42 of 51 states) disagreed that the participation requirements for this provision are too challenging to fulfill, indicating that this provision may be easier to use or implement than the other provisions. Implementation/use. Of all the SAFETEA-LU provisions we studied, the Minor Impacts to Protected Public Land provision was used most frequently. Of those states responding, almost all (47 of 49 states) had used this provision at least once, with 9 states indicating that they have used this provision for more than 50 percent of their highway projects since SAFETEA-LU's enactment in 2005.
Most states responding to our survey agreed that the Design-Build Contracting provision within SAFETEA-LU has the potential to save time, but many states have not used this contracting method and, therefore, have not had the opportunity to take advantage of this provision. Under the traditional procurement approach, design and construction services must be separated, and a construction contract, which generally goes to the lowest bidder, can be awarded only after the design is complete. Design-build contracting combines the responsibilities for designing and constructing a project in a single contract instead of separating these responsibilities. The Design-Build Contracting provision in SAFETEA-LU repealed the minimum cost requirements for use of design-build contracting for federal-aid highway projects; prior to enactment of SAFETEA-LU, federal-aid highway projects needed total costs exceeding $50 million in order to use design-build contracting. In our survey, state DOTs generally agreed that the Design-Build Contracting provision has the potential to save time, but noted some challenges and limited use. Potential benefits. Of those responding, 79 percent of states (30 of 38 states) agreed that this SAFETEA-LU provision has the potential to save time. Fewer states that responded agreed that other benefits could potentially be realized from use of design-build contracting: 45 percent (17 of 38 states) noted that its use could potentially increase the number of highway projects completed, and 37 percent (14 of 38 states) noted potential staff or personnel savings. Potential challenges. Most states did not indicate significant challenges to using this SAFETEA-LU provision in the survey questions we asked. However, states did describe some challenges with design-build contracting in their written responses. For example, some states are prohibited by state statute from using design-build contracting for highway projects. Other states noted that problems in completing other project tasks, such as obtaining permits, can slow overall project completion time frames such that potential time savings achieved by design-build contracting might be negated. Implementation/use. Of those responding, 60 percent of states (26 of 43 states) have used design-build contracting at least once since enactment of SAFETEA-LU. However, the majority of states that responded (24 of 43 states, or 56 percent) use design-build contracting for less than 10 percent of all highway projects. States noted both in our survey and in our interviews that smaller highway projects—such as resurfacing or landscaping projects that are processed as CEs—generally do not require extensive design work and, as a result, do not lend themselves to the use of design-build contracting. Most states responding to our survey agreed that the 180-Day Statute of Limitations provision has potential benefits, and many have had at least one highway project since SAFETEA-LU's enactment that has taken advantage of it. Prior to enactment of SAFETEA-LU, individuals or organizations generally had up to 6 years in which they could file a judicial claim on a final agency action related to environmental requirements, such as NEPA requirements. This provision of SAFETEA-LU bars claims seeking judicial review of a permit, license, or approval issued by a federal agency for a highway project unless that claim is filed within 180 days of a notice in the Federal Register—FHWA generally publishes these notices—announcing the final agency action.
In our survey, state DOTs generally agreed that the 180-Day Statute of Limitations provision has the potential to save time, and many states have taken advantage of this provision since SAFETEA-LU's enactment; however, some states expressed concerns that a shorter statute of limitations could actually encourage litigation. Potential benefits. Of those responding, 78 percent (32 of 41 states) agreed that this SAFETEA-LU provision has the potential to save time. In addition, 56 percent of those states responding (22 of 39 states) agreed that this provision could result in staff or personnel savings. Only about one-third of those responding (15 of 41 states, or 37 percent) agreed that the provision could result in more projects being completed. Potential challenges. When asked what challenges, if any, could be faced from this SAFETEA-LU provision, 8 states noted that a shorter statute of limitations may actually encourage litigation. These 8 states explained that if the shorter statute of limitations was used, such use could be seen as suspect by outside entities and encourage them to question the project and file a lawsuit against it. Implementation/use. Due in part to the above-mentioned challenge, at least one state has chosen not to take advantage of the shorter statute of limitations. Of those states responding to our survey, 64 percent (29 of 45 states) have used the 180-Day Statute of Limitations provision for at least one project since enactment of SAFETEA-LU, meaning that 36 percent of states (16 of 45 states) have never used the provision. Officials from one state DOT we interviewed did note that they have chosen not to pursue this shorter statute of limitations, as they feel its use might draw undue attention to the project and encourage outside entities to litigate it. Most states responding to our survey agreed that the Offering Financial Assistance to Stakeholder Agencies provision of SAFETEA-LU has potential benefits, including time and staffing or personnel savings, but fewer states have actually taken advantage of this provision. Under this SAFETEA-LU provision, a state DOT can use part of its federal highway funding to support staff for a federal or state agency participating in the environmental review process, such as the local USACE or FWS office. Funds provided in accordance with this provision may only be used for projects in a given state that support activities that directly and meaningfully contribute to expediting and improving transportation project planning and completion. In our survey, state DOTs generally agreed that the Offering Financial Assistance to Stakeholder Agencies provision has the potential to save time, but its use is not as widespread as some of the other SAFETEA-LU provisions. Potential benefits. Of those responding, 77 percent of states (34 of 44 states) agreed that this SAFETEA-LU provision has the potential to save time. The majority of those states responding also agreed that this provision could have potential staff or personnel savings (26 of 44 states, or 59 percent), as well as increase the number of projects completed (25 of 44 states, or 57 percent). Potential challenges. States responding to our survey generally noted some challenges to using this SAFETEA-LU provision. Nineteen of 44 states (43 percent) responding agreed that programmatic agreements could serve their agency better than this SAFETEA-LU provision.
However, only 9 percent of states (4 of 43 states) agreed that a state or agency policy would discourage them from providing financial assistance to affected entities. Implementation/use. Of those responding, 58 percent of states (25 of 43 states) have provided financial assistance to affected entities at least once. However, a large number (18 of 43 states, or 42 percent) have never taken advantage of this provision. In our interviews with state DOTs and federal resource agencies, interviewees also had mixed opinions on this SAFETEA-LU provision. For example, some interviewees stated that use of this SAFETEA-LU provision has created a better working relationship between the state DOT and the affected entity. However, other states we interviewed indicated that they had previously provided financial assistance to affected entities but had seen limited results and had stopped providing such funding. Staff from the federal resource agencies we spoke with were generally familiar with this SAFETEA-LU provision and, in some cases, found it to be helpful in expediting the completion of highway projects. While most states responding to our survey agreed that the Categorical Exclusion Approval Authority provision within SAFETEA-LU has the potential to save time, only three states are participating in this program, and most states indicated that other techniques could achieve the same outcome as this program. This SAFETEA-LU provision authorizes U.S. DOT to assign and a state to assume responsibility for determining whether certain designated activities constitute actions that are categorically excluded from the requirement to prepare an EA or EIS. As noted above, most highway projects in the United States are processed as CEs, thus many of the projects a state DOT leads could be affected by participation in this program. As of April 2012, only three states are participating in this program: Alaska, California, and Utah. These three states have signed memoranda of agreement with their respective FHWA division offices outlining the processes and procedures they are to follow once assuming authority to approve CEs. Per SAFETEA-LU, these agreements are to last no more than 3 years, but can be renewed by mutual agreement of both the state DOT and FHWA. States that choose to participate in this program are required to accept federal court jurisdiction for the decisions they make under the program. Highway stakeholders often refer to this aspect of the Categorical Exclusion Approval Authority provision as requiring the state legislature to ‘waive its sovereign immunity.’ In our survey, state DOTs generally agreed that the Categorical Exclusion Approval Authority provision has the potential to save time, but several respondents supported the use of approaches other than this program to achieve a similar outcome. Potential benefits. Of those responding, 76 percent (34 of 45 states) agreed that this SAFETEA-LU provision has the potential to save time. States also saw this provision as having the potential to increase the number of projects being completed (26 of 44 states, or 59 percent) and create staffing or personnel savings (22 of 45 states, or 49 percent). Potential challenges. The majority of those responding to our survey (29 of 49 states, or 59 percent), as well as some state DOTs we spoke with, indicated that the use of agreements—such as programmatic agreements or memoranda of agreement—could serve the state DOTs better than this SAFETEA-LU provision. 
Seventeen state DOTs noted in our survey that they have undertaken efforts to establish agreements with their respective FHWA division offices or federal and state resource agencies. Among other things, these agreements establish policies and procedures for the state DOTs to follow in certain situations and scenarios. For example, the Missouri DOT has entered into a programmatic agreement with the FHWA division office to allow the state DOT to classify certain activities specified in the agreement as CEs without submitting each project to FHWA for approval of an environmental classification of CE. Agreements such as these allow relevant agencies—in this case, the FHWA division office—to make certain that projects comply with relevant laws and regulations but relieve the agency of the burden of having to review every project that the state DOT undertakes. Implementation/use. As noted above, only three states—Alaska, California, and Utah—are participating in the program created by this SAFETEA-LU provision. All three state DOTs indicated that they have seen positive outcomes from their participation in the program. States saw the Issue Resolution Process provision within SAFETEA-LU as having some potential to save time, but none has used this provision, and most saw the use of written agreements between parties—including programmatic agreements or memoranda of understanding—as a better alternative. This SAFETEA-LU provision established procedures for resolving issues that could delay completion of the environmental review process or could result in denial of approvals required for the project under specific laws, such as the Clean Water Act or the Endangered Species Act. In general terms, a meeting of the relevant agencies can be convened to resolve the issues at hand; if a resolution cannot be achieved, the lead agency—for most federal-aid highway projects, this would be FHWA—is to notify a number of interested parties, including the Senate Committee on Environment and Public Works, the House Committee on Transportation and Infrastructure, and the Council on Environmental Quality within the Executive Office of the President. In our survey, state DOTs indicated that this SAFETEA-LU provision has some potential to create time savings but generally saw the use of programmatic agreements as a better alternative for resolving issues between parties. Potential benefits. Of those responding, 61 percent (22 of 36 states) agreed that this SAFETEA-LU provision has the potential to save time. States generally did not agree that other potential benefits could arise from the use of this SAFETEA-LU provision: 37 percent (14 of 38 states) agreed that its use has the potential to create staffing or personnel savings, and only 29 percent (11 of 38 states) agreed that its use could increase the number of projects completed. Potential challenges. The majority of the states responding to this portion of the survey (25 of 41 states, or 61 percent) indicated that established agreements, like a programmatic agreement, could better serve their agency than this SAFETEA-LU provision. Some of the state DOTs we interviewed indicated that they had programmatic agreements in place with various parties, such as federal resource agencies, that established procedures by which issues could be identified and resolved. 
States and federal resource agencies told us that they would prefer that issues be identified and resolved at lower staff levels, rather than by management or executives, or through the process established in this SAFETEA-LU provision. Implementation/use. As noted above, this SAFETEA-LU provision has not been used or implemented, and highway stakeholders we interviewed noted that resolving these disputes using methods other than this SAFETEA-LU provision is preferred. The majority of states responding to our survey agreed that the NEPA Approval Authority provision within SAFETEA-LU has the potential to save time, but most states indicated that it is too burdensome to begin participating. This SAFETEA-LU provision required the establishment of a pilot program to permit not more than five states to assume certain federal environmental review responsibilities, such as the environmental reviews required under NEPA or other federal laws. SAFETEA-LU listed five states that were given the opportunity to participate in this pilot program: Alaska, California, Ohio, Oklahoma, and Texas. To date, California is the only state that is participating in this pilot program. Other states expressed interest but withdrew their applications to participate. Eventually, FHWA opened the pilot program to all states, but limited participation to a total of five states, as called for in SAFETEA-LU. Much like the Categorical Exclusion Approval Authority provision of SAFETEA-LU, states that choose to participate in this program are required to accept federal court jurisdiction for the decisions they make under the program, an action that is generally undertaken by the state legislature and that highway stakeholders often referred to as requiring the state legislature to 'waive its sovereign immunity.' In our survey, state DOTs agreed that the NEPA Approval Authority pilot program has the potential to save time, but a majority of respondents indicated that participation requirements for this provision are too challenging to fulfill. Potential benefits. Of those responding, 56 percent of states (19 of 34 states) agreed that this SAFETEA-LU provision has the potential to save time. States generally agreed that this provision does not have the potential to save staffing or personnel resources or increase the number of projects completed. Potential challenges. The majority of states responding to this section of our survey (27 of 33 states, or 82 percent) indicated that the participation requirements for this initiative are too challenging to fulfill. This message was reiterated in interviews we conducted with state DOTs. For example, officials from these agencies stated that accepting federal court jurisdiction for the environmental review decisions they make was something they, their agency management, or their state legislature—which would need to approve the acceptance of such responsibility—did not wish to take on. Implementation/use. As noted above, California is the only state that is currently participating in this pilot program. According to the California Department of Transportation, highway projects requiring an EA now take about 30 months less to complete than they previously did. In addition, staff from some of the federal resource agencies we spoke with indicated that California's participation in the pilot program has generally been beneficial, with staff from one resource agency calling for California's continued participation in the pilot program.
While California has reported a time savings from its participation in the NEPA Approval Authority pilot program, other states with whom we spoke did not express interest in this pilot program, with most states citing the requirement to accept federal court jurisdiction for the decisions they make under the program as a key reason why they do not wish to participate. In addition, at least two states indicated that they appreciate having FHWA make these environmental decisions. More specifically, they stated that FHWA has the staff and expertise to make informed decisions regarding environmental impacts. States have implemented a variety of efforts to expedite highway projects, and FHWA has initiated efforts to share innovative practices. Some state efforts began in the 1990s in response to challenges faced at that time. Other state efforts are more recent, prompted by new authorities provided by SAFETEA-LU or by streamlining concepts recently promoted by FHWA. FHWA is making efforts to share innovative practices to help expedite highway projects, most recently through an effort known as Every Day Counts. However, it is too soon to determine the effect these initiatives have had on highway project time frames. Most states have made efforts to expedite projects, with state DOTs playing a key role in choosing the techniques that are used. In our survey of state DOTs, we asked officials about initiatives they have undertaken since the enactment of SAFETEA-LU to expedite the four phases of highway projects. Most states—43—reported that they have implemented at least 1 initiative, 4 states reported undertaking no initiatives, 3 states did not respond for any phase, and 2 states reported no initiatives for some phases and no response for other phases. According to the survey, states most often implemented initiatives involving the preliminary design and environmental review phase (39 states). Twenty-two states reported implementing initiatives involving the planning phase, 15 states involving the final design and right-of-way acquisition phase, and 19 states involving the construction phase. We also asked officials about the potential benefits that could be realized from the initiatives they had undertaken. For each of the four phases of a highway project, time savings was the benefit most often cited by states. Staff savings was cited as a potential benefit by a majority of states for all phases except construction, when it was cited as a potential benefit by 39 percent of the states (7 of 18 states) responding. Increased number of projects completed was cited as a benefit by a majority of officials responding for all phases except construction, where it was cited as a potential benefit by half of the states responding (9 of 18). State DOTs reported implementing a variety of types of initiatives to expedite highway projects but generally not one type more than another. In fact, only 4 of more than 30 initiatives were reported by 10 or more states: Linking Planning and Environmental Review. Twenty-three states reported implementing steps that linked their planning and environmental review processes. Using information collected in the planning phase and carrying it through the environmental review phase can minimize duplication of effort and reduce delays in project implementation. For example, the North Carolina Department of Transportation designed a project development process, implemented in 1997, that promotes early involvement of state and federal stakeholders.
Each project must pass seven concurrence points that cover aspects of project planning, environmental review, and permitting. This process reduces permit processing times from years to months, according to North Carolina Department of Transportation officials. Using Programmatic Agreements. Seventeen states reported implementing programmatic agreements. These written documents establish a process for consultation, review, and compliance with one or more federal laws between one or more parties, such as a state DOT and a resource agency. Programmatic agreements can help reduce project time frames. For example, an agreement between the Illinois Department of Transportation and FHWA created both a procedure for negotiating project-specific time frames for completing environmental reviews and completion time goals for EISs and EAs. After processing five EISs and four EAs under the agreement, project completion time was reduced by at least 2 years, according to a 2010 AASHTO report for FHWA. Some state DOTs have used programmatic agreements for more than a decade, including at least four states that have used programmatic agreements since the 1990s. An agreement between the California Department of Transportation, FHWA, and four resource agencies has been in place since 1991. Using Design-Build Contracts. Eleven states reported implementing design-build contracts. Again, design-build contracting combines the responsibilities for designing and constructing a project in a single contract instead of separating these responsibilities. Design-build contracting can provide significant time savings compared with the design-bid-build approach where design and construction phases must take place in sequence, according to FHWA. Using Other Nontraditional Construction Contracts. Eleven states reported implementing other nontraditional construction contracts. These included construction manager/general contractor contracts and other nontraditional contract approaches such as cost plus time bidding, lane rentals, and contractor completion incentives and disincentives. State DOTs also reported implementing several other highway project streamlining initiatives, including use of electronic bidding, clarifying the scope of preliminary design, linking the final design and right-of-way acquisition phase with prior project development phases, as well as early right-of-way purchases. FHWA is sharing information on methods to expedite highway projects with state DOTs through an effort called Every Day Counts. This effort’s goals are to shorten project time frames and accelerate use of technology and innovation by convincing states to adopt proven, rapidly deployable innovations. Many of these innovations were in use by some states before they were selected for promotion through Every Day Counts: as discussed earlier, for example, California’s 1991 programmatic agreement or North Carolina’s 1997 project development process. FHWA selected its Every Day Counts innovations through a process that involved headquarters and division office staff, as well as outside organizations such as AASHTO, Associated General Contractors, and the American Road and Transportation Builders Association. Every Day Counts was introduced at AASHTO meetings in spring 2010 and subsequently promoted at 10 regional summits sponsored by FHWA and AASHTO. Each state was asked to decide on specific initiatives it wanted to pursue and develop a plan for implementing them during 2011 and 2012. 
States were also asked to create transportation innovation councils to track attainment of goals. Under Every Day Counts, FHWA urged state DOTs to consider use of 15 specific initiatives—10 designed to shorten project time frames and 5 designed to accelerate technology deployment. The Every Day Counts initiatives described below include 3 of the 4 initiatives that states have taken as described above. See table 3 for brief descriptions of the initiatives in FHWA’s Every Day Counts effort for implementation in 2011 and 2012 and appendix II for more detailed information on those Every Day Counts initiatives. FHWA expects to introduce a new set of initiatives in late 2012, to be implemented during 2013 and 2014, and intends that another series of initiatives will follow for the period from 2015 to 2016. FHWA began, in October 2011, the process of soliciting ideas for new initiatives to implement in 2013, saying it would consider proposed initiatives based on factors such as transportation system impact, readiness for deployment, affordability, and urgency. FHWA has developed performance measures for Every Day Counts that are linked to U.S. DOT performance measures, but it is too soon to determine the effect these initiatives have had on expediting highway projects, according to an FHWA official. FHWA and other highway project stakeholders developed one or two performance measures for each of the Every Day Counts initiatives. For example, FHWA established the following performance measure for the “Expanding Use of Programmatic Agreements” under Every Day Counts: “FHWA will expand, revise, or create 15 programmatic agreements at the state and regional scale by December 30, 2011.” These performance measures support the overall goals of the Every Day Counts effort, which the FHWA administrator has stated are “to cut project delivery time in half and more quickly advance innovation into daily practice.” The Every Day Counts performance measures and goals that FHWA established are linked to a U.S. DOT performance measure, an attribute of successful performance measures. Specifically, since fiscal year 2010, U.S. DOT has had a performance measure to streamline environmental review with a target of 48 months to complete an EIS for major transportation projects. U.S. DOT noted in its fiscal year 2011 performance report that Every Day Counts is an effort to help reduce project times. Every Day Counts includes 13 specific initiatives to streamline time frames for all four phases of highway projects, including the environmental review phase. While FHWA has collected data to address the U.S. DOT target noted above and data on the Every Day Counts initiatives, states have only had about 1 year to implement the Every Day Counts initiatives and, according to an FHWA official, it is too soon to tell if those initiatives have had a positive effect on expediting the completion of highway projects. For fiscal year 2011, FHWA reported that the median time to complete EISs was 79 months (about 6.6 years). However, FHWA also reported that 16 of the EISs completed during fiscal year 2011 were started before August 2005, when SAFETEA-LU was enacted, and the median time to complete those EISs was 110 months (about 9 years). For the 7 remaining EISs completed during 2011, the median time to completion was 44 months (a little under 3.7 years). 
These data suggest that the full impact of the Every Day Counts initiatives on the time to complete EISs may not be discernable for several years, if ever, due to a number of reasons. Such reasons could include (1) the lengthy time frames needed to adopt complex initiatives such as linking planning and environmental review or compensatory mitigation, (2) the possibility of EISs being completed which had started before SAFETEA-LU was enacted or Every Day Counts began, or (3) the impact of the SAFETEA-LU provisions discussed above that were also meant to expedite highway projects. FHWA’s efforts to share promising practices depend on the willingness of state DOTs to adopt them. Each state has identified Every Day Counts initiatives to use, according to an FHWA report. An FHWA headquarters official provided the following examples of Every Day Counts initiatives that have achieved wide acceptance: States have, since the introduction of Every Day Counts, generated 56 new programmatic agreements, far more than the Every Day Counts goal of 15 programmatic agreements by December 2011. Nearly all states are using warm mix asphalt. Interest in the product increased after FHWA promoted it as an Every Day Counts initiative. Over 40 states are using prefabricated bridge elements. They report working on 663 bridges, far more than the Every Day Counts goal of designing or building 100 bridges by December 2012. Completing major highway projects involves a complex process that depends on a wide range of stakeholders conducting many tasks. The long time frames to complete highway projects are often caused by factors outside the control of state DOTs, such as a lack of available funds, changes in a state’s transportation priorities, or litigation. These factors can be project specific and may not be controllable by legislation, or by federal or state initiatives. The SAFETEA-LU provisions meant to help expedite highway projects are generally viewed by state DOTs as having the potential to save time. However, given that state DOTs noted in our survey that there are other solutions outside of the SAFETEA-LU provisions that better serve their needs and are within their authority to implement, it is unlikely that state DOTs will greatly increase their participation in some of the SAFETEA-LU provisions we analyzed, particularly those that delegate environmental review decision-making authority from FHWA to state DOTs and require the state to accept federal court jurisdiction for such decisions. Regardless, keeping these provisions in law would continue to give state DOTs the ability to pursue these provisions should they later choose to do so. FHWA’s Every Day Counts effort offers a structured approach to collecting and sharing information with state DOTs to help expedite highway projects. FHWA’s continued efforts to (1) track the progress of Every Day Counts using the performance measures it developed for each initiative and (2) use Every Day Counts as a way to keep introducing new initiatives for trial and adoption by state DOTs can help to ensure that promising practices are developed and shared among states. Additionally, use of the Every Day Counts effort could help U.S. DOT as it attempts to meet its performance measure to streamline environmental review. We provided U.S. DOT with a draft of this report for review and comment. U.S. DOT provided technical comments, which we incorporated as appropriate. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretary of Transportation; the Administrator, Federal Highway Administration; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our work for this report was focused on federal-aid highway projects and efforts to expedite those projects. In particular, this report addresses the following questions: (1) What is the process for planning, designing, and constructing federally funded highway projects, and what factors could affect project time frames? (2) What are the views of state departments of transportation (DOT) on the benefits and challenges of implementing initiatives to expedite highway projects established by the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU)? (3) What practices have state DOTs and the Federal Highway Administration (FHWA) implemented to expedite highway projects? To describe the process for completing highway projects, as well as the factors that could affect project time frames, we reviewed and analyzed relevant legislation—particularly SAFETEA-LU—regulations, congressional hearing statements, and relevant reports and other publications. We also conducted a number of interviews in six states to collect information on practices involved in completing highway projects, as well as factors that could affect project time frames (see table 4). We chose these six states using several criteria, including participation in the environmental review delegation programs established under SAFETEA-LU (i.e., the Categorical Exclusion Approval Authority and the NEPA Approval Authority provisions), number of active and inactive environmental impact statements, amount of federal highway funding received in fiscal year 2011, and geographic location within the United States. The six states we chose to interview were Alaska, California, Missouri, North Carolina, Pennsylvania, and Utah. In each state, we interviewed the FHWA division office and the state DOT. To obtain more detailed information on processes to complete highway projects—but to minimize the burden on interviewees and in the interest of time—we interviewed regional offices of key resource agencies in two of the six states we selected: California and North Carolina. We chose these states based on geographic diversity, as well as previously conducted fieldwork. We selected six resource agencies to interview, as they were often cited in our preliminary review of reports, publications, and other documents, as well as in early interviews with state DOTs. The resource agencies we interviewed were the U.S. Army Corps of Engineers, U.S. Environmental Protection Agency, U.S. Fish and Wildlife Service, U.S. Forest Service, National Marine Fisheries Service, and state historic preservation offices.
There are potentially other federal—as well as state, local, and tribal—agencies that could have been interviewed, but we chose to limit our scope to these six federal agencies. These interviews are not generalizable to all states. Furthermore, for this report, we focused only on federal-aid highways and not other types of highways. To identify state DOT perspectives on the benefits and challenges associated with implementing provisions meant to help expedite highway projects established in SAFETEA-LU, we (1) reviewed information obtained in the above mentioned interviews and (2) conducted a survey of state DOTs. To conduct this survey, we identified key provisions within SAFETEA-LU that we felt were meant to expedite highway projects based on our review of the legislation, analysis of relevant reports, and interviews with highway project stakeholders. We then developed a draft survey to gather state DOTs’ perspectives on the benefits and challenges associated with these SAFETEA-LU provisions. We selected five states in which to conduct pretests: California, Iowa, Pennsylvania, Utah, and Washington. In each pretest, we provided a state DOT official with a copy of our draft survey, asked them to complete it, and then contacted them after 1 hour to discuss the clarity of each question. Through this method, we were able to refine the questions and closed-ended responses in our survey. After the five pretests were completed, we provided a draft copy of the survey to FHWA and AASHTO for their review and comment. Both organizations provided technical comments that we incorporated, as appropriate. Using our professional judgment based on early interviews with highway project stakeholders and our pretests, we determined that the survey should be sent to environmental officials at the state DOTs. However, because the survey considered other aspects of highway projects—for example planning, right-of-way acquisition, and construction—language was included in our transmittal e-mails and in the introduction of the survey to indicate that the state DOT official receiving our survey should consult with his or her colleagues when completing it. We felt that it would be far more cumbersome for respondents, and potentially less reliable, if we were to develop and transmit separate surveys for each highway project phase—that is, individual surveys that covered planning, preliminary design and environmental review, final design and right-of-way acquisition, and construction. Thus, one survey was developed and respondents were asked to share it and consult with colleagues when providing responses. We used lists of environmental officials at the state DOTs that were compiled by AASHTO and the Transportation Research Board to determine the relevant survey respondents. The full universe for this survey was 52 state DOTs: all states, the District of Columbia, and Puerto Rico. We took steps, such as sending early notification e-mails, to help ensure that the list of respondents we created was accurate. We launched our survey on November 30, 2011. We sent e-mail reminders and telephoned survey respondents who had not completed the survey, urging them to do so as soon as possible. We eventually received responses from all 52 state DOTs. We reviewed these responses for inaccuracies or omissions, analyzed the data, and have presented the key findings in this report. 
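To illustrate the kind of tabulation that underlies the percentages reported throughout this report, the sketch below computes agreement rates from survey-style records. It is a minimal, hypothetical example in Python: the field names, answer categories, and records are illustrative assumptions and are not drawn from the actual survey data or from the analysis programs we used.

    from collections import Counter

    # Hypothetical survey records, one per responding state DOT.
    # The field names and answer values are illustrative only; they do not
    # reflect the actual survey instrument or responses.
    responses = [
        {"state": "A", "provision_saves_time": "agree"},
        {"state": "B", "provision_saves_time": "disagree"},
        {"state": "C", "provision_saves_time": "agree"},
        {"state": "D", "provision_saves_time": None},  # item left blank
    ]

    def tabulate(records, item):
        """Count answers to one survey item, excluding nonresponses."""
        answered = [r[item] for r in records if r.get(item) is not None]
        return len(answered), Counter(answered)

    total, counts = tabulate(responses, "provision_saves_time")
    for answer, n in sorted(counts.items()):
        # Mirrors the "n of total states (percent)" style used in the report text.
        print(f"{answer}: {n} of {total} states ({100 * n / total:.0f} percent)")

Excluding blank items from the denominator in this way is consistent with how the counts reported for individual questions vary (for example, 47 of 51 states for one item and 29 of 49 states for another).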
The survey and its responses—with the exception of open-ended responses or other identifying information—are reproduced in our e-supplement for this report: see GAO-12-637SP. While all state DOTs were included in our survey and, therefore, our data are not subject to sampling errors, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond to a question can introduce errors into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. We collaborated with GAO survey specialists to design draft questionnaires and, as previously noted, versions of the questionnaire were pretested, revised, and sent to FHWA and AASHTO for review and comment. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error and addressed such issues, where possible. A second, independent analyst checked the accuracy of all computer analyses to minimize the likelihood of errors in data processing. In addition, GAO analysts answered respondent questions and resolved difficulties that respondents had in answering our questions. To describe the practices state DOTs have implemented on their own to help expedite highway projects, we included a series of questions in our survey of state DOTs asking respondents to identify practices they have implemented in each highway project phase: planning, preliminary design and environmental review, final design and right-of-way acquisition, and construction. State DOTs provided these responses to open-ended questions, which we analyzed. The practices we identified in this report were those that were cited most frequently by survey respondents. To collect additional information on efforts both state DOTs and FHWA have implemented to help expedite highway projects, we reviewed and analyzed information obtained during the interviews with FHWA (both headquarters and division offices), state DOTs, and resource agencies. The Federal Highway Administration (FHWA) is sharing information on methods to expedite highway projects with state departments of transportation (DOT) through an effort called Every Day Counts. This effort's goals are to shorten project time frames and accelerate technology and innovation by convincing states to adopt proven, rapidly deployable innovations. Under Every Day Counts, FHWA urged state DOTs to consider use of 15 specific initiatives. Every Day Counts promoted the following 10 initiatives to shorten project time frames: Linking planning and environmental review. This initiative promotes use of planning documents and decisions from the project planning process in the environmental review process. It takes environmental, community, and economic information collected early in the planning stage and carries it through project development, design, and construction. This can lead to decision making that minimizes duplication of effort, promotes environmental stewardship, and reduces delays in project implementation. Early consultation with FHWA environmental attorneys. Decisions made early in planning and project development are often the root causes of problems identified later in the environmental review process when National Environmental Policy Act of 1969 (NEPA) and Section 4(f) documents undergo legal scrutiny.
Consulting with FHWA environmental attorneys at early decision points can help decision makers avoid problems later, saving time and costs. Expanded use of programmatic agreements. A programmatic agreement is a document that formally spells out the terms of an agreement between a state DOT and other state and/or federal agencies. A programmatic agreement establishes a process for consultation, review, and compliance with one or more federal laws. According to FHWA, programmatic agreements have been effective in producing time savings for completing highway projects. Compensatory mitigation. The federal Clean Water Act requires compensatory mitigation for projects that cause unavoidable impacts to streams, wetlands, and other waters of the United States. Mitigation for federally protected species may also be required through the Endangered Species Act. Some state laws and regulations also require compensatory mitigation. The permitting process under Section 404 of the Clean Water Act constitutes a major component of the project development and completion process. This initiative proposes expanded use of in-lieu fees and mitigation banking in order to save time and expedite highway projects. See table 5 for information on these compensatory mitigation approaches. Some states have never used mitigation banks or in-lieu fee programs, while others use them for the majority of their mitigation needs. Clarifying scope of preliminary design. Some consider preliminary design to involve only the activities needed to make a NEPA determination; they view everything else as final design activities. This cautious approach delays highway projects because it postpones essential planning until it is too late to be effective. States have the flexibility to pursue many design activities not required for a NEPA determination under preliminary design. When performed concurrently with the NEPA process, these activities can expedite projects without affecting eligibility for federal aid. For example, states can perform soil borings and prepare preliminary traffic control plans and grading plans. Allowable right-of-way acquisition streamlining. Before building a highway project, land and property must be acquired by federal, state, and local agencies through right-of-way practices and procedures. Rather than moving elements of a project through the right-of-way process sequentially, these agencies may move them concurrently. This can significantly shorten the highway project development process. To save time, agencies can use process flexibilities, including appraisal waiver valuations, incentive payments to advance acquisition and relocation, or appraisals and negotiations of property acquisition (up to $10,000) by the same individual. Effective coordination for utility relocation. Approximately half of all highway and bridge projects eligible for federal funding require the relocation of utilities or adjustments to accommodate them. Gas lines, water lines, waste plumbing, electrical wires, telephone lines, and other wiring are often affected by highway and bridge projects. Flexibilities in place under federal law and regulations foster effective utility coordination during project development by determining the best strategy for physically relocating utilities, documenting the terms and considerations for accomplishing utility relocation activities, and financing the work in an effective and timely manner. FHWA technical assistance for stalled projects.
This initiative focuses on new projects where problems are anticipated in conducting an effective project development process, or on "ongoing EISs" where 60 months or more have elapsed since the publication of the project's notice of intent and no record of decision has been issued. FHWA technical assistance teams will resolve many issues that would otherwise hold up the NEPA review or otherwise delay a project's progress. FHWA subject matter experts can help resolve resource-specific issues concerning wetlands, endangered species, and cultural resources. Use of design-build contracts. Traditionally, a project is designed, put out for bid to construction firms, and then built by the winning bidder (design-bid-build). As discussed above, design-build is an alternate contracting method in which the design and construction phases are combined into one contract, eliminating the separate bid phase and allowing certain aspects of design and construction to take place at the same time. This can provide significant time savings compared with the design-bid-build approach, where the design and construction phases must take place in sequence. Use of construction manager/general contractor contracts. In a construction manager/general contractor project, the owners of a project typically are able to hire a general contractor early in the design phase so that the state may benefit from the contractor's constructability input as the design develops. This contract type allows state DOTs to remain active in the design process while assigning risks to the parties most able to mitigate them. This can save time because a number of activities can be undertaken concurrently. FHWA allows this type of contract only on a trial basis because approval is necessary for any nontraditional construction contracting technique that deviates from the competitive bidding provisions in Section 112 of Title 23 of the U.S. Code. FHWA promoted five tools to accelerate use of innovative technology, including three that can shorten the time needed to complete highway projects: Warm mix asphalt paving. Warm mix asphalt is the generic term for technologies that allow asphalt to be produced and then placed on the road at lower temperatures than the conventional hot-mix method. In most cases, the lower temperatures result in significant cost savings and reduce greenhouse gas emissions because less fuel is required. Warm mix asphalt also has the potential to extend the construction season, allowing projects to be completed faster. Prefabricated bridge elements. Use of prefabricated bridge elements means that many time-consuming construction tasks no longer need to be done sequentially in work zones. An old bridge can be demolished while the new bridge elements are built at the same time off-site, then brought to the project location ready to erect. Because the bridge elements are usually fabricated under controlled climate conditions, weather has less impact on the quality, safety, and duration of the project. The use of prefabricated bridge elements also offers cost savings. The ability to rapidly install prefabricated bridge elements on-site can reduce the environmental impact of bridge construction in environmentally sensitive areas. See a photograph of prefabricated bridge elements being assembled in figure 3. Integrated bridge support technology.
Instead of conventional bridge support technology, an innovative bridge system technology uses alternating layers of compacted granular fill material and fabric reinforcement sheets to provide support for the bridge (see fig. 4). The technology, known as geosynthetic reinforced soil technology, offers advantages in the construction of small bridges, including the following: reduced construction time and cost, ease of construction with common equipment and materials, ease of maintenance, and flexible design that is easily modified in the field for unforeseen site conditions. Safety Edge. The Safety Edge, though not designed to save time in completing a highway project, is a simple but effective solution that can help save lives by allowing drivers who stray off highways to return to the road safely. Instead of a vertical drop-off, the Safety Edge shapes the edge of the pavement to 30 degrees—the optimal angle to allow drivers to reenter the roadway safely. FHWA's goal is to accelerate the use of the Safety Edge technology, working with states to develop specifications and adopt this pavement edge treatment as a standard practice on all new and resurfacing pavement projects. Adaptive signal control technology. Poor traffic signal timing contributes to traffic congestion and delay. Conventional signal systems use preprogrammed, daily signal timing schedules. Adaptive signal control technology adjusts the timing of red, yellow, and green lights to accommodate changing traffic patterns and ease traffic congestion. Adaptive signal control technology is not designed to save time in completing a highway project; its main benefits over conventional signal systems are that it can continuously distribute green light time equitably for all traffic movements, improve travel time reliability by progressively moving vehicles through green lights, reduce congestion by creating smoother flow, and prolong the effectiveness of traffic signal timing. In addition to the individual named above, Sara Vermillion (Assistant Director); Richard Bulman; Russell Burnett; Richard Calhoon; Steven Elstein; Lorraine Ettaro; Kathleen Gilhooly; Phillip Herr; Richard Johnson; Hannah Laufe; Faye Morrison; Joshua Ormond; Daniel Paepke; and Amy Rosewarne made key contributions to this report.
Projects to construct, improve, and repair roads and bridges are fundamental to meeting the nation's mobility needs. However, completing highway projects, which generally involve four phases consisting of (1) planning, (2) preliminary design and environmental review, (3) final design and right-of-way acquisition, and (4) construction, can sometimes take a long time. In 2005, SAFETEA-LU established provisions to help expedite highway projects, including streamlining some portions of the environmental review process, allowing states to assume greater environmental review responsibilities under certain conditions, and establishing efforts that permitted delegation of some authority from the federal government to states. GAO was asked to (1) describe the process and factors that could affect highway project time frames, (2) examine state DOTs' views on the benefits and challenges of the provisions to expedite highway projects established in SAFETEA-LU, and (3) describe additional initiatives that state DOTs and FHWA have implemented to expedite the completion of highway projects.
GAO surveyed officials from 52 state DOTs, including all states, the District of Columbia, and Puerto Rico; interviewed officials at FHWA, state DOTs, and federal resource agencies (agencies tasked with protecting natural, historic, or cultural resources); and analyzed legislation, regulations, and other reports and publications. U.S. DOT provided technical comments on a draft of this report, which GAO incorporated as appropriate. The process to complete highway projects is complicated and lengthy due to multiple factors. Specifically, highway projects can involve many stakeholders, including agencies at all levels of government, nongovernmental organizations, and the public. These stakeholders perform a number of tasks (for major highway projects, as many as 200 steps from planning to construction), but their level of involvement varies. For example, resource agencies like the U.S. Army Corps of Engineers or the U.S. Fish and Wildlife Service generally only become involved in a highway project if it affects the environmental or cultural resources that agency is tasked with protecting. Additional factors can lengthen project time frames, including the availability of funding, changes in a state's transportation priorities, public opposition, or litigation. State departments of transportation (DOT) that GAO surveyed generally agreed that the provisions meant to help expedite highway projects established in the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) could decrease time frames but found some provisions more useful than others. They most frequently agreed that the provision allowing for the use of protected public land (if such use has minor impacts on the property and is approved by relevant resource agencies) has the potential to save time and has few challenges to implementation. State DOTs reported that the other SAFETEA-LU provisions GAO studied have both potential benefits and challenges but, in some cases, they identified alternative solutions that could better serve their needs. For example, although respondents indicated that they could save time by implementing the issue resolution process established in SAFETEA-LU, they also noted that the use of written agreements between highway project stakeholders, such as federal resource agencies, could better serve their purposes. Survey respondents also indicated that they are generally not interested in implementing two SAFETEA-LU provisions that would delegate environmental review decision-making authority from the Federal Highway Administration (FHWA) to states, primarily because the states did not want to accept federal court jurisdiction for the decisions made under those provisions. States have implemented a variety of efforts to expedite highway projects, and FHWA has initiated efforts to expedite projects by sharing innovative practices. For example, in 1997, the North Carolina DOT implemented a project development process that promotes early involvement of highway stakeholders and reduces permit processing times from years to months. Other state efforts are more recent, prompted by streamlining concepts promoted by FHWA beginning in 2010 under an effort known as Every Day Counts. Through Every Day Counts, FHWA encouraged states to consider implementing 15 specific innovative practices during 2011 and 2012, including 13 practices that could help expedite highway project completion. FHWA plans to introduce a new set of initiatives during 2012 for implementation during 2013 and 2014.
FHWA developed performance measures for Every Day Counts and is currently collecting data to determine if these initiatives have had a positive impact on expediting highway projects. |
Selecting applicants based on their qualifications instead of patronage has been the foundation of the federal hiring system for more than 130 years. Congress passed the Pendleton Act in 1883, establishing that federal employment should be based on merit. The nine merit system principles were later codified as part of the Civil Service Reform Act of 1978. The first merit principle requires that agencies recruit qualified individuals from appropriate sources to achieve a work-force from all segments of society. It also requires that selection and advancement should be determined solely on the basis of relative ability, knowledge, and skills after fair and open competition. This assures that all receive equal opportunity. In this report, “Title 5” refers to the government-wide personnel management laws and related provisions generally applicable to federal employment. Title 5 outlines the rules agencies must follow to hire employees, such as the competitive examining hiring authority. Competitive examining has been the traditional method for making appointments to competitive service positions. The competitive examining process requires agencies to notify the public that the government will accept applications for a job, screen applications against minimum qualification standards, apply selection priorities such as veterans’ preference, and assess applicants’ relative competencies or knowledge, skills, and abilities against job- related criteria to identify the most qualified applicants. Federal agencies typically assess applicants by rating and ranking them based on their experience, training, and education. Congress and the President have created a number of additional hiring authorities—beyond competitive examining—to expedite the hiring process or to achieve certain public policy goals. In some cases, Congress created hiring authorities outside of Title 5 granting access directly to specific agencies. For example, provisions under Title 42 of the United States Code provide authority for the Department of Health and Human Services to hire individuals to fill mission critical positions in science and medicine. In other cases, Congress and the President created authorities under Title 5 which permitted hiring actions to be taken by means other than competitive examining. Through authority delegated by the President, OPM has authorized excepted service appointment authorities for when it is not feasible or practical to use competitive examining. Examples of some exceptions to the competitive hiring process include the following: Filling critical skills gaps. Congress created “direct hire authority” to help agencies fill vacancies in the competitive service under certain circumstances. Congress authorized OPM to permit agencies to use direct hire authority for a position or group of positions where OPM has determined that there is either a severe shortage of candidates or a critical hiring need for such positions. This direct hire authority expedites the typical hiring process associated with the competitive examining hiring authority in Title 5 by eliminating competitive rating and ranking procedures and veterans’ preference. Congress has also provided direct-hire authority directly to agencies for specified purposes. Employment of veterans. Congress created a hiring authority called the Veterans’ Recruitment Appointment authority that allows for certain exceptions from the competitive examining process. 
Specifically, agencies may appoint eligible veterans without competition under limited circumstances or otherwise through excepted service hiring procedures. Employment of students and recent graduates. To ensure that the federal government continued to compete effectively for students and recent graduates, a 2010 executive order created the Pathways Program. Pathways replaced two former student programs and incorporated the Presidential Management Fellows program. OPM decentralized and delegated many personnel decisions to federal agencies. It also has encouraged agencies to use human capital flexibilities, such as hiring authorities, to help tailor their personnel approaches to accomplish their missions. In January 1996, for example, OPM delegated competitive examining authority to federal agencies for virtually all positions in the competitive service. OPM is responsible for ensuring that the personnel management functions it delegates to agencies are conducted in accordance with merit system principles, and the standards established by OPM for conducting those functions. The exceptions authorized under Title 5 to the competitive examining process, and the creation of exceptions to hiring rules through new agency-specific non-Title 5 hiring authorities mentioned above, also affect oversight responsibilities. Oversight of hiring actions depends on the origin of the authorized exception. For example, if the position was excepted from the competitive service by OPM, OPM is responsible for ensuring that the hiring actions taken to fill those positions are consistent with merit principles and other relevant Title 5 laws and regulations. However, if Congress directly granted an agency authority to appoint individuals into the excepted service without regard to Title 5 and OPM authority, generally the agency (rather than OPM) must ensure that it is meeting relevant standards under that grant of authority or additional oversight provisions detailed by Congress. Further, the Chief Human Capital Officers (CHCO) Act of 2002 established the CHCO Council to advise and coordinate the activities of member agencies on such matters as the modernization of human resources systems, improved quality of human resources information, and legislation affecting human resources operations and organizations. The CHCO Council is chaired by the Director of OPM and serves to coordinate and collaborate on the development and implementation of federal human capital policies. Our analysis of OPM data found that overall, agencies used a relatively small number of hiring authorities to fill nearly all of the vacancies in 2014, and a large number of hiring authorities to fill the small proportion of positions that remained. Specifically, we found that agencies used 105 hiring authority codes for 196,226 new appointments in fiscal year 2014. These appointments were for competitive service positions, as well as for excepted service positions, made under both Title 5 and agency-specific non-Title 5 authorities. However, of these 105 authorities, agencies used just 20 hiring authority codes for more than 178,000 (91 percent) of the new appointments, while using 85 hiring authority codes for the 18,000 (9 percent) remaining new appointments (see fig. 1). 
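The concentration described above is, at bottom, a grouping-and-ranking exercise over personnel action records. The following is a minimal sketch of that kind of tabulation, not GAO's actual analysis; the file name and column name are hypothetical stand-ins for fields in an EHRI extract of fiscal year 2014 new appointments.

```python
import pandas as pd

# Hypothetical flat extract of EHRI personnel actions, one row per new
# appointment in fiscal year 2014 (file and column names are illustrative).
actions = pd.read_csv("ehri_fy2014_new_appointments.csv")

# Tally new appointments by appointment authority code, most used first.
counts = actions["appointment_authority_code"].value_counts()

top20_share = counts.head(20).sum() / counts.sum()

print(f"Distinct authority codes used: {counts.size}")
print(f"Total new appointments: {counts.sum()}")
print(f"Share of appointments under the top 20 codes: {top20_share:.0%}")
```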
As noted, Congress and the President have created a number of additional hiring authorities—beyond competitive examining—intended to address positions that cannot be filled through competitive examining in order to expedite the hiring process or to achieve certain public policy goals, such as facilitating the entrance of certain groups into the civil service. Importantly, our analysis provides only a snapshot of a single fiscal year. The use of hiring authorities was influenced by particular hiring levels at certain agencies. For example, the Department of Veterans Affairs (VA) used the Title 38 hiring authority—the second most used authority—to hire almost 8,000 nurses and more than 3,000 medical officers in fiscal year 2014 in response to increased demands for healthcare providers. Of the 20 top-used hiring authorities, agencies relied most heavily on three types of authorities: (1) the competitive examining hiring authority, (2) other Title 5 hiring authorities, and (3) agency-specific hiring authorities (non-Title 5 as well as Title 5). Each of these is discussed in greater detail below (see table 1 for a description of all 20 hiring authority codes most commonly used in fiscal year 2014). Competitive examining hiring authority. While the Title 5 competitive examining hiring authority, the traditional method for federal hiring, was the single most used hiring authority in fiscal year 2014, it accounted for less than a quarter of all new appointments government-wide. Further, only 3 of the 24 agencies covered under the Chief Financial Officers (CFO) Act of 1990, as amended—the Department of Justice, the Small Business Administration, and the Department of the Treasury—used the competitive examining hiring authority for a majority of their new appointments in fiscal year 2014. At the same time, 6 of the 24 CFO Act agencies used competitive examining for less than 10 percent of all new appointments in fiscal year 2014. They include the Departments of Agriculture and Transportation, VA, the U.S. Agency for International Development, the Nuclear Regulatory Commission, and the National Science Foundation. Other Title 5 hiring authorities. In addition to competitive examining, in fiscal year 2014, agencies often used other Title 5 hiring authorities available to all agencies. For example, agencies used direct hire, which waives the rating and ranking process and the application of veterans' preference under competitive examining for certain critical needs or when there is a severe shortage of candidates. In other instances, agencies used special hiring authorities to hire veterans, students, and recent graduates. With respect to direct hire, OPM provided agencies with government-wide direct hire authorities for certain IT specialists and medical occupations, among other critical needs occupations. This enables an agency with delegated examining authority to hire, after public notice is provided, any qualified applicant without regard to certain competitive hiring requirements such as category rating and veterans' preference. Additionally, agencies often used hiring authorities related to hiring veterans and students or recent graduates. For example, the Veterans Employment Opportunities Act (VEOA), a competitive service appointment authority, was the fifth most frequently used hiring authority code for new appointments in fiscal year 2014.
One of our case agency officials told us that they used VEOA to recruit veterans eligible to apply for positions announced under merit promotion procedures. Fiscal year 2014 was one of the first years agencies used two new Pathways excepted service hiring authorities for interns and recent graduates. These authorities are to be used as a supplement to, not a substitute for, the competitive hiring process. Pathways Programs are important tools for agencies because of their focus on students and recent graduates. For example, nearly all of the 24 CFO Act agencies used at least one of the three Pathways Programs (the third being the Presidential Management Fellowship), and two agencies—the National Aeronautics and Space Administration and the National Science Foundation—used the Pathways Internship authority for more than 50 percent and 30 percent, respectively, of their new hires. At the same time, 7 of the 24 CFO Act agencies that used at least one of the Pathways authorities used them for less than 5 percent of their new appointments in fiscal year 2014. Agency-specific hiring authorities. Agencies also relied heavily on hiring authorities that are only available to specific agencies. As shown in figure 2, in fiscal year 2014, 38 percent of all new federal appointments were made using hiring authorities designated only for specific agencies. Agency-specific hiring authorities can be authorized directly by Congress or may be provided to the agency by OPM. As previously noted, VA used Title 38 hiring authorities—attributable to the second most used hiring authority code—to hire for medical occupations in fiscal year 2014. In addition, two of the most used non-Title 5 hiring authorities apply to entire agency components: Congress provided the Transportation Security Administration and the Federal Aviation Administration with additional flexibility in hiring agency personnel for all occupations. OPM also provided special Title 5 excepted service hiring authorities to certain agencies. OPM annually publishes in the Federal Register a consolidated listing of all agency-specific Title 5 excepted service hiring authorities granted under Schedules A, B, or C. Schedules A, B, C, and D refer to distinct suites of authorities that enable agencies to hire in those circumstances when it is not practicable to use competitive service qualification standards or to rate applicants using traditional competitive examining procedures, when recruiting certain types of students (or others who have recently completed certain educational programs), or to fill positions of a confidential or policy-determining nature. For example, OPM granted a Schedule A hiring authority to the General Services Administration and the Office of Management and Budget to hire digital services staff as a part of the President's Management Agenda's Smarter Information Technology (IT) Delivery Initiative through September 2017 and 2016, respectively. In May 2015, OPM approved government-wide Schedule A hiring authority for digital services staff for all agencies working on IT projects as part of this initiative, also through September 2017. In our 2002 report, we found that to address their human capital challenges, it is important for agencies to assess and determine which human capital flexibilities, including hiring authorities, are the most appropriate and effective for managing their workforces. Among other things, such assessments help ensure that agencies use hiring authorities as part of an overall human capital strategy.
A better understanding of the impact that different authorities have on the pool of available candidates would help agencies use hiring authorities more strategically to achieve specific talent management and public policy goals, such as closing mission-critical skills gaps, increasing the employment of veterans, and improving workforce diversity. Assessments can also better ensure that agencies first identify and use the flexibilities already available under existing laws and regulations and only seek additional flexibilities when necessary based on sound business cases. Moreover, given agencies' reliance on a relatively small number of authorities in 2014, assessments of authorities' effectiveness could help inform whether there are opportunities to refine, consolidate, or reduce the number of available authorities to simplify the hiring process, or whether provisions of some agency-specific authorities should be expanded to more agencies. Indeed, OPM officials said they do not know if agencies rely on a small number of authorities because agencies are unfamiliar with other authorities, or if they have found other authorities to be less effective in meeting their needs. In 2010, as part of its broader hiring reform efforts, the administration launched the Hiring Reform Initiative, which was aimed at improving the effectiveness of the hiring process. To gauge agencies' progress in meeting the initiative's goals, OPM tracked improvements in time-to-hire and manager and applicant satisfaction levels as key indicators for jobs posted on the USAJOBS website. Time-to-hire and satisfaction surveys are useful metrics of the effectiveness of the hiring process as a whole, but neither OPM nor the selected agencies have used or explored the potential for this information to analyze the effectiveness of individual hiring authorities or of different types of authorities. Without this information, it is difficult for OPM and agencies to assess the impact specific hiring authorities are having on the administration's reform efforts and other goals. As one example, OPM requires agencies to report time-to-hire information for all job announcements posted on USAjobs.gov. However, neither OPM nor officials from our selected agencies said they used these data to analyze the effectiveness of individual hiring authorities. Likewise, OPM surveys managers and applicants to gauge their opinions of the application process. OPM officials said that while they conduct some government-wide analysis of these surveys and brief the CHCO Council on trends and findings, they have not analyzed the relative effectiveness of individual hiring authorities. OPM officials said that the time-to-hire, manager satisfaction, and applicant satisfaction databases were located in different systems, thus making it difficult to analyze or identify trends. Further, OPM officials said they view the use of hiring authorities as case-specific and in some cases tied to agency-specific goals, which makes it difficult to compare them to one another and develop meaningful conclusions about how they are used. However, there are several potential benefits to understanding the relative effectiveness of different authorities for particular agency requirements. First, different types of hiring authorities have different procedures associated with them. By analyzing the effectiveness of hiring authorities, OPM and agencies could identify improvements that could be used to refine those procedures.
Second, Congress and OPM could provide more agencies access to specific authorities found to be highly effective. Third, authorities found to be less effective could be revised or eliminated by Congress or OPM, thus helping to ensure that only those hiring tools found to be the most useful were available to agencies. Moreover, a better understanding of the relative effectiveness of different hiring authorities could enhance agencies' awareness of the implications different authorities may have on the composition of their workforce. For example, when agencies use one hiring authority to achieve a particular public policy objective, it may have implications for the attainment of a different objective. In its 2015 report, the U.S. Merit Systems Protection Board found that when agencies used veteran-specific hiring authorities in fiscal year 2012, they hired between 50 and 60 percent more men than women, which is not surprising given that the active duty military is over 80 percent male. Better information on the impact different authorities have on the applicant pool could help agencies use hiring authorities more deliberatively to accomplish different hiring outcomes. Officials from our selected agencies—AFMC, Energy, and NIH—said that they had not used time-to-hire data or manager and applicant satisfaction survey data to evaluate specific hiring authorities. Like OPM, officials from the selected agencies said they have focused their efforts and used this information to better understand and improve the overall hiring process but had not considered it for analyzing the use of individual authorities. Given that an individual agency may only use a subset of all authorities, an agency-specific analysis may not reveal many differences across authorities. However, compiling agency evaluations of the range of authorities available would likely provide greater insight. A number of factors can determine how and whether agencies get the talent they desire. Selected agencies described for us additional strategies they use to help meet their workforce requirements. (These are discussed in greater detail in appendix II.) Despite the importance of assessing the effectiveness of individual hiring authorities, there are some limitations to using available OPM data for this type of analysis. The hiring authority codes used in OPM's Enterprise Human Resources Integration (EHRI) database are a tool for tracking the use of hiring authorities across the federal government. However, the codes are not a perfect one-for-one match for individual hiring authorities, and some hiring authority codes represent an unknown number of authorities. For example, the hiring authority codes for "Digital Service Experts" are also used to track the use of other hiring authorities. As a result, it is unclear how frequently these particular digital services authorities are used. Similarly, OPM uses a single hiring authority code for all "other laws, executive orders, and regulations," which was the sixth most used hiring authority code for new appointments in fiscal year 2014. As a result of these data limitations, OPM is unable to use its own data to determine the extent to which special hiring authorities like these are being used, to determine (or aid agencies in determining) whether they are meeting their intended purpose, whether refinements are needed, or whether they should be expanded for other agencies with similar needs.
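As a rough illustration of the kind of authority-level analysis discussed above, the sketch below groups per-hire records by the authority used and compares hiring volume, median time-to-hire, and average manager satisfaction. It is a hypothetical example only; the file names, column names, and the linkage between survey responses and individual hires are assumptions, not descriptions of OPM's actual systems, which store these data separately.

```python
import pandas as pd

# Hypothetical per-hire records and manager survey responses; in practice,
# OPM's time-to-hire and satisfaction data sit in separate systems.
hires = pd.read_csv("fy2014_hires.csv")            # hire_id, authority_code, days_to_hire
surveys = pd.read_csv("manager_satisfaction.csv")  # hire_id, satisfaction_score

merged = hires.merge(surveys, on="hire_id", how="left")

# Compare authorities on volume, median time-to-hire, and mean satisfaction.
by_authority = (merged.groupby("authority_code")
                      .agg(hires=("hire_id", "count"),
                           median_days_to_hire=("days_to_hire", "median"),
                           mean_satisfaction=("satisfaction_score", "mean"))
                      .sort_values("hires", ascending=False))

print(by_authority.head(20))
```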
With such an analysis, OPM could revise the authorities within its purview and work with agencies to develop legislative proposals, as warranted, to revise the authorities beyond it. OPM is responsible for executing, administering, and enforcing the civil service rules and regulations and the laws governing the civil service, including those pertaining to hiring. Additionally, OPM is required to establish and maintain oversight over delegated personnel activities, including delegated competitive examining activities, to ensure agencies are acting in accordance with the merit system principles and the relevant standards established by OPM, such as compliance with applicable laws, rules, regulations, executive orders, and OPM policies. OPM monitors overall implementation and identifies corrective actions when deficiencies are found. OPM conducts this oversight through three primary means: delegated examining unit audits, human resource management evaluations, and special studies. Delegated examining audits: OPM oversees agencies' use of the delegated examining authority for the competitive service. These audits focus on compliance with merit principles and other key goals. To a lesser extent they consider the effectiveness of the overall hiring process, but typically do not analyze the effectiveness of specific hiring authorities. Human resource management evaluations: OPM and agencies evaluate how well human capital programs align with agency mission and goals and comply with the merit system principles, laws, and regulations. In contrast to the delegated examining unit audits, officials said that human resource management evaluations provide an opportunity for identifying best hiring practices at agencies. Special studies: OPM officials said they periodically conduct special studies of hiring issues. For example, officials said OPM is currently studying the use of Pathways Programs government-wide. Officials said the study will identify trends in agencies' usage, highlight notable practices by agencies, identify any challenges within the programs, and assess whether the programs are being used as intended. OPM's oversight functions provide an in-depth understanding of agency hiring, and could provide important information to help OPM identify opportunities to streamline and consolidate federal hiring authorities. However, OPM officials identified two challenges to analyzing audit findings to understand the effectiveness of hiring authorities. First, OPM does not maintain all the different types of audit reports in a single, centralized location. Conducting a government-wide analysis would require manually piecing together reports from different systems. However, even identifying trends within each audit type could provide leading practices for OPM to share and opportunities for agencies to improve their use of hiring authorities. According to OPM officials, OPM is developing a new database capable of housing all audit findings, which will enable it to conduct this analysis in future years. OPM officials said a second challenge to government-wide analysis of hiring authorities is that OPM's oversight is limited to the hiring authorities established in Title 5. According to OPM, the evaluation or oversight requirements, if any, of non-Title 5 excepted service hiring authorities are authority-specific. Since non-Title 5 excepted service hiring authorities are granted to agencies directly by Congress, OPM does not generally have a direct oversight role under these authorities.
In recent years, OPM has launched several initiatives and provided agencies with tools to address federal hiring challenges. In 2008, for example, OPM and the CHCO Council partnered to lead the End-to-End Hiring Roadmap, which aimed to improve the hiring process from an applicant perspective. Then, in 2010, the President's Hiring Reform initiative aimed to address impediments to recruiting and hiring highly qualified employees into the federal civilian workforce. Also in 2010, OPM began an effort to increase employment of veterans. In 2011, OPM started a new initiative to increase employment of students and recent graduates, in part by educating agencies about new or existing hiring authorities for these groups. OPM also established the Veterans Employment Program Office and the Office of Diversity and Inclusion. In 2015, OPM introduced the Recruitment, Engagement, Diversity, and Inclusion (REDI) Strategy. REDI's objective, in part, was to improve the quality of the hiring process by meeting with HR professionals and hiring managers to ensure they understand current hiring flexibilities through guidance and resources, referred to as "untying the knots" sessions. Since 2014, OPM and the Presidential Personnel Office have led the People and Culture Cross-Agency Priority Goal, intended to deploy a world-class workforce by creating a culture of excellence and enabling agencies to hire the best talent from all segments of society. As part of this goal, OPM, with OMB's assistance, kicked off a new initiative in early 2016—the Hiring Excellence Campaign—designed to improve the federal hiring process. According to OPM officials, one of the objectives of the campaign is to raise awareness and effective use of hiring authorities by managers and human resource professionals and to address administrative and other obstacles that may be impeding the government's ability to recruit and hire the best talent. Officials said the campaign will feature a series of multi-agency, in-person discussions about hiring and assessment policies and corresponding guidance led by OPM and OMB officials. The events are to be held in locations with high concentrations of human resources specialists and where agencies have been hiring for hard-to-fill occupations, among other factors. OPM's Hiring Excellence Campaign is designed to bring together agency hiring specialists and hiring managers to discuss hiring tools and opportunities for improving the hiring process, and therefore could help agencies make better use of available hiring authorities. The campaign's ultimate effectiveness will depend to a large degree on OPM's ability to implement the effort as planned and ensure it generates lasting improvements. To help in this regard, OPM officials said they have identified objectives, strategies, and baseline measures to track the success of the campaign. Further, OMB officials said they were identifying agency-specific hiring authority subject matter experts to assist in the discussions where audiences may have questions about non-Title 5 hiring authorities. OPM officials said they are developing a project plan for implementing the campaign. These activities could help improve OPM's management, monitoring, and oversight of the Hiring Excellence Campaign and will need to be implemented as planned.
Given the similarities between the Hiring Excellence Campaign and OPM's prior efforts to improve federal hiring, it will also be important for OPM to ensure that the campaign leverages these prior initiatives, incorporates relevant lessons learned, if any, and ensures there is no unnecessary overlap or duplication with other efforts to improve federal hiring. In addition to the initiatives noted above, OPM makes available a variety of resources to help agencies make better use of hiring authorities, as shown in table 2. The Hiring Toolkit, available on the HR University website, describes the fundamentals of federal hiring, the hiring process, competitive and excepted service hiring, veterans' appointing authorities, hiring authorities, and pay. It complements the interactive Hiring Decision Tool, which matches potential hiring flexibilities with hiring needs. The End-to-End Hiring Roadmap is a timeline tool based on a generic process model for conducting efficient, high-quality hiring. The purpose of the roadmap is to help identify similar steps in agencies' hiring processes and diagnose areas of greatest need for improvement. The Delegated Examining Operations Handbook provides assistance to agencies with delegated examining authority under Title 5 and applies to competitive examining only. The handbook provides agencies with guidance, options, and specific operational procedures designed to ensure that examining programs comply with merit system laws and regulations. The Vet Guide consolidates the laws and regulations that affect the employment of veterans in the federal government in one central location. The guidebook describes veterans' preference and special hiring authorities for veterans. The Hiring Excellence website provides information, tools, and guidance to hiring specialists and hiring managers to assist in agency hiring. The website includes, among other things, information about strategic recruitment, assessment and selection, hiring authorities, and diversity and inclusion. Officials from our selected agencies reported mixed impressions about OPM's online resources, including some of the above examples. During our discussion groups, some hiring managers and HR specialists said they found OPM's resources useful, some said OPM's website and guidance were fragmented and difficult to use, and some officials said they were not familiar with OPM's tools and resources and relied on agency-specific policies and tools. Officials familiar with OPM's resources said they most often relied on agency-specific policies and procedures before turning to OPM's resources and guidance on hiring authorities. While OPM's resources may be developed and targeted for different users, without additional review these tools may not be having their intended effect. We have previously recommended, and continue to encourage, that OPM ensure agencies are getting the needed guidance and tools by evaluating the communication and effectiveness of relevant tools or leading practices created by OPM or agencies to address crosscutting human capital challenges. Given the long-standing human capital challenges and difficulties in filling critical skills gaps, federal agencies need to have an assortment of effective tools to bring qualified applicants on board as well as to meet public policy goals. In fiscal year 2014, 20 authorities were used to make around 90 percent of new appointments.
A critical first step in understanding whether, and which, authorities are meeting agency needs is for OPM and agencies to analyze whether and how specific authorities contribute to the effectiveness of the hiring process. This information would help OPM and agencies better manage the suite of hiring authorities and identify opportunities to simplify and improve the hiring process by refining, consolidating, or eliminating some authorities or by expanding provisions of some agency-specific authorities to more agencies. Likewise, agencies could make more strategic use of the hiring authorities, selecting those that data have shown are best suited for their particular talent needs and other objectives. However, despite the metrics available to assess their performance, OPM and our selected agencies do not measure the effectiveness of hiring authorities. Finally, while OPM has launched several initiatives to reform the hiring process over the last several years, some of the issues they were designed to address, including improving agencies' use of federal hiring authorities, remain. Going forward, it will be important for OPM to ensure that its most recent initiative, the Hiring Excellence Campaign, is implemented as planned, as well as to ensure that there is no unnecessary overlap or duplication with earlier efforts. To help strengthen the government's ability to compete in the labor market for top talent, we recommend that the Director of OPM, in conjunction with the CHCO Council, take the following actions to improve the federal hiring process: 1. For hiring authorities that OPM oversees, conduct a study or assessment of specific hiring authorities and/or processes to gain insight into why agencies relied on the authorities and into the relationship between the agencies' choices and the agency mission and broader public policy goals, consistent with merit system principles, and determine whether modernization is necessary. For agency-specific hiring authorities and/or processes, OPM should collaborate with the CHCO Council to obtain similar insights agencies may have regarding their authorities and/or processes and to determine whether there are lessons learned that may be relevant to government-wide modernization efforts. 2. Use this information to determine whether opportunities exist to refine, consolidate, eliminate, or expand agency-specific authorities to other agencies and implement changes where OPM is authorized, including seeking presidential authorization (as necessary) in order to do so. In cases where legislation would be necessary to implement changes, OPM should work with the CHCO Council to develop legislative proposals. 3. As OPM continues with the implementation of the Hiring Excellence Campaign, determine ways to sustain, beyond the active rollout of the campaign, the aspects of the campaign that focus on equipping agencies with information, tools, and support to strengthen their knowledge and ability to attract and hire top talent, and leverage prior related efforts through such activities as incorporating applicable lessons learned and ensuring that there is no unnecessary overlap and duplication across the individual efforts. We provided a draft of this product to the Acting Director of OPM and the Director of OMB for comment. We also provided relevant portions of this product to the Secretaries of the Department of the Air Force, Department of Energy, and Department of Health and Human Services for technical comment.
Technical comments were received from OPM, OMB, and the Air Force and incorporated as appropriate. In written comments, reproduced in appendix III, OPM generally concurred with our findings and recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Director of the Office of Personnel Management, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this product are listed in appendix IV. This report examines (1) the hiring authorities agencies used in fiscal year 2014 (the most recent year for which data were available when we began our review), (2) the extent to which selected agencies and OPM assessed the effectiveness of hiring authorities used for selected occupations in helping meet hiring needs, and (3) how OPM ensured that agencies have the assistance and information needed to use the hiring authorities effectively. To address our first objective, we used Office of Personnel Management (OPM) Enterprise Human Resources Integration (EHRI) data, which contain personnel action and workforce data for most federal civilian employees. We analyzed government-wide and agency-level EHRI data from fiscal year 2014, which was the most recent year that data were available during our review. We primarily used the following EHRI data variables: (1) "current appointment authority 1 and 2" to describe the hiring authorities agencies used; and (2) "nature of action" codes to identify new appointments into federal service. First, we used OPM's "current appointment authority" codes in EHRI to determine which hiring authorities agencies used in fiscal year 2014. According to OPM's Guide to Data Standards—the guidance document that describes data elements in EHRI—the current appointment authority code is a mandatory data code that describes the law, executive order, rule, regulation, or other basis that authorizes an employee's most recent conversion or accession action. Each agency must record the appointment authority codes for each hiring action in its own personnel system, which is then submitted to OPM's EHRI data warehouse. Second, to calculate the new appointments made with each hiring authority, we aggregated the hiring actions using a subset of EHRI's "nature of action" codes. Nature of action codes capture information about the type of personnel action being tracked, such as whether the action was an appointment, conversion, or separation. We limited our analysis to include nature of action codes for new full- and part-time, and permanent and non-permanent, federal appointments in the competitive or excepted services. We also excluded certain groups outside of the scope of this engagement. Using these parameters, we aggregated the hiring actions for new appointments by current appointment authority in fiscal year 2014, and we sorted the current appointment authority codes from most used to least used government-wide. Sometimes agencies entered two current appointment authority codes for the same hiring action.
Based on OPM documentation, we determined certain situations in which OPM instructed agencies to enter two codes for a single hiring action. However, there were a number of situations in which OPM could not provide documentation explaining why agencies should enter two current appointment authority codes. For these instances, we generally counted each current appointment authority code agencies entered as a single hiring authority "use." We confirmed our methodology for describing government-wide use of hiring authorities with OPM. Additionally, we used OPM documentation to match the current appointment authority codes with a name and citation in applicable laws, regulations, and executive orders. We examined laws, OPM regulations, and additional OPM materials to obtain a description of the most frequently used hiring authorities government-wide in fiscal year 2014. We reviewed OPM's EHRI data for reasonableness and the presence of any obvious or potential errors in accuracy and completeness. On the basis of these procedures, we believe the data are sufficiently reliable for use in the analyses presented in this report. To assess the extent to which hiring authorities are effective in helping meet the needs of selected agencies, we focused on three occupations at three agencies as case examples. Based on our prior work on government-wide mission-critical occupations and skills gaps, we focused on the following occupations: Information Technology Specialists, Contract Specialists, and Science, Technology, Engineering, and Mathematics (STEM) occupations. We reviewed OPM's list of white collar occupational groups and included occupations within groups that appeared to be related to STEM. Specifically, we aggregated hiring actions for the occupations within the following occupational groups: (1) Natural Resources Management and Biological Science Group; (2) Medical, Hospital, Dental, and Public Health Group; (3) Veterinary Medical Science Group; (4) Engineering and Architecture Group; (5) Physical Sciences Group; and (6) Mathematical Sciences Group. Using data from OPM's EHRI database, we made a nonprobability, judgmental selection of three case agencies based on two primary factors. First, we only considered agencies that hired our selected occupations in fiscal year 2014. Second, we selected agencies that used a variety of hiring authorities for these occupations in fiscal year 2014. Based on these factors, we selected the following case agencies: Air Force Materiel Command (AFMC), Department of Energy (DOE), and the National Institutes of Health (NIH). At AFMC, DOE, and NIH, we interviewed human resources (HR) policy officials to learn about which hiring authorities agencies used for our selected occupations, reasons for using these authorities, and ways that agencies were measuring the effectiveness of the authorities. We also covered these topics with HR specialists and hiring managers for our selected occupations at each case agency in a series of group discussions that we conducted using a standardized set of questions. In addition to the case study approach, we also identified available government-wide data sources on the effectiveness of the hiring process. For example, we reviewed OPM time-to-hire data government-wide and by agency. We reviewed the Chief Human Capital Officers' (CHCO) Applicant Satisfaction Survey and the CHCO Manager Satisfaction Survey, which provided information on the applicant and hiring manager experience with the hiring process.
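To make the appointment-counting approach described above concrete, the sketch below filters an EHRI-style extract to new-appointment nature of action codes and counts each populated appointment authority field as one "use." This is an illustration only, not GAO's actual code; the file name, column names, and the specific nature of action codes shown are placeholders rather than the actual values drawn from OPM's Guide to Data Standards.

```python
import pandas as pd

# Hypothetical EHRI extract of fiscal year 2014 personnel actions.
ehri = pd.read_csv("ehri_fy2014_actions.csv")

# Placeholder subset of nature of action codes treated as new appointments;
# the real subset would come from OPM's Guide to Data Standards.
NEW_APPOINTMENT_CODES = {"100", "101", "108", "170"}

new_appts = ehri[ehri["nature_of_action_code"].isin(NEW_APPOINTMENT_CODES)]

# Count each populated appointment authority field as a single "use."
uses = pd.concat([new_appts["current_appt_auth_1"],
                  new_appts["current_appt_auth_2"]]).dropna()

usage_by_code = uses.value_counts()   # sorted most used to least used
print(usage_by_code.head(20))
```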
Further, we reviewed government-wide reporting on the administration's hiring policy goals, such as increasing the employment of veterans and people with disabilities in the federal workforce. We also interviewed knowledgeable officials at OPM and the Merit Systems Protection Board about the effectiveness of federal hiring authorities. We also assessed the extent to which OPM's oversight of hiring authorities evaluated their effectiveness. Specifically, we identified relevant laws and policies that outline OPM's oversight responsibilities for hiring authorities and interviewed OPM on the implementation, results, and analysis of these oversight efforts. We identified and reviewed OPM's policy documents that describe the key components of OPM's oversight program for hiring authorities: Delegated Examining Unit (DEU) audits and Human Resources Management Evaluations (HRME). Examples of policy documents we reviewed included the Practitioner's Guide: How to Conduct a Delegated Examining Audit and the Merit System Audit and Compliance Evaluator Handbook. We interviewed knowledgeable OPM officials about oversight activities related to Title 5 hiring as well as officials at our selected case agencies—AFMC, Energy, and NIH—who are responsible for working with OPM on these oversight reviews. For both OPM and the selected case agencies, we interviewed knowledgeable agency officials about the types of analyses they conducted based on audit findings and how, if at all, they shared information on any lessons learned. In addition to reviewing oversight of Title 5 hiring, we also reviewed documentation from our case agencies and interviewed case agency officials about non-Title 5 hiring authority oversight procedures. To assess the extent to which OPM ensures agencies have the assistance and information needed to use hiring authorities effectively, we identified online tools and guidance from OPM's website as well as OPM documents, such as online trainings, handbooks, toolkits, and hiring authority fact sheets. We interviewed selected case agency officials to gauge their awareness of and satisfaction with these resources. We reviewed documentation on previous OPM initiatives that focused in part on making improvements to hiring authorities. These initiatives included OPM's End-to-End Hiring Roadmap, OPM's responses to the 2010 hiring reform executive orders, the Recruitment, Engagement, Diversity, and Inclusion initiative, and OPM's work on cross-agency priority (CAP) goals. We interviewed officials from OPM, the Office of Management and Budget, and our selected case agencies to discuss these initiatives, particularly about measuring impact and sustainability. We compared OPM's plans for its most recent hiring initiative—the Hiring Excellence Campaign, which is part of the "People and Culture" CAP goal—with practices outlined in our prior work that are associated with the success of such interagency efforts. We conducted this performance audit from May 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Institute formalized recruitment meetings with human resources (HR) staff and hiring managers to discuss recruitment strategy.
HR specialists and hiring managers at NIH said they used recruitment meetings to clarify the needs for the position, characteristics of an ideal applicant, possible applicant pools, and potential hiring authorities to use. They also discussed the use of professional or private-sector organizations to identify applicants, as well as broader recruitment topics, such as agency goals for diversity and veteran hiring. Hiring managers told us that these meetings helped them target the intended applicant pools for their open positions. Use specialized experience statements to help ensure a better fit between the applicant and the position. Hiring managers at Air Force Materiel Command (AFMC) and Department of Energy (DOE) said specialized experience statements—explicit descriptions provided in the job announcement—help ensure that applicants possess the knowledge, skills, and abilities to perform the work of the position. AFMC officials told us that without a specialized experience statement, it can be difficult for HR specialists to filter out unqualified applicants. Hiring managers felt that specialized experience statements helped improve the quality of applicants on certificate lists. Actively recruit, particularly among preference groups such as veterans, to direct qualified applicants to the job announcement. Hiring managers at NIH and DOE said they used professional networking sites, university contacts, and professional organizations to target potential applicants who may not be actively searching for a new employment opportunity. For example, hiring managers at NIH reported using LinkedIn to identify and send potential job applicants to specific vacancy announcements for hard-to-fill IT positions. Sometimes these applicants would be eligible for excepted service hiring authorities. NIH officials said this approach enabled them to hire such applicants more quickly than through the competitive examining process. Hiring managers said that they have used this recruiting technique to successfully identify qualified veterans for several IT positions. Use global job announcements when possible to reduce duplication of effort and to share quality applicant lists. Officials from DOE and NIH said that using global or open vacancy announcements allowed hiring officials to make multiple selections from a single vacancy announcement. For example, NIH uses an agency-wide recruitment strategy for commonly filled positions across the institutes. Under this initiative, NIH posts one vacancy announcement for a specific position. Then, any of the 27 institutes can use it to fill a vacancy for that position. Hiring managers at NIH told us that using global recruitment saved them the time of creating multiple, similar announcements. It also allowed them to review a large number of resumes to find quality applicants. Include subject matter experts in the assessment process to filter out applicants who are not qualified. Hiring managers for STEM occupations at DOE involved subject matter experts in the assessment process to help HR specialists determine which resumes met technical job qualifications. After the HR specialists conducted the initial screening to determine which applicants were eligible to apply for the position, DOE's subject matter experts reviewed these resumes without applicant identifying information and provided documentation to show whether the applicants met technical job requirements.
HR personnel were then able to use subject matter expert feedback to complete applicant assessments and assign a qualification rating. Officials told us that this process has resulted in better qualified applicants on certificate lists. Individuals making key contributions to this statement include: Chelsa Gurkin (Assistant Director), Rebecca O'Connor (Analyst-in-Charge), Sara L. Daleski, Christopher Falcone, Karin Fangman, Ellen Grady, Shelley Rao, David Richards, and Elizabeth Wood. | Federal agencies face human capital challenges as a large percentage of employees become eligible to retire and agencies compete with the private sector for critical skills. To acquire needed talent, agencies need a hiring process that is applicant friendly, flexible, and meets policy requirements, such as hiring on the basis of merit. GAO was asked to review the extent to which federal hiring authorities were meeting agency needs. This report examines (1) the hiring authorities agencies used in fiscal year 2014 (the most recent data at the time of the review), (2) the extent to which case study agencies and OPM assessed the effectiveness of hiring authorities, and (3) how OPM ensured that agencies understood how to use hiring authorities effectively. To meet these objectives, GAO analyzed OPM data and documents, and interviewed OPM officials and officials from three agencies selected on the basis of recent high hiring levels in critical skill occupations. A hiring authority is the law, executive order, or regulation that allows an agency to hire a person into the federal civil service. Of the 105 hiring authorities used in fiscal year 2014, agencies relied on 20 for 91 percent of the 196,226 new appointments made that year. Office of Personnel Management (OPM) officials said they do not know if agencies rely on a small number of authorities because agencies are unfamiliar with other authorities, or if they have found other authorities to be less effective. The competitive examining hiring authority, generally seen as the traditional method for federal hiring, was the single most used authority in fiscal year 2014, but accounted for less than 25 percent of all new appointments. (A figure in the report, "Agencies Relied on 20 Hiring Authorities for Nearly All New Hires in Fiscal Year 2014," illustrates this concentration.) While OPM—the agency responsible for overseeing the delegated hiring authority and managing federal civilian personnel data—tracks data on agency time-to-hire, manager and applicant survey results, and compliance audits to assess the hiring process, this information is not used by OPM or agencies to analyze the effectiveness of hiring authorities. As a result, OPM and agencies do not know if authorities are meeting their intended purposes. By analyzing hiring authorities, OPM and agencies could identify improvements that could be used to refine authorities, expand access to specific authorities found to be highly efficient and effective, and eliminate those found to be less effective. OPM's Hiring Excellence Campaign consists of a number of multi-agency, in-person events and is OPM's latest initiative designed to address long-standing challenges with federal hiring. OPM officials described the objectives, strategies, and baseline measures by which the campaign's progress will be tracked and sustained. Going forward, it will be important for OPM to sustain the campaign's efforts and incorporate lessons learned, if any, from similar prior or existing efforts to improve federal hiring.
GAO recommends that the Director of OPM, working with agencies, strengthen hiring efforts by (1) analyzing the extent to which federal hiring authorities are meeting agencies' needs; (2) using this information to explore opportunities to refine, eliminate, or expand authorities as needed; and (3) sustaining the Hiring Excellence Campaign's efforts to improve agency hiring and leveraging prior initiatives, as appropriate. OPM generally concurred with these recommendations. |
This section includes information on the types of levee structures and potential levee failures, major levee-related programs of the Corps and FEMA, and selected legislation related to levee safety. The Water Resources Reform and Development Act of 2014 defines a levee as a manmade barrier (e.g., an embankment, floodwall, or other structure), the primary purpose of which is to provide hurricane, storm, or flood protection relating to seasonal high water, storm surges, precipitation, or other weather events; such a barrier is normally subject to water loading for only a few days or weeks during a calendar year. According to a Corps document, levees are usually earthen embankments or concrete floodwalls, which have been designed and constructed to contain, control, or divert the flow of water so as to reduce the risk of temporary flooding. An American Society of Civil Engineers public information document describes earthen levees as being constructed from compacted soil that is typically covered with various surface materials, such as grass, gravel, stone, asphalt, or concrete, to help prevent erosion. The document further states that a floodwall is a vertical levee structure usually erected in urban areas where there is insufficient land for an earthen levee. Levees can either function passively or require active operations, depending on their components. Some levees have gates and pumps, for example, and may require personnel to operate these devices in times of floods. Levees typically require regular maintenance and periodic upgrades to retain their level of protection. Maintenance can include such actions as removing debris and unwanted vegetation from the levees, areas adjacent to floodwalls, and channels; controlling damage caused by animals (e.g., filling burrows); painting or greasing structural components, such as metal gates; and repairing concrete damage, particularly in northern climates with severe freeze-thaw cycles. Figure 1 depicts an earthen levee and a floodwall as well as their respective components. According to FEMA documents, levees are designed to provide a specific level of protection. However, they can be overtopped or fail; they can also decay over time (see fig. 2). The Corps and FEMA combined have three primary levee-related programs: the Corps' Levee Safety Program, the Corps' Flood Risk Management Program, and FEMA's National Flood Insurance Program. According to Corps documents, the Corps' Levee Safety Program, established in 2007, works to better understand, manage, and reduce the flood risks associated with levees through various activities. For example, the Corps maintains a national inventory of levees and makes the information available in the National Levee Database. In addition, the Corps inspects and assesses the performance of about 2,500 levees, comprising about 15,000 miles, nationwide to determine associated risks. On the basis of information from its assessments, the Corps makes recommendations about future federal investments and about how to prioritize maintenance, repairs, and other actions on levees. The Corps' Flood Risk Management Program, established in 2006, is intended to work across multiple Corps programs to reduce and manage flood risk, according to the Corps' website. The program promotes the appropriate use of levees and floodwalls or alternative actions to reduce flood risk, such as land acquisition and flood proofing.
The Corps also communicates levee-related concerns to stakeholders and works with stakeholders to develop solutions to reduce flood risk. The Corps accomplishes this outreach and communication through its flood risk management program as well as through other programs such as the Silver Jackets program, which, according to the Corps' website, is intended to bring together multiple federal, state, and sometimes local agencies and tribes to learn from one another, help reduce the risk of flooding and other natural disasters, and enhance response and recovery efforts. FEMA's primary levee-related program is the National Flood Insurance Program, which was first authorized in the National Flood Insurance Act of 1968 to, among other things, address the increasing cost of federal disaster assistance by providing flood insurance to property owners in flood-prone areas, where such insurance was either not available or prohibitively expensive. This act also authorized subsidies to encourage community and property owner participation. To participate in the program, communities must adopt and agree to enforce floodplain management regulations to reduce the risk of future flood damage. An integral part of the program is the accreditation of any levees near the communities. In exchange for meeting program requirements, federally backed flood insurance is offered to residents in those communities. The Water Resources Development Act of 2007 directed the Corps to create and maintain a National Levee Database that includes a national inventory of levees, with information on the location and condition of all federal levees and, to the extent such information is provided to the Corps, nonfederal levees, among other things. It also established the National Committee on Levee Safety to develop recommendations for a national levee safety program. The committee, which was composed of 23 diverse professionals from federal, state, and local or regional governments as well as the private sector and Indian tribes, operated from 2007 to 2011. In 2009, it submitted a draft report to Congress that included 20 recommendations for actions to establish a national levee-safety program, in addition to a strategic plan for implementing the program. The Moving Ahead for Progress in the 21st Century Act, enacted in 2012, called for the Corps and FEMA to align agency processes to allow interchangeable use of information collected for the Corps' Inspection of Completed Works Program and FEMA's National Flood Insurance Program. In 2013, a joint Corps and FEMA taskforce determined that under certain circumstances, Corps risk assessments of levees conducted under the agency's Levee Safety Program could satisfy aspects of levee accreditation under FEMA's National Flood Insurance Program. The effort culminated in a memorandum of understanding signed by the Corps and FEMA in which the Corps agrees to, among other things, provide FEMA with risk assessment results and FEMA agrees to accept and consider the Corps' results, when possible. The Water Resources Reform and Development Act of 2014 amends portions of the Water Resources Development Act of 2007 and also requires the Corps and FEMA to take the lead in implementing certain key national levee-safety-related activities.
More specifically, it established new reporting responsibilities for the National Committee on Levee Safety, required continued development of a national levee inventory, and required implementation of a multifaceted levee safety initiative under which the agencies are to accomplish the following tasks:

Develop voluntary national levee-safety guidelines: The voluntary national levee-safety guidelines are intended to be comprehensive standards that are available for use by all federal, state, and local agencies as well as tribes. Under the act, the voluntary guidelines are also expected to address activities and practices by states, local governments, tribes, and private entities to safely build, regulate, operate, and maintain a wide range of levee types, canal structures, and related facilities. The guidelines are also expected to address federal activities—including levee inspection, levee rehabilitation, local floodplain management, and public education and training—that facilitate state efforts to develop and implement effective state programs for levee safety.

Adopt a hazard potential classification system: A hazard-potential classification system, as described by the National Committee on Levee Safety in its 2009 draft report, would be a first step in identifying and prioritizing hazards in leveed areas and is to be based solely on the potential consequences associated with a levee’s failure, as opposed to the likelihood or probability of a levee failure. The act provides for such a system to be considered in the development of the voluntary national levee-safety guidelines; under the act, the system is also expected to be consistent with the Corps’ levee-safety action-classification tool, which ranks levees based on their likelihood of flooding and the associated consequences. According to Corps officials, the tool is currently being used on levees within the Corps’ Levee Safety Program.

Provide technical assistance and materials: The agencies are to provide technical assistance and training to help promote levee safety and assist states, communities, and levee owners in (1) developing levee safety programs; (2) identifying and reducing flood risks associated with levees; and (3) identifying local actions that may be carried out to reduce flood risks in leveed areas.

Provide public education and promote awareness: To improve public understanding of the role of levees, the agencies are to carry out public education and awareness efforts about the risks associated with living in leveed areas. Education and awareness efforts are to be directed particularly toward individuals living in leveed areas. These efforts must also promote consistency in how information about levee-related risks is communicated at the state and local level and shared among federal agencies.

Develop guidelines and provide assistance for a national state and tribal levee-safety program: This national program, as described by the National Committee on Levee Safety in its 2009 draft report, would assist states and tribes in developing and maintaining the institutional capacity, expertise, and framework to quickly initiate and maintain their own levee-safety program activities and requirements. The guidelines are to identify the minimum components necessary for an individual state or tribe to participate in the program. The national program provides assistance to help establish state and tribal programs that would meet these requirements.
The act also requires state and tribal levee-safety programs to adopt the voluntary national levee-safety guidelines to be eligible for assistance.

Develop guidelines and provide assistance for a levee rehabilitation assistance program: This program is to provide assistance to states, local governments, and tribes related to addressing flood mitigation activities that result in an overall reduction of flood risk. The Corps, in consultation with FEMA, is to develop guidelines for floodplain management plans that program participants are required to prepare to reduce the impacts of future floods in areas with levees. Assistance provided under the program may be used for any rehabilitation activity to maximize risk reduction associated with levees that are (1) under a participating state or tribal levee-safety program and (2) not federally operated and maintained. To be eligible, applicants are expected to comply with all applicable federal floodplain management and flood insurance programs, have a floodplain management plan, have a hazard mitigation plan that includes all levee risks, and act in accordance with the voluntary national levee safety guidelines.

In addition, among other things, the act called for several reports to be prepared. Specifically, the Corps is to submit to Congress and make publicly available a biennial report that describes the state of levees in the United States and the effectiveness of the levee safety initiative, as well as any recommendations for legislation and other congressional actions necessary to ensure national levee safety. The Corps and FEMA are also required to submit a report that includes recommendations on the advisability and feasibility of, and potential approaches for, establishing a joint national dam and levee safety program, and the Corps is required to submit a report that includes recommendations that identify and address any legal liabilities associated with levee engineering projects.

The Corps and FEMA have made little progress in implementing key national levee-safety-related activities under the Water Resources Reform and Development Act of 2014, primarily because of resource constraints, according to officials from both agencies. The Corps has been working on its development of a national levee inventory, but the Corps and FEMA have not begun work on other key national levee-safety-related activities required by the act and do not have a current plan for doing so (see table 1). Concerning the national levee inventory, a summary document that the Corps developed for us states that the Corps is incorporating levee data that FEMA has provided from the National Flood Insurance Program and is working to incorporate levee data voluntarily provided by state and local agencies. The Corps’ actions are an extension of earlier work on the database, which it was directed to establish and maintain under the Water Resources Development Act of 2007. Corps officials said that improving the inventory will be an ongoing process. The Corps had allocated $5 million for the inventory in fiscal year 2016, and the Corps’ fiscal year 2017 Operations and Maintenance budget justification lists an allocation of an additional $5 million to further expand the inventory.

The agencies have taken no action on the remaining key national levee-safety-related activities for which they were responsible and have missed several statutory deadlines for developing guidelines and reports.
For example, the agencies took no action on developing the guidelines for the preparation of floodplain management plans under the levee rehabilitation assistance program, which were due on December 7, 2014; the voluntary national levee-safety guidelines, due June 10, 2015; or a report, due June 10, 2015, that was to include, among other things, recommendations for legislation and other congressional actions necessary to ensure national levee safety. Additionally, according to agency officials we interviewed, the agencies have no current plan for implementing the remaining activities. Without a plan, including milestones for accomplishing these activities using existing resources or requesting additional resources as needed, the agencies are unlikely to make further progress on implementing the remaining activities required by the act. Corps officials we interviewed said that they have continued to make progress on other activities that will complement activities required by the Water Resources Reform and Development Act of 2014 and that are within the scope of their existing Levee Safety Program and Flood Risk Management Program. Similarly, FEMA officials stated that they also are working to provide general public education and promote awareness about the risks associated with living behind levees through their existing National Flood Insurance Program. In a slide presentation that the Corps prepared for us, dated October 2015, the Corps identified resource constraints as a primary reason why the Corps has not been able to carry out certain key national levee- safety-related activities under the Water Resources Reform and Development Act of 2014. Specifically, the Corps’ presentation indicated that new appropriations would be needed to (1) provide technical assistance and training; (2) develop guidelines and provide financial assistance for a state and tribal levee-safety program; and (3) develop guidelines and provide financial assistance for a levee rehabilitation assistance program. Corps officials we interviewed stated that the remaining national levee-safety-related activities required in the act could be funded using existing appropriations, but these activities would have to compete with existing Corps projects in the Corps civil works program. We reviewed a 2016 Corps budget document and determined that, except for the national inventory of levees, the Corps did not specifically allocate funds for national levee-safety-related activities required in the act. FEMA officials we interviewed stated that the agency would need additional appropriations to carry out the agency’s main responsibility under the act—providing assistance for a state and tribal levee safety program—and told us that the agency had not received any funding directed toward national activities required by the act. They also said that even if these activities were funded, the agency would need additional staffing resources—specifically, in its 10 regional offices—to carry out requirements under the act. As of this report, FEMA has one staff person who is available part-time to implement the national levee-safety-related activities required by the act. As noted above, the Corps’ 2017 budget includes $5 million for the national levee inventory; however, it does not specify funds for implementing the other national levee-safety-related activities in the Water Resources Reform and Development Act of 2014. 
Corps headquarters officials told us that not implementing the act’s national levee-safety-related activities could result in several potential impacts, including that the disaster relief burden for the federal government may increase, safety risks and loss of life may increase, and risk education in communities with levees may not be carried out. Since the devastation of Hurricane Katrina in 2005, Congress has enacted legislation, including the Water Resources Reform and Development Act of 2014 that provided the Corps and FEMA with lead responsibility for undertaking certain national levee-safety-related activities, including some that would increase the capacity of nonfederal stakeholders to promote levee safety. The Corps is working on one of the key national levee-safety-related activities required by the act, namely expanding a national inventory of levees. However, the Corps and FEMA have not taken action to implement the other activities, required by the act, citing resource constraints. Further, Corps officials have identified potential impacts—including safety and financial risks—of not carrying out these activities, but the agencies do not have a plan for implementing these activities. Without a plan, including milestones for accomplishing the activities using existing resources or requesting additional resources as needed, the agencies are unlikely to make further progress implementing the activities under the act. To help ensure that the Corps and FEMA carry out the national levee- safety-related activities required in the Water Resources Reform and Development Act of 2014, we recommend that the Secretary of Defense direct the Secretary of the Army to direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers and that the Secretary of Homeland Security direct the FEMA Administrator to develop a plan, with milestones, for implementing these activities, using existing resources or requesting additional resources as needed. This plan could be posted on the Corps’ website and monitored for progress. We provided a draft of this report for review and comment to the Departments of Defense and Homeland Security. In their written comments, reproduced in appendixes I and II, respectively, both agencies generally concurred with our recommendation. The Department of Defense stated that the agencies are drafting an implementation plan and suggested that we focus our recommendation on finalization of this plan. However, the agencies did not provide a copy of the draft plan or a date when it would be finalized, so we believe that the current focus of the recommendation is appropriate. The Department of Defense further stated that, to date, no funding has been allocated to the Corps specifically to implement provisions under the Water Resources Reform and Development Act of 2014, except for the levee inventory activities, as we have acknowledged in our report. In addition, the Department of Defense suggested that the recommendation be revised to include posting the plan on the Corps’ website and monitoring the plan for progress. We have modified our recommendation to incorporate this suggestion, which we believe would help inform nonfederal stakeholders who own, maintain, or operate the majority of levees. The Department of Homeland Security said that FEMA will continue to work with the Corps to develop and implement a plan to carry out key national safety-related activities required in the act. 
Both agencies also provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. In addition to the individual named above, key contributors to this report included Vondalee R. Hunt (Assistant Director), Kevin Bray, Patricia Donahue, John Johnson, Armetha Liles, Cynthia Norris, and Kyle Stetler.

Levees, which are man-made structures such as earthen embankments or concrete floodwalls, play a vital role in reducing the risk of flooding. Their failure can contribute to loss of lives or property, as shown by the devastation of Hurricane Katrina in 2005. It is estimated that there are over 100,000 miles of levees across the United States, many of which are owned or operated by nonfederal entities. The Corps and FEMA are the two principal federal agencies with authorities related to levee safety. The Water Resources Reform and Development Act of 2014 requires the Corps and FEMA to take the lead on certain national levee-safety-related activities, including developing a national levee inventory, which Congress authorized in 2007. The act also includes a provision for GAO to report on related issues. This report examines the Corps’ and FEMA’s progress in carrying out key national activities related to levee safety required in the act. GAO reviewed pertinent federal laws and executive orders as well as budget, planning, and policy documents from the Corps and FEMA; compared agency activities with federal internal control standards; and interviewed Corps and FEMA headquarters officials.

The U.S. Army Corps of Engineers (Corps) and the Federal Emergency Management Agency (FEMA) have made little progress in implementing key national levee-safety-related activities required in the Water Resources Reform and Development Act of 2014. More specifically, the Corps has been working to develop a national levee inventory, but the agencies have taken no action on the remaining key national levee-safety-related activities for which they are responsible under the act, as shown in the table below. Agency officials identified resource constraints as a primary reason for their lack of progress in implementing such activities, and Corps officials said that not implementing these activities could potentially result in safety risks and federal financial risks for disaster relief, among other impacts. However, the agencies have no plan for implementing the remaining activities required by the act. Without a plan that includes milestones for accomplishing these activities using existing resources or requesting additional resources as needed, the agencies are unlikely to make progress implementing the activities under the act. GAO recommends that the Corps and FEMA develop a plan that includes milestones for implementing the required national levee-safety-related activities using existing resources or requesting additional resources as needed. The agencies generally concurred with GAO's recommendation.
Since its establishment by the Treaty of Rome in 1957, the EU has tried to create a single market among its Member States to facilitate the free movement of goods, services, capital, and people. As part of this effort, the Commission has attempted to consolidate and harmonize many of the pharmaceutical regulations that have existed among the Member States. Specifically, the Commission established two methods, called the multistate and concertation procedures, allowing pharmaceutical products to be marketed in all the Member States if approved by one Member State. The Commission believed that these methods would promote public health, by making drugs available to patients in a more timely manner, and advance industry interests, by stimulating investment in European research and development activities. However, these initial efforts were not successful because the Commission did not require Member States to accept drug approval decisions made by the Commission or other Member States. In 1975, the Commission established a multistate procedure to allow a pharmaceutical company to market a product in all Member States if just one of them approved the product application—a procedure referred to as “mutual recognition.” The Commission also created the Committee for Proprietary Medicinal Products (CPMP) to coordinate the Member States’ assessments of pharmaceutical products and arbitrate disputes among the Member States regarding the marketing of pharmaceutical products. However, the multistate procedure was unsuccessful in obtaining mutual recognition of drug approval decisions because at least one Member State raised an objection to every multistate application. Moreover, the CPMP opinions were not legally binding and, as a result, did not resolve disputes among the Member States. In 1987, the Commission established another process—the concertation procedure—designed to foster a single market. Under this procedure, the CPMP reviewed all biotechnology and other high-technology pharmaceutical products for approval across the EU. The EU decided to centralize the review process for biotechnology and other high-technology products because many of the Member States did not have the scientific expertise needed to review such products. However, only 5 of 30 product applications reviewed under the concertation procedure and approved by the CPMP were authorized for marketing by all the Member States. Thus, neither the multistate nor concertation procedure achieved the goal of free circulation of pharmaceuticals across all EU Member States because these procedures did not compel the Member States to accept a majority opinion of the CPMP. While the Member States professed allegiance to the principles of mutual recognition, their national regulatory authorities continued to review product applications and render their own opinions before allowing the products to be marketed in their country. Because the CPMP opinions were not binding, Member States issued different decisions on drug approvals, which prevented pharmaceutical companies from obtaining EU-wide approval for their products. Under the new EU drug approval process, pharmaceutical companies may use either a centralized or a decentralized procedure to obtain approval to market their pharmaceutical products in more than one Member State using one application. 
These procedures modify the former multistate and concertation procedures by (1) defining specific review steps and establishing time limits for review processes and (2) requiring Member States to accept decisions issued by the Commission as binding. In addition, the CPMP, which was formerly an advisory arm of the Commission, now serves as one of the EMEA’s scientific committees. The CPMP, composed of two representatives from each Member State, renders opinions about the safety, efficacy, and quality of human pharmaceutical products that are binding on all the Member States. Although the new EU drug approval process changes the method for obtaining a marketing authorization, it does not affect drug pricing and reimbursement policies, which remain the responsibility of each Member State. Thus, in order to actually market a pharmaceutical product approved under the new process, manufacturers must still negotiate a product’s price with individual Member States.

Pharmaceutical companies are now required to use the centralized procedure for biotechnology products and have the option to use it for other innovative products. Under the centralized procedure, Commission approval of a new drug application allows a pharmaceutical company to market its pharmaceutical product in all 15 Member States without having to obtain separate approvals from each Member State. As shown in figure 1, once the EMEA ensures that the application is complete, the CPMP selects two of its members—known as rapporteurs—to perform independent scientific evaluations of the safety, efficacy, and quality of an application. The rapporteurs can draw on two sources of EU-wide scientific expertise in forming their review teams—experts from the national marketing authorities of Member States and any of the 1,200 outside experts located at universities and institutes throughout Europe. Once the rapporteurs have completed their respective evaluations, they present the results to the CPMP, which then renders an opinion. The CPMP must render its opinion within 210 days after the application is submitted.

If a CPMP opinion is favorable, it is transmitted to the applicant, all Member States, and the Commission. The Commission uses the CPMP’s opinion to prepare a draft decision. If the Member States raise important new scientific or technical questions, the Commission may refer the case back to the CPMP for further consideration. At this point in the approval process, Member States may object to the decision only if they believe the product poses a significant risk to public health in their country. If no objections are raised by the Member States, the Commission’s draft decision is submitted to its Standing Committee on Medicinal Products for Human Use. The Standing Committee either agrees with the Commission’s decision or, if there is no qualified majority, refers the decision to the Council of Ministers for consideration. Upon request, the EMEA will inform any concerned parties about the final decision, and the public is notified when a marketing authorization is granted through publication in the Official Journal of the European Communities. If, on the other hand, the CPMP renders an unfavorable opinion, the applicant may appeal the decision to the EMEA. During the appeal process, the CPMP may obtain the views of additional experts who were not involved in the first consideration of the application.
The CPMP’s final opinion is processed in essentially the same manner as a favorable opinion; that is, the final decision is made by the Commission or Council of Ministers. The centralized procedure is expected to take between 298 and 448 days depending on whether the applicant appeals an unfavorable CPMP opinion, the Member States raise important new scientific or technical questions, or the Standing Committee cannot reach consensus on a Commission draft decision and refers the matter to the Council of Ministers. According to an EMEA official, as of December 1995, almost 1 year after the EMEA had become operational, pharmaceutical companies had filed or intended to file 30 new applications under the centralized procedure, and 20 had started the evaluation process. In addition, the EMEA had received 18 applications submitted under the former concertation process. The CPMP has given positive opinions on 8 of these 18 applications, and the Commission has granted EU marketing authorizations for three of those opinions. For optional innovative products, pharmaceutical companies can either use the EMEA’s centralized procedure or follow a decentralized procedure to obtain mutual recognition of a new drug by the EU Member States. Under the decentralized procedure (see fig. 2) an applicant can go directly to a national marketing authority to obtain permission to market its product in that Member State and then seek to have other Member States accept the marketing approval of the first Member State. Once an application has been submitted, a Member State’s national marketing authority has 210 days to decide whether or not to grant an authorization to market the product in the Member State. If a Member State grants a marketing authorization, the applicant may seek to have one or more other Member State(s) where the applicant wishes to market its product recognize the authorization of the first Member State. Within 90 days of receiving the application, the other Member State(s) must decide whether to recognize the approval. If the other Member State(s) recognize the marketing authorization of the first Member State, an applicant may market its product in each Member State. If the other Member State(s) raise objections to mutual recognition that cannot be resolved within 90 days, the case is referred to the CPMP for arbitration. Once the CPMP gets involved in the process, the steps are the same as those followed for the centralized procedure. CPMP opinions under the decentralized procedure, once accepted by the Commission, are binding on all the Member States. The decentralized approval procedure is expected to take between 300 and 686 days depending on whether other Member States object to the marketing authorization granted by the first Member State, objections lead to a formal arbitration by the CPMP, the applicant appeals an unfavorable opinion, the Member States raise important new scientific or technical questions, or the Standing Committee cannot reach consensus on a Commission draft decision and refers the matter to the Council of Ministers. According to an EMEA official, as of December 1995, the EMEA had not been involved in any arbitration proceedings relating to disputes among the Member States under the decentralized procedure. 
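Two of the time limits reported above are enough to reproduce the best case of the decentralized route, and the stated ranges for both routes can be laid out side by side. The short Python sketch below simply restates the reported figures; the upper bounds bundle the arbitration, appeal, and referral steps, whose individual durations are not itemized here.

```python
# Timelines reported for the two EU approval routes (in days).
CENTRALIZED_RANGE = (298, 448)    # depends on appeals, new questions, Council referral
DECENTRALIZED_RANGE = (300, 686)  # depends on objections, arbitration, appeals, referral

# Two step limits are given explicitly for the decentralized route:
FIRST_MEMBER_STATE_REVIEW = 210   # days for the first national marketing authority
MUTUAL_RECOGNITION_REVIEW = 90    # days for other Member States to recognize the approval

# The best case of the decentralized route follows directly from those two limits.
best_case_decentralized = FIRST_MEMBER_STATE_REVIEW + MUTUAL_RECOGNITION_REVIEW
assert best_case_decentralized == DECENTRALIZED_RANGE[0]  # 300 days

print(f"Centralized:   {CENTRALIZED_RANGE[0]}-{CENTRALIZED_RANGE[1]} days")
print(f"Decentralized: {DECENTRALIZED_RANGE[0]}-{DECENTRALIZED_RANGE[1]} days "
      f"(best case = {FIRST_MEMBER_STATE_REVIEW} + {MUTUAL_RECOGNITION_REVIEW} days)")
```

The routing among the EMEA, the CPMP, the Commission, the Standing Committee, and the Council of Ministers can also be summarized as a simple sequence of stages; cases arbitrated under the decentralized procedure follow the same steps once they reach the CPMP. The sketch below is an illustrative simplification only, not an official model of the procedure: the stage names and decision flags are shorthand introduced here, and it assumes, for brevity, that a case referred to the Council of Ministers is ultimately authorized.

```python
from enum import Enum, auto

class Stage(Enum):
    # Simplified stages of the centralized route described in the text above.
    VALIDATION = auto()            # EMEA checks that the application is complete
    CPMP_OPINION = auto()          # two rapporteurs evaluate; opinion due within 210 days
    APPEAL = auto()                # applicant may appeal an unfavorable opinion to the EMEA
    COMMISSION_DRAFT = auto()      # Commission drafts a decision from the CPMP opinion
    STANDING_COMMITTEE = auto()    # agrees or, absent a qualified majority, refers the case on
    COUNCIL_OF_MINISTERS = auto()  # considers decisions referred by the Standing Committee
    AUTHORIZED = auto()
    REFUSED = auto()

def next_stage(stage, favorable=True, appeals=False, new_questions=False, qualified_majority=True):
    """Illustrative routing only; the flags are shorthand for the events described above."""
    transitions = {
        Stage.VALIDATION: Stage.CPMP_OPINION,
        Stage.CPMP_OPINION: (Stage.COMMISSION_DRAFT if favorable
                             else (Stage.APPEAL if appeals else Stage.REFUSED)),
        Stage.APPEAL: Stage.COMMISSION_DRAFT,          # final opinion handled like a favorable one
        Stage.COMMISSION_DRAFT: (Stage.CPMP_OPINION if new_questions
                                 else Stage.STANDING_COMMITTEE),
        Stage.STANDING_COMMITTEE: (Stage.AUTHORIZED if qualified_majority
                                   else Stage.COUNCIL_OF_MINISTERS),
        Stage.COUNCIL_OF_MINISTERS: Stage.AUTHORIZED,  # simplification: assumes authorization
    }
    return transitions.get(stage, stage)

# Straightforward case: favorable opinion, no objections, qualified majority reached.
stage = Stage.VALIDATION
while stage not in (Stage.AUTHORIZED, Stage.REFUSED):
    stage = next_stage(stage)
print(stage.name)  # AUTHORIZED
```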
Pharmaceutical industry officials acknowledge that filing NDAs under the centralized procedure will allow a company to market its product(s) in all Member States within a relatively short period of time at approximately 60 percent of the cost of obtaining 15 individual marketing authorizations. However, some officials said they are hesitant to use the centralized procedure in the short term to obtain approval for nonbiotechnology pharmaceutical products for several reasons. First, under the centralized procedure, a company has less influence over which rapporteurs will review its application than it does under the decentralized procedure. While a company can request particular rapporteurs, the CPMP will ultimately make the selection. According to industry officials, firms want their preferred rapporteurs because of the significant time and resources they have invested in establishing relationships with certain national marketing authorities, particularly in countries with large pharmaceutical markets. Under the centralized procedure, drug sponsors are concerned that the EMEA may assign an innovative product to a less experienced rapporteur who cannot adequately review or convincingly support the product before the full CPMP. Regulatory and industry officials believe that this concern will be somewhat mitigated by the new procedures’ use of two rapporteurs. They expect that using two rapporteurs, rather than the one used under earlier procedures, will improve the quality of the drug approval process in several ways. First, by working independently, the two rapporteurs—and the teams they assemble—should uncover most concerns that might be raised at a meeting of the full CPMP. Second, being a rapporteur for an NDA carries great prestige, and the CPMP and Member States will place pressure on the review teams to prepare a thorough evaluation. Third, rapporteurs will have access to the scientific expertise available across the EU. Moreover, according to an EMEA official, the CPMP does consider drug sponsor preferences in its selection of rapporteurs. In 1995, the CPMP was able to give drug sponsors one of their rapporteur choices in every case. However, the CPMP recognizes that this may not always be possible in the future. Under the centralized procedure, in 1995, representatives from all of the Member States except Greece were chosen as rapporteurs or corapporteurs for at least two applications. The United Kingdom was selected as a rapporteur or corapporteur most often (nine times) with members from France and Germany involved in eight and seven applications, respectively. A second concern voiced by industry and regulatory officials is that the new procedures will function as intended only if members of the CPMP and the Standing Committee, who are appointed by their Member States on the basis of their scientific or regulatory expertise, are able to look beyond their national identity to represent EU-wide interests. The members of these committees have to accept an EU-based approval process and EU-based decisions in order for the new procedures to successfully expedite the drug approval process. According to a senior EMEA official, the EMEA is doing all that it can to encourage the CPMP members to act in the best interests of the EU, regardless of their national identities. However, the EMEA official acknowledged that it will take time before the members feel comfortable with one another and the new procedures. 
Finally, pharmaceutical industry officials told us that, in the short term, industry will monitor progress with the centralized procedure and may delay using it for nonbiotechnology product approvals until the EMEA can establish a track record for drug approvals. Industry likes the multiple approval options for pharmaceutical products because they create competition among the national marketing authorities and the EMEA, encouraging them to be more efficient. Further, these options allow firms to pursue different marketing strategies for their various pharmaceutical products. During the EMEA’s first year of operation, however, there were indications that industry was using the centralized procedure for optional nonbiotechnology products. According to EMEA status reports, two-thirds of the 30 new centralized applications that industry filed or intended to file could have been filed using the decentralized procedure. Nevertheless, industry officials contend that future prospects for using the centralized procedure are dependent on the EMEA’s success in expediting the drug approval process. The EMEA was created by the Commission in 1993 to administer the new centralized approval procedure, which is mandatory for biotechnology and optional for other high-technology and innovative pharmaceutical products. The EMEA also arbitrates disputes under the new decentralized procedure in order to achieve mutual recognition of Member State approvals for most other medicines. The EMEA is funded by the Commission and industry application fees and has a small permanent staff and two scientific committees that draw upon EU-wide scientific expertise. The EMEA provides administrative, technical, and scientific support for both drug approval decisions under the centralized procedure and disputed decisions under the decentralized procedure. Under the centralized procedure, the EMEA is responsible for coordinating the evaluation of the safety, efficacy, and quality of human pharmaceutical products that will be marketed throughout the EU. Through its scientific committee, the CPMP, the EMEA also evaluates assessment reports, summaries of product characteristics, labels, and package inserts for pharmaceutical products. Finally, the EMEA provides advice to drug sponsors on issues relating to the conduct of tests and trials necessary to demonstrate the safety, efficacy, and quality of pharmaceutical products. In 1995, the CPMP received 20 requests for scientific advice from pharmaceutical companies. According to EMEA and industry officials, this interaction between the industry and the EMEA is beneficial to the European pharmaceutical industry because it increases the industry’s interaction with the European reviewers of its product applications. In addition to coordinating the assessment of new drug applications and resolving Member State disputes, the EMEA is responsible for monitoring adverse drug reactions, an activity known as pharmacovigilance. The EMEA also ensures that the public receives timely and accurate information about the safe and effective use of these products. While national pharmacovigilance systems have existed for some time in the EU, the requirements and structure of those systems have varied considerably. According to a recent report, these differences have made compliance with all the regulatory requirements difficult for multinational pharmaceutical companies, thereby endangering patients who may not have received standard safety information about a particular product. 
The new EU regulations are intended to strengthen and coordinate existing pharmacovigilance systems. As part of the new system, the EMEA is responsible for creating a data-processing network for the rapid transmission of information among the national marketing authorities in the event of a pharmacovigilance alert. The EMEA is also responsible for formulating, as necessary, opinions on measures to ensure the safe and effective use of such pharmaceutical products. The EMEA also performs several other functions. It coordinates Commission and Member States’ responsibilities for verifying industry compliance with good manufacturing, laboratory, and clinical practices. It also provides technical assistance for maintaining a database on pharmaceutical products for public use and assists the Commission and Member States in providing information about pharmaceutical products to the public. In addition, the EMEA is in the process of developing ways to electronically transmit data between its administrative arm, the secretariat, and the national marketing authorities to track the flow of information during the review process. The EMEA also translates all documents into the 11 languages used in the Member States. Finally, the EMEA promotes technical cooperation among the Commission, Member States, international organizations, and other countries regarding the evaluation of pharmaceutical products. The EMEA is composed of a Management Board, two scientific committees, and a permanent secretariat. The Management Board is the EMEA’s governing body and is responsible for budgetary and resource matters. It consists of two representatives each from the European Commission, the European Parliament, and the Member States, for a total of 34 members. The scientific committees, the CPMP and the CVMP, each consist of 30 members—two from each Member State—who are primarily responsible for acting as rapporteurs to coordinate the review of NDAs. The rapporteurs have access to the staffs of national marketing authorities in other Member States, as well as to any of the 1,200 outside experts on the EMEA’s European experts list. By the end of 1995, the permanent secretariat consisted of about 67 staff but was expected to grow to 250 staff by the year 2000. The secretariat is charged with providing general administrative and logistical support to the scientific committees, as well as administering the day-to-day activities of the EMEA. The permanent secretariat consists of four units: the Administration and Logistical Unit, which is responsible for personnel, administration, budget, accounting, and organization of and interpretation for conferences and meetings; the Human Medicines Evaluation Unit, whose two sections support the centralized and decentralized procedures for approval of pharmaceutical products for human use; the Veterinary Medicines Evaluation Unit, which supports centralized and decentralized procedures for approval of pharmaceutical products for veterinary use and monitors the maximum residue levels in foodstuffs of animal origin; and the Technical Coordination Unit, which is responsible for inspection, pharmacovigilance, and technical documentation activities. Initially, the EMEA was expected to be financed equally by industry application fees and Commission funds. However, the EMEA reported that about one-third of its funding for 1995 actually came from industry fees, while about two-thirds came from the Commission. The EMEA’s budget for 1995 was approximately $17 million. 
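The reported 1995 funding mix can be translated into rough dollar amounts. The Python sketch below is a back-of-the-envelope illustration using the approximate shares and budget total cited above; the resulting figures are estimates, not reported EMEA amounts.

```python
# Approximate 1995 EMEA funding reported above (all figures are rough).
budget_1995 = 17_000_000          # about $17 million

actual_fee_share = 1 / 3          # about one-third from industry application fees
actual_commission_share = 2 / 3   # about two-thirds from the Commission
planned_share = 1 / 2             # the original plan was an even split

print(f"Planned from fees:      ~${budget_1995 * planned_share:,.0f}")
print(f"Actual from fees:       ~${budget_1995 * actual_fee_share:,.0f}")
print(f"Actual from Commission: ~${budget_1995 * actual_commission_share:,.0f}")
```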
The application fee for authorizing a pharmaceutical product for human use under the centralized procedure ranges from about $165,200 to approximately $236,000, depending on how many different product strengths and forms, such as tablet or liquid, are being considered. The EMEA receives half of the fees to support its operations, and the other half is split between the two review teams formed by the designated rapporteurs. Other fees, which are detailed in Commission regulations, are charged to process application variations, extensions, and renewals; inspect manufacturers’ facilities; and arbitrate Member State disputes. According to industry and regulatory officials, the Member States differ in how they would like to see the EMEA funded. Some of the Member States, particularly the United Kingdom, would like the EMEA to be fully financed by industry fees. Other Member States have resisted a total fee-based financing scheme because they view industry support of a public health agency as a conflict of interest. Consequently, they want the Commission to maintain oversight responsibility of the EMEA through its funding mechanism. The Member States and Commission agree that the EMEA’s financing should be reviewed in about 3 years, with the objective of increasing the proportion of the budget financed by the industry. However, according to a senior Commission official, the Commission is likely to retain its oversight control by funding at least 20 percent of the EMEA budget in the future.

We obtained comments on a draft of this report from the EMEA, FDA, representatives of the European-based pharmaceutical industry, and experts in international drug regulatory policies. In general, they found the report to be accurate and complete and provided specific technical comments, which we incorporated into the report where appropriate. This report was prepared by John C. Hansen, Assistant Director; Thomas J. Laetz; and Mary W. Freeman. Please call Mr. Hansen at (202) 512-7105 if you or your staff have any questions about this report.

The following terms are used in this report:

European Commission: The central regulatory body in the EU that (1) drafts legislation in the form of directives and regulations designed to foster a single market in Europe and (2) enforces EU rules. The Commission also prepares draft decisions, on the basis of CPMP opinions, on the licensing of pharmaceutical products.

Committee for Proprietary Medicinal Products (CPMP): Committee within the EMEA, composed of two representatives from each Member State, that renders scientific opinions about the safety, efficacy, and quality of new pharmaceutical products. The CPMP also has a role in pharmacovigilance issues, developing guidelines, giving scientific advice to companies developing pharmaceutical products, and providing quality information to health professionals and patients.

Council of Ministers: European Council composed of representatives from all the Member States. The Council analyzes Commission proposals and enacts EU-wide legislation.

European Medicines Evaluation Agency (EMEA): Central agency within the EU that supports the CPMP in its scientific evaluations of pharmaceutical products. The EMEA also verifies compliance with EU good clinical practices and good manufacturing practices and provides technical support to the Member States’ national marketing authorities.

European Union (EU): Formerly known as the European Community, the EU was established by treaty to create a single market. The EU currently consists of 15 countries commonly referred to as Member States.
Member States: The 15 Member States are Austria, Belgium, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal, Spain, Sweden, and the United Kingdom.

National marketing authority: The regulatory authority in each Member State that is responsible for the approval of new human and veterinary pharmaceutical products in that Member State. National marketing authorities also inspect manufacturing facilities, monitor quality control, and perform pharmacovigilance activities. The size and structure of each national marketing authority vary among Member States.

Pharmacovigilance: The process of collecting information on adverse drug reactions at the pre- and postmarketing stages, scientifically evaluating these adverse drug reaction reports, and making the regulatory decisions that result from this analysis.

Rapporteur: A CPMP member selected to lead the scientific evaluation of a new drug application and discuss its merits and shortcomings before the CPMP.

Standing Committee on Medicinal Products for Human Use: Committee within the Commission, comprising representatives from all 15 Member States, that is responsible for approving draft licensing decisions for pharmaceutical products on the basis of the Commission’s draft decisions.

Summary of product characteristics: The EU’s version of the full prescribing information for a product that is supplied to physicians separately from the product.

Pursuant to a congressional request, GAO reviewed: (1) the new European Union (EU) procedures for approving new drug applications (NDA); and (2) why the European Medicines Evaluation Agency (EMEA) was established, how it operates, and how it is financed.
GAO found that: (1) because member states did not always accept EU or other members' drug approvals, the Commission of the European Communities makes decisions on drug approvals and dispute resolutions that are binding on all members; (2) EU has also established new approval procedures for biotechnology, other high-technology, and innovative products; (3) regulating drug prices and reimbursement policies remains the responsibility of member states; (4) the centralized approval procedure for biotechnology and some innovative products is expected to take between 298 and 448 days; (5) the decentralized procedure allows manufacturers to seek approval from member states and appeal denied approvals; (6) the decentralized procedure is expected to take between 300 and 686 days; (7) industry officials are concerned about drug evaluators' qualifications and whether EU-wide interests will be upheld over national interests; (8) EMEA is responsible for the timeliness and coordination of new drug approvals, administrative duties, ensuring that drugs meet the highest standards of safety, efficacy, and quality, and maintaining information on the drugs and their adverse reactions; (9) the Commission and industry application fees fund EMEA, which has a small permanent staff and two scientific evaluation committees that draw on EU-wide scientific expertise; and (10) EMEA also provides advice to companies on their trial procedures and other matters.
Planning for an influenza pandemic is a difficult and daunting task, particularly because so much is currently unknown about a potential pandemic. While some scientists and public health experts believe that the next influenza pandemic could be spawned by the H5N1 avian influenza strain, it is unknown when an influenza pandemic will occur, where it will begin, or whether a variant of H5N1 or some other strain would be the cause. Moreover, the severity of an influenza pandemic, as well as the groups of people most at risk for infection, cannot be accurately predicted. Past pandemics have spread worldwide within months and a future pandemic is expected to spread even more quickly given modern travel patterns. The implication of such a rapid spread is that many, if not most, countries will have minimal time to implement preparations and responses once a pandemic virus begins to spread. However, as we have previously reported, despite all of these uncertainties, sound planning and preparedness could lessen the impact of any influenza pandemic. Preparing for an influenza pandemic can be helpful not only to lessen a pandemic’s impact, but also to help prepare for other disasters that may occur. As we have previously reported, the issues associated with preparation for and response to an influenza pandemic are similar to those for any other type of disaster: clear leadership roles and responsibilities, authority, and coordination; risk management; realistic planning, training, and exercises; assessing and building the capacity needed to effectively respond and recover; effective information sharing and communication; and accountability for the effective use of resources. At the same time, a pandemic poses some unique challenges. Rather than being localized in particular areas and occurring within a short period of time, as do disasters such as earthquakes, explosions, or terrorist incidents, an influenza pandemic is likely to affect wide areas of the world and continue for weeks or months. Past pandemics have spread globally in two and sometimes three waves, according to WHO, and a pandemic is likely to come in waves lasting months, according to the national implementation plan. Additionally, responding to an influenza pandemic would be more challenging than dealing with annual influenza. Each year, annual influenza causes approximately 226,000 hospitalizations and 36,000 deaths in the United States. According to WHO, an influenza pandemic would spread throughout the world very quickly, usually in less than a year, and could sicken more than a quarter of the global population, including young, healthy individuals who are not normally as affected by the annual flu. WHO defines the emergence of an influenza pandemic in six phases (see fig. 2). Based on this definition, the world currently is in phase 3, in which there are human infections from a new influenza subtype, but no or very limited human-to-human transmission of the disease. In addition, the Homeland Security Council developed “stages” that characterize the outbreak in terms of the threat that the pandemic virus poses to the U.S. population. These stages, also shown in figure 2, provide a framework for a federal government response to an influenza pandemic. Currently there are new domestic animal outbreaks in an at-risk country, which corresponds to the federal government’s stage 0. COCOMs have taken numerous management and operational actions to prepare for an influenza pandemic and the COCOMs’ efforts are evolving. 
While the COCOMs are at different stages in their planning and preparedness efforts, each has taken actions to plan and prepare for an influenza pandemic. These actions include establishing working groups, developing plans, exercising plans, implementing strategies to inform personnel about pandemic influenza, and coordinating with other nations. Table 1 summarizes the COCOMs’ actions to prepare for an influenza pandemic. Each of the geographic COCOMs has established a working group to address various aspects of pandemic influenza, and each of the functional COCOMs has either established a working group or is planning to do so. Medical and operational planning officials from the geographic COCOMs told us they viewed pandemic influenza planning as both an operational and force health protection issue and, accordingly, these groups are generally led by officials in the operations or plans and policy directorates, the office of the command surgeon, or a combination of these offices. Officials from across the command, and in some cases service subcomponents and other federal agencies, participate regularly or as needed. These working groups oversee pandemic influenza plan development and work on other aspects of pandemic influenza preparation. For example, PACOM’s working group is headed by three officials, one each from the operations directorate, plans and policy directorate, and the Office of the Command Surgeon. According to a PACOM official, intelligence, logistics, and public affairs officials regularly attend meetings, and officials from other directorates and subcomponents attend as needed. The group was established to develop a pandemic influenza response plan covering PACOM’s geographic area of responsibility based on the November 2005 Joint Staff order to plan for an influenza pandemic. In addition to its core pandemic influenza planning team, PACOM tasked two of its service subcomponents to lead operational groups with responsibilities for pandemic influenza preparation and response in PACOM’s area of responsibility. PACOM designated its Marine subcomponent, Marine Forces Pacific, to lead PACOM’s international support response during an influenza pandemic, which will be conducted through a multiservice task force formed to conduct relief operations during an influenza pandemic. The task force may also conduct noncombatant evacuation operations of Americans living abroad. PACOM also tasked its Army subcomponent, U.S. Army Pacific, to assist partner governments and conduct defense support of civil authorities in PACOM’s domestic area of responsibility through a standing task force that defends PACOM’s domestic region from external military threats. PACOM’s domestic area of responsibility, in contrast to the command’s foreign area of responsibility, consists of the state of Hawaii, and various U.S. territories, possessions, and protectorates, including Guam, American Samoa, and the Marshall Islands. Normally in a supporting role, the functional COCOMs were not formally tasked to plan for pandemic influenza by the November 2005 Joint Staff planning order. However, each established or intends to establish a group to prepare for pandemic influenza. For example, JFCOM is in the process of establishing a pandemic influenza working group. Prior to establishing the group, JFCOM’s operations directorate was leading its pandemic influenza planning efforts. 
Once established, JFCOM’s working group will include representatives from select directorates, the installation where JFCOM’s headquarters is located, and the regional public health emergency officer, according to JFCOM officials. Additionally, in 2007 NORTHCOM established a working group, called the Global Pandemic Influenza Working Group, to develop DOD’s global plan for pandemic influenza that applies to all of DOD’s COCOMs, military services, and defense agencies. The working group has met three times in 2007 and included representatives from the Office of the Secretary of Defense; the Joint Staff; the geographic COCOMs; three of the four functional COCOMs; the four military services; two defense agencies—the Defense Intelligence Agency and the Defense Logistics Agency—and the Air Force Medical Intelligence Center; and other interagency partners, including the Departments of State, Health and Human Services, Homeland Security, and Agriculture. At the time of our review, eight of the nine COCOMs had developed or were developing a plan to prepare for and respond to a potential pandemic influenza outbreak. Figure 3 illustrates when the COCOMs started their pandemic influenza planning efforts. In November 2005, the Joint Staff requested that the geographic COCOMs develop or adapt existing pandemic influenza plans to address force health protection, defense support of civil authorities, and humanitarian assistance. Two geographic COCOMs, EUCOM and PACOM, began developing plans before the November 2005 planning order. In August 2005, PACOM issued an instruction on pandemic influenza preparation and response. Similarly, in August 2005, EUCOM began developing its plan as a result of media reports of avian influenza cases. Although the Joint Staff did not request that the functional COCOMs develop plans, three of the four functional COCOMs are developing plans to preserve their ability to continue their own operations or to address their support role during an influenza pandemic. While SOCOM’s headquarters was not developing a pandemic influenza plan, SOCOM planning officials said they expect each of the geographically-based special operations commands will develop an annex for their respective geographic COCOMs’ plan; the tasking to develop these plans will come from the geographic COCOM, rather than SOCOM. For example, PACOM’s special operations component is developing a plan for special operations forces in PACOM’s area of responsibility. Each of the geographic COCOMs’ plans contain phases that indicate various actions for the COCOMs to take prior to and during a potential pandemic. DOD generally uses phases in its plans when conducting complex joint, interagency, or multinational operations to integrate and synchronize interrelated activities. The Joint Staff required that the geographic COCOMs’ plans take into account the WHO phases for an influenza pandemic; however, the COCOMs were not required to adopt the same phases. This allowed the COCOMs to develop their own phasing structures for their plans and, as a result, the COCOMs plans have different phasing structures. By definition, an influenza pandemic would simultaneously affect multiple geographic COCOMs’ areas of responsibility and would, therefore, require unified and cohesive efforts to respond. According to officials from the Office of the ASD(HD&ASA), the Joint Staff, and two of the COCOMs, differing phasing structures may result in the COCOMs’ plans having gaps and duplication of effort among the COCOMs. 
Using a uniform phasing structure may increase the likelihood that all COCOMs understand what actions to take and when to take those actions, resulting in a unified and cohesive effort. At the time of our review, NORTHCOM, as the lead COCOM for DOD’s planning efforts, was drafting an overarching plan for the COCOMs’ response to an influenza pandemic, which is to include a common phasing structure for the COCOMs’ plans. The COCOMs’ plans include not only actions to respond to an influenza pandemic, but also actions to prepare for an influenza pandemic. According to planning officials, each of the geographic COCOMs is implementing actions from the initial phases of their plans. Planning officials at four of the five geographic COCOMs told us that advance preparation was essential for an effective pandemic response.

To test their pandemic influenza plans, five of the nine COCOMs have conducted a pandemic influenza-related exercise. Three of the geographic COCOMs—CENTCOM, EUCOM, and PACOM—and one of the functional COCOMs—STRATCOM—conducted a pandemic or avian influenza-specific exercise. For example, EUCOM conducted its Avian Wind exercise in June 2006, which included more than 100 participants representing partner nations, other federal agencies, and DOD and EUCOM components. The exercise was designed to identify and enhance the coordination of actions to plan for, respond to, contain, and mitigate the effects of avian or pandemic influenza within EUCOM’s area of responsibility. The other three of these COCOMs held smaller tabletop exercises to familiarize participants with pandemic influenza in general and the COCOMs’ plans more specifically. Additionally, two of the geographic COCOMs—NORTHCOM and PACOM—included a pandemic influenza scenario within another exercise. SOUTHCOM planning and medical officials said they have not yet conducted a pandemic influenza exercise because they are waiting for information from the countries in their area of responsibility to determine the status of pandemic influenza planning and preparedness of those countries, which, in turn, will help SOUTHCOM recommend exercises to address gaps in those countries’ preparedness. Until SOUTHCOM has a clearer assessment of its partner nations’ capabilities, SOUTHCOM officials do not believe generic pandemic influenza-related exercises are cost-efficient. In the absence of pandemic influenza-related exercises, medical and operational planning officials from SOUTHCOM said the command is coordinating with interagency partners, such as the Pan American Health Organization and the U.S. Agency for International Development, to gather information on other countries’ capabilities and planning efforts. Although SOUTHCOM plans to conduct its own regional tabletop exercise later in fiscal year 2007, SOUTHCOM officials said the command will not (and cannot) get ahead of the Department of State as the lead federal agent—and other interagency partners—in such activities. Each of the geographic COCOMs and three of the four functional COCOMs are planning to conduct pandemic influenza-specific exercises or include pandemic influenza scenarios in future exercises. For example, STRATCOM plans to conduct three tabletop exercises—an internal exercise for STRATCOM’s staff; an exercise with the installation where STRATCOM’s headquarters is located (Offutt Air Force Base, Nebraska); and an exercise with STRATCOM’s staff, the installation, and the civilian community—to test STRATCOM’s pandemic influenza plan to continue its own operations.
Officials from the five COCOMs that have held exercises said they identified some lessons as a result of their exercises and are starting to take steps to address these lessons. Some of these lessons were general and related to overall planning efforts. For example, in March 2006, CENTCOM conducted a tabletop exercise to familiarize participants with the command’s pandemic influenza plan. The results of the exercise facilitated establishing an operational planning team to continue to address pandemic influenza efforts, according to CENTCOM’s lead planning official. Similarly, an official responsible for planning PACOM’s exercises said the command included avian influenza in one scenario in its Cobra Gold exercise in May 2006, a regularly scheduled multinational exercise hosted by Thailand. In the exercise, PACOM, the Royal Thai Army, and the Singapore Army planned for the implications of avian influenza and conducted operations supporting humanitarian assistance in an area where H5N1 avian influenza was a factor. According to a planning official, PACOM determined that the command needs to hold a separate pandemic influenza exercise to effectively test its pandemic influenza plan. However, an official responsible for planning PACOM’s exercises said it has been a challenge to meet another exercise requirement without additional resources, including personnel and funding. Similarly, U.S. Forces Korea planning officials said the command has not held a pandemic influenza-specific exercise or included a pandemic influenza scenario in any war-planning exercises because of the time required and lack of funding for such a scenario. Influenza pandemic exercises have not been a priority because U.S. Forces Korea has been focused on events involving North Korea. According to a representative from one of the U.S. Army garrisons in South Korea, the key lesson learned from a tabletop exercise was that they are “very unprepared” for an influenza pandemic.

Lessons learned from other exercises pertained to more specific aspects of plans. For example, officials involved in EUCOM’s Avian Wind exercise identified the need to update the command’s continuity of operations plan to increase the likelihood that critical missions, essential services, and functions could continue during an influenza pandemic. As a result, EUCOM planning officials report that the command plans to update its continuity of operations plan in spring 2007 to include pandemic influenza.

Five of the nine COCOMs—EUCOM, NORTHCOM, PACOM, SOUTHCOM, and STRATCOM—have started to provide information to their personnel, including military and civilian personnel, contractors, dependents, and beneficiaries, about a potential influenza pandemic. COCOMs have used various strategies to inform personnel about pandemic influenza, including media outlets, training programs, and outreach events. Each of the COCOMs that have started to provide information to their personnel used radio or television commercials, news articles, briefings, or a combination of these means, to inform personnel about avian and pandemic influenza. Additionally, three of the COCOMs had a page on their publicly available Web sites that included some avian and pandemic influenza information and links to other Web sites, such as the federal government’s pandemic influenza Web site, www.pandemicflu.gov. Three COCOMs—EUCOM, PACOM, and STRATCOM—offered training courses to inform personnel about pandemic influenza. Both EUCOM and PACOM offered training for public health emergency officers.
In May 2006 and September 2006, EUCOM’s training for its public health emergency officers included general information about pandemic and avian influenza as well as strategies for communicating pandemic influenza-related information to beneficiaries. According to STRATCOM officials, in October 2006, STRATCOM required military and civilian personnel to complete a computer-based training module about pandemic and avian influenza that included information on force health protection measures, among other issues. Additionally, three COCOMs—PACOM, STRATCOM, and EUCOM—have used outreach programs to inform personnel, including military and civilian personnel, contractors, dependents, and beneficiaries, about pandemic influenza. A group of military medical professionals at PACOM conducted a series of public outreach events at military exchanges in Hawaii that combined providing seasonal flu vaccinations to military personnel, dependents, and beneficiaries with educating personnel by distributing information about general preventive health measures, as well as pandemic influenza. For example, the PACOM officials distributed pamphlets on cough etiquette, how to prepare for an influenza pandemic, and a list of items to keep on hand in an emergency kit. Figure 4 shows one of PACOM’s military medical professionals sharing information with dependents and beneficiaries at a November 2006 event at the Navy Exchange in Honolulu, Hawaii. Similarly, STRATCOM held an outreach event, called “Pandemic Influenza Focus Day,” in November 2006 for its military and civilian personnel and contractors. During the Focus Day, each directorate or office met to discuss the impact that a 40 percent absenteeism rate (with personnel sick, caring for someone who was sick, or afraid to come to work) would have on the individual directorate or office. Additionally, in March 2006, EUCOM directed service subcomponents that had not already done so to hold installation-level meetings to inform military and civilian personnel, contractors, dependents, and beneficiaries about the threat of avian influenza and related preventive measures.

Each of the geographic COCOMs has started to work or plans to work with nations in its area of responsibility to raise awareness about and assess capabilities for responding to avian and pandemic influenza. COCOMs undertook some of these outreach efforts as a result of an action assigned to DOD as a lead agency in the national implementation plan to conduct assessments of avian and pandemic influenza preparedness and response plans of the militaries in partner nations (action 4.1.1.3). For example, CENTCOM’s lead planning official reported that CENTCOM performed assessments and identified gaps for Afghanistan’s pandemic influenza preparedness and response and has obtained funding for projects with the Afghanistan National Army and the Ministries of Public Health, Agriculture, and Higher Education. The CENTCOM official also noted, among other outreach efforts in the region, a meeting with a military medical delegation from Pakistan to discuss assessing the Pakistani military’s pandemic influenza preparedness and response efforts. Officials involved in EUCOM’s pandemic influenza planning and humanitarian assistance programs reported that EUCOM plans to complete the assessments through its regular coordination efforts with militaries in partner nations.
While EUCOM obtained $1 million from the Combatant Commander Initiative Fund to complete actions assigned to DOD as a lead agency in the national implementation plan, EUCOM officials cited resources, including funding, as a challenge to completing these assessments by the November 2007 deadline. COCOMs also have started to take or plan to take other actions to work with other nations related to pandemic influenza. For example, SOUTHCOM plans to hold regional conferences focused on pandemic influenza to help educate partner nations, assess the preparedness of nations in the region, and identify appropriate contacts within the nations. SOUTHCOM planning and medical officials said they have two conferences tentatively planned, but noted that the number of conferences they can hold will be determined by the availability of funding. According to these officials, the conferences will address a variety of topics related to pandemic influenza, including developing plans and interagency collaboration. Moreover, officials from PACOM, Marine Forces Pacific, U.S. Forces Japan, and U.S. Forces Korea participated in a multilateral workshop with officials from Japan and South Korea to discuss the potential threat of pandemic influenza in the Asia-Pacific region. Participants shared information about national strategies and military response plans and discussed ways to leverage existing partnerships, enhance interoperability, and integrate planning efforts to minimize the health and economic impact of an influenza pandemic.

While COCOMs have taken numerous actions to prepare for an influenza pandemic, we identified three management challenges that the COCOMs face as they continue their planning and preparedness efforts. First, the roles, responsibilities, and authorities of key organizations involved in the COCOMs’ planning and preparedness efforts relative to other lead and supporting organizations remained unclear. As a result, the unity and cohesiveness of DOD’s pandemic influenza preparation could be impaired and the potential remains for confusion among officials and gaps and duplication in actions taken by the COCOMs relative to the military services and other DOD organizations in implementing tasks, such as the actions assigned to DOD as a lead agency in the national implementation plan. Second, we identified a disconnect between the COCOMs’ planning and preparedness activities and the resources, including funding and personnel, to complete those activities. The continued disconnect between activities and resources may limit the COCOMs’ ability to effectively prepare for and respond to an influenza pandemic. Third, we identified some factors that are beyond the COCOMs’ control—such as limited detailed guidance from other federal agencies on the support expected from DOD, lack of control over DOD’s antiviral stockpile, limited information on decisions that other nations may make during an influenza pandemic, reliance on civilian medical providers for medical care, and reliance on military services for medical materiel—for which the COCOMs have not yet fully developed mitigation plans. While we recognize the difficulty in planning for an influenza pandemic, not yet developing options to mitigate the effects of such factors may place at risk the COCOM commanders’ ability to protect their personnel—including military and civilian personnel, contractors, dependents, and beneficiaries—or to perform their missions during an influenza pandemic.
The roles, responsibilities, and authorities of key organizations involved in DOD’s pandemic influenza planning and preparedness efforts relative to other organizations leading and supporting the department’s pandemic influenza planning efforts—including NORTHCOM as the lead for DOD’s planning and the individual COCOMs—remained unclear because of the continued lack of sufficiently detailed guidance from the Secretary of Defense or his designee. We have previously reported that in preparing for and responding to any type of disaster, leadership roles and responsibilities must be clearly defined, effectively communicated, and well understood to facilitate rapid and effective decision making. As a result of not yet issuing guidance fully and clearly defining the roles, responsibilities, authorities, and relationships of key organizations, the unity and cohesiveness of DOD’s pandemic influenza preparation could be impaired, and the potential remains for confusion among COCOM officials and gaps or duplication in actions taken by the COCOMs relative to the military services and other DOD organizations. In our September 2006 report, we identified the absence of clear and fully defined guidance on roles, responsibilities, and lines of authority for the organizations involved in DOD’s pandemic influenza preparedness efforts as a potential hindrance to DOD’s ability to effectively prepare for an influenza pandemic, and recommended that DOD take actions to address this issue, but DOD had not yet done so. Officials from the Office of the ASD(HD&ASA), the Office of the ASD for Health Affairs, and the Joint Staff responded to the recommendations in our September 2006 report by stating that DOD’s implementation plan for pandemic influenza clearly establishes the roles and responsibilities for organizations throughout DOD. In its implementation plan, DOD established offices of primary responsibility for policy oversight of various tasks and outlined medical support tasks assigned to various organizations, but we found that the plan stopped short of fully and clearly identifying roles, responsibilities, and lines of authority for all key organizations, including the COCOMs. Since planning has occurred concurrently within DOD at various levels from the Office of the Secretary of Defense to installations, a more extensive delineation of roles, responsibilities, and lines of authority could lead to a more efficient and effective effort. DOD has outlined NORTHCOM’s roles and responsibilities as the lead COCOM for the department’s pandemic influenza planning efforts. In August 2006, the Secretary of Defense named NORTHCOM the lead COCOM for directing, planning, and synchronizing DOD’s global response to pandemic influenza, or the “global synchronizer” for DOD’s pandemic influenza planning. In April 2007, the Joint Staff issued a planning order that, among other things, outlined NORTHCOM’s roles and responsibilities as global synchronizer, including serving as a conduit between the Joint Staff or Office of the Secretary of Defense and the COCOMs, military services, and defense agencies on pandemic influenza-related issues; assessing and advocating for resources for the COCOMs, military services, and defense agencies; and leading planning efforts for the COCOMs, military services, and defense agencies, but not the execution of those plans in the other COCOMs’ areas of responsibility. 
While DOD has outlined NORTHCOM’s roles and responsibilities as the global synchronizer, the command’s roles, responsibilities, and authorities relative to the lead offices for DOD’s overall pandemic influenza planning efforts, as well as the relationships between the organizations, were not yet fully and clearly defined. The ASD(HD&ASA) is the lead, in coordination with the ASD for Health Affairs, for DOD’s pandemic influenza planning and preparedness efforts departmentwide, and the Joint Staff also plays a key role in DOD’s pandemic influenza planning. However, neither the Secretary of Defense nor his designee had yet issued guidance fully and clearly stating how NORTHCOM’s roles and responsibilities as the lead for the COCOMs’ planning efforts differed from the roles and responsibilities of the other lead offices for pandemic influenza preparedness efforts, including the Joint Staff, which led to varying expectations among some COCOM officials. For example, COCOM officials had different expectations about whether NORTHCOM would provide guidance to the COCOMs. Planning officials from two geographic COCOMs noted that the Joint Staff, not NORTHCOM, has the primary authority to provide guidance to the COCOMs. However, planning officials from at least three COCOMs were expecting NORTHCOM to provide guidance on key issues, such as quarantine, social distancing, treatment of DOD beneficiaries, and troop rotation. Additionally, there was confusion among the COCOMs on which organization was responsible for overseeing interagency coordination. Planning officials at one COCOM, as well as officials from the Office of the ASD(HD&ASA), the Office of the ASD for Health Affairs, and the Joint Staff, said offices within the Office of the Secretary of Defense and the Joint Staff would remain the points of contact for the actions assigned to DOD in the national implementation plan and would also remain the primary contacts for coordinating with other federal government agencies. However, a planning official from another geographic COCOM said that the global synchronizer role meant that NORTHCOM would coordinate with other federal government agencies for pandemic influenza planning. At the time of our review, officials leading NORTHCOM’s planning and preparedness efforts acknowledged that the command’s roles and responsibilities relative to the Joint Staff and offices within the Office of the Secretary of Defense were not well-defined, especially concerning direct coordination and sharing information with the other federal agencies, and that the command needed further guidance from the Office of the Secretary of Defense and the Joint Staff to more clearly establish its roles and responsibilities. Similarly, the roles, responsibilities, and authorities of the individual COCOMs for DOD’s pandemic influenza planning and preparedness efforts were not yet fully and clearly defined. While there is guidance—such as the Unified Command Plan and 10 U.S.C. § 164—that describes the overall roles, responsibilities, and authorities of the COCOMs, we found that the COCOMs’ roles, responsibilities, and authorities related to DOD’s pandemic influenza planning and preparedness efforts were unclear. For example, medical and operational planning officials from three COCOMs said it was not clear to them which of the 31 actions assigned to DOD as a lead agency in the national implementation plan the COCOMs were to help complete. 
Officials from two of these COCOMs said that officials within the Office of the Secretary of Defense and the Joint Staff had not yet clearly stated which actions assigned to DOD in the national implementation plan should be implemented by the COCOMs and which by the military services. Officials from the Office of the ASD(HD&ASA) and the Joint Staff said the COCOMs were responsible for implementing few of the actions assigned to DOD as a lead agency in the national implementation plan. However, in the absence of clear guidance, each of the COCOMs identified the actions it believed it was partly responsible for implementing. COCOM officials told us they determined they were partly responsible for between 12 and 18 of the 31 actions for which DOD is a lead agency, as shown in table 2. We identified some inconsistency in which actions the geographic COCOMs saw as their responsibility to fulfill. COCOM officials’ varying interpretations of which actions applied to them could lead to gaps in the completion of actions assigned to DOD or duplication of effort. For example, operational and medical planning officials from the Joint Staff, the Office of the ASD(HD&ASA), and the Office of the ASD for Health Affairs told us that there were no additional force health protection actions assigned to COCOMs, but COCOM medical and planning officials told us they shared responsibility for some of the force health protection actions, including actions relating to monitoring force health (action 4.2.2.6), analyzing medical materiel needs (action 6.1.6.3), and implementing infection control campaigns (action 6.3.2.5). Officials from the Joint Staff and the Office of the ASD(HD&ASA) told us this confusion was evident in the collection of information on funding needs from the COCOMs, as the COCOMs identified funding needs for actions these officials thought the COCOMs were not intended to fulfill.

In addition, we identified that there was little guidance on what constituted fulfillment of the actions, some of which were open to interpretation and potentially quite broad. For example, one action, which the Joint Staff issued to the geographic COCOMs, calls for DOD to assess the avian and pandemic influenza response plans of partner militaries, develop solutions for national and regional gaps, and develop and execute military-to-military influenza exercises to validate such plans (action 4.1.1.3), by November 2007. The wide scope for interpretation of the actions meant that COCOMs could expend unnecessary effort or fail to complete actions intended for them. Without fully and clearly identifying the roles, responsibilities, and authorities of the COCOMs, including a clear delineation of which actions apply to which organizations and what constitutes fulfillment of an action, DOD’s preparation for an influenza pandemic risks gaps in effort, because some actions may go unexecuted on the assumption that other organizations will fulfill them; duplication of effort, because COCOMs may undertake actions that other DOD organizations are meant to complete; or both. Furthermore, the roles, responsibilities, and authorities of COCOMs relative to the military services for DOD’s pandemic influenza planning and preparedness efforts were also not yet fully and clearly defined.
The memorandum that names NORTHCOM the lead for directing, planning, and synchronizing DOD’s global response to pandemic influenza is not limited to the efforts of the COCOMs; however, planning officials from one COCOM said it was unclear what authority NORTHCOM had over the military services. The April 2007 planning order directs the military services to coordinate with NORTHCOM to ensure that the services’ pandemic influenza plans are synchronized with DOD’s global pandemic influenza plan but does not define what this coordination entails. In addition to needing more information on which actions the COCOMs, rather than the military services, were to complete, as discussed above, COCOM medical and planning officials sought clarification on the differences in the roles and responsibilities of the COCOMs and the military services in implementing force health protection actions and moving medical assets within the area of responsibility. The November 2005 Joint Staff planning order tasked COCOMs to include force health protection in their plans for pandemic influenza. Planning officials from two of the geographic COCOMs said that, in general, COCOMs set the requirements for force health protection in their areas of responsibility and the military services are responsible for ensuring that their forces meet these requirements. However, medical and planning officials from one COCOM viewed the November 2005 Joint Staff planning order as assigning force health protection activities to the COCOMs and noted that pandemic influenza is the only area where the COCOMs are responsible for medical issues. Moreover, medical and planning officials from one of the COCOM’s service subcomponents noted that because the COCOM’s plan includes a “shaping” phase, which currently is being implemented, the COCOMs have a greater responsibility for force health protection than in other operations. A medical official from one COCOM noted that COCOMs can identify many of the things needed to prepare for and respond to an influenza pandemic, but the COCOMs lack the day-to-day authority over installations and resources to direct that these measures be taken during the initial phases of the COCOM’s plan because force health protection typically is the responsibility of the military services. Similarly, planning officials at two geographic COCOMs reported concerns that they would not have the authority in a pandemic to move medical assets, such as antivirals, from one base in their area of responsibility controlled by one military service to another base controlled by a different service. An official from the Office of the ASD for Health Affairs confirmed that this is an issue, particularly within the United States, and noted that the military services and COCOMs will have to resolve this issue on their own because the Office of the ASD for Health Affairs is not part of the COCOMs’ or military services’ chains of command.

The unity and cohesiveness of DOD’s pandemic influenza planning, preparation, and response efforts could be hindered by the continued lack of fully and clearly defined roles, responsibilities, authorities, and relationships of organizations throughout DOD involved in these efforts.
While the April 2007 planning order outlines NORTHCOM’s roles and responsibilities, the lack of clarity in the roles, responsibilities, and authorities of key organizations involved in the COCOMs’ planning and preparedness efforts relative to other lead and supporting organizations has created the potential for confusion, gaps, and overlaps in areas such as the actions assigned to DOD in the national implementation plan as well as force health protection measures for DOD’s personnel. Without more fully and clearly defined roles and responsibilities, various organizations could fail to carry out certain actions or, alternatively, may perform actions that other organizations were to complete. Additionally, it may be difficult for DOD to accurately capture funding requirements without a clear delineation of which actions are to be executed by which organizations, as well as the scope of the actions. Finally, COCOM planning and response could be less effective if commanders do not have a clear sense of the assets under their control, such as medical materiel at service-controlled installations.

We identified a disconnect between the COCOMs’ planning and preparedness activities and the resources, including funding and personnel, to complete those activities. This disconnect exists, in part, because DOD guidance, including DOD’s implementation plan for pandemic influenza and the Joint Staff planning order that directed the COCOMs to plan, did not identify the resources required to complete these activities. We have previously reported that information on required resources is critical for making sound analyses of how to pursue goals. Without realistic information on required resources, decision makers cannot determine whether a strategy to achieve those goals is realistic and cost-effective or make trade-offs against other funding priorities. In September 2006, we reported that DOD had not yet identified an appropriate funding mechanism or requested funding tied to its departmentwide goals, which could impair the department’s overall ability to prepare for a potential pandemic, and recommended that DOD take actions to address this issue. DOD generally concurred with our recommendation but had not yet taken action to do so. The continued lack of a link between the COCOMs’ planning and preparedness activities and the resources required for them may limit the COCOMs’ ability to effectively prepare for and respond to an influenza pandemic.

DOD did not request dedicated funding for its pandemic influenza preparedness activities in its fiscal year 2007 or fiscal year 2008 budget requests because, according to the Principal Deputy to the ASD(HD&ASA), several baseline plans, including the national implementation plan, DOD’s implementation plan, and the geographic COCOMs’ plans, needed to be drafted before DOD could assess its potential preparedness costs. Officials from the Office of the ASD(HD&ASA) and the Office of the ASD for Health Affairs were aware of the disconnect between the COCOMs’ planning and preparedness activities and resources to accomplish these activities. The officials said that when the Homeland Security Council originally developed the national implementation plan, the officials expected to receive supplemental funding to complete the actions assigned to DOD. However, in the absence of sustained supplemental funding, the officials said they are struggling to find programs from which to divert resources to fund the department’s planning and preparedness activities.
In December 2005, DOD received $130 million in supplemental appropriations for pandemic influenza; $120 million was for expenses, including health-related items for its own personnel, and $10 million was to provide equipment and assistance to partner nations. However, as the Congressional Research Service reported, tracking federal funds for influenza preparedness is difficult because funds designated for pandemic influenza preparedness do not reflect the sum of all relevant activities, including developing the department’s pandemic influenza plan. The COCOMs have a certain amount of discretion over their operations and maintenance budgets to fund pandemic influenza-related activities. Although COCOM officials have started to identify funding requirements through multiple Joint Staff inquiries regarding COCOM funding needs, planning, medical, and budget officials from the geographic COCOMs said there is still not an accurate assessment of actual funding needs and DOD has not yet requested funding for the department’s planning and preparedness activities. An official from the Office of the ASD(HD&ASA) said obtaining funding to fully establish NORTHCOM as the global synchronizer for the department’s efforts is the office’s top priority. After NORTHCOM establishes its global synchronizer role, the official said one of NORTHCOM’s responsibilities will be to assist the Joint Staff in determining how much funding is required for DOD’s pandemic influenza planning and preparedness activities. Without resources identified for planning and preparedness activities, COCOMs have reallocated resources from other sources to undertake these activities. For example, budget officials at EUCOM said, in the absence of dedicated funding for pandemic influenza-related activities, EUCOM spent about $145,000 of its Operations and Maintenance funding in fiscal year 2006 for travel to pandemic influenza-related conferences and for its Avian Wind exercise. COCOMs have also diverted planners from other areas to develop pandemic influenza plans. Planning officials from four of the five geographic COCOMs and four of the subcomponents we met with said pandemic influenza planning was one of many responsibilities for the personnel involved in their pandemic influenza planning and preparedness efforts, and often their other responsibilities were a higher priority. For example, planning officials from U.S. Forces Korea stated that they cannot dedicate the level of effort that pandemic influenza planning requires because of other more immediate priorities on the Korean peninsula. Similarly, members of CENTCOM’s pandemic influenza planning team said they were distracted by a variety of other tasks calling for immediate action, many of which are related to the wars in Iraq and Afghanistan, and devoted a small percentage of their time to pandemic influenza; only the lead planner in the team was able to devote a significant percentage of time to pandemic influenza planning. As a result of the lack of identified resources for DOD’s pandemic influenza planning and preparedness activities, planning officials from at least three COCOMs said that they will likely be unable to complete some important activities. For example, although the Joint Staff planning order tasked geographic COCOMs to exercise their pandemic influenza plans at least once a year, officials responsible for CENTCOM’s planning and PACOM’s planning and exercises told us they need additional resources to conduct these exercises. 
While EUCOM has conducted an exercise, planning officials told us that they have had to reconsider future exercises because of the lack of resources. Additionally, officials from each of the COCOMs said they lack resources to complete some of the actions in the national implementation plan. For example, while the Joint Staff tasked all of the geographic COCOMs to assess the avian and pandemic influenza response plans of partner militaries, develop solutions for national and regional gaps, and develop and execute military-to-military influenza exercises to validate such plans (action 4.1.1.3), planning and medical budget officials from each of the geographic COCOMs said that they may be unable to complete this action by the November 2007 deadline because of the lack of resources, including funding.

We identified factors that are beyond the COCOMs’ control—such as limited detailed guidance from other federal agencies on the support expected from DOD, lack of control over DOD’s antiviral stockpile, limited information on decisions that other nations may make during an influenza pandemic, reliance on civilian medical providers for medical care, and reliance on military services for medical materiel—for which the COCOMs have not yet fully developed mitigation plans. While we recognize the difficulty of planning for an influenza pandemic, not yet developing options to mitigate the effects of such factors may limit the COCOM commanders’ ability to protect their personnel—including military and civilian personnel, contractors, dependents, and beneficiaries—or to perform their missions during an influenza pandemic. We have recommended a comprehensive risk-management approach as a framework for decision making. Risk involves three elements: (1) threat, which is the probability that a specific event will occur; (2) the vulnerability of people and specific assets to that particular event; and (3) the adverse effects that would result from the particular event should it occur. We define risk management as a continuous process of assessing risks; taking actions to reduce, where possible, the potential that an adverse event will occur; reducing vulnerabilities as appropriate; and putting steps in place to reduce the effects of any event that does occur. Since it is not possible for the COCOMs to reduce the potential for an influenza pandemic, it is important that they reduce their vulnerabilities and put in place steps to mitigate the effects of a potential pandemic.

Planning officials from four of the five COCOMs told us they had received limited detailed guidance from other federal agencies on what support they might be asked to provide during an influenza pandemic or information that could help the COCOMs estimate such potential support. This is one factor that has hindered their ability to plan to provide support to other federal agencies domestically and abroad during an influenza pandemic. DOD was designated as a supporting agency for pandemic influenza response in the national implementation plan. After Hurricane Katrina, we reported that the military has significant and sometimes unique capabilities but that additional actions are needed to ensure that its contributions are clearly understood and well planned and integrated. Additionally, we reported that many challenges faced in the response to Hurricane Katrina point to the need for plans that, among other things, identify capabilities that could be available and provided by the military.
Planning officials from each of the geographic COCOMs said they anticipate that, during an influenza pandemic, the COCOM will provide support domestically and abroad as requested by other federal agencies and approved by the Secretary of Defense. However, planning officials from four of the five geographic COCOMs said they had not yet received detailed information from the Department of State on what assistance other nations may request from the United States. Without this information, the officials said they cannot effectively plan to provide support. Department of State officials told us they would not know what specific kinds of support other nations may need until an influenza pandemic occurred, but they had developed a list of priority countries for the U.S. government’s pandemic influenza response. Additionally, Department of State officials said they had started to assess what kinds of support may be needed for embassies and they have developed a request for information about the level of assistance DOD may be able to provide at a specific list of posts deemed most vulnerable from a medical and security standpoint should an influenza pandemic emerge. Department of State officials expected that the request for information would be sent to DOD by the end of June 2007. At least one COCOM has taken steps to mitigate the effects of limited information, pending further information from the Department of State. PACOM established multiservice teams to work with nations, territories, possessions, and protectorates in its area of responsibility to identify potential needs during an influenza pandemic. For example, in September 2006 about 15 PACOM officials went to Malaysia to provide an avian and pandemic influenza “train the trainer” workshop, obtain information on the country’s pandemic influenza planning efforts, and identify areas of mutual collaboration to increase the likelihood of a coordinated response to the current threat of avian influenza and a potential influenza pandemic. Planning officials from three COCOMs and two service subcomponents that we met with said planning to provide support at the last minute could lead to a less effective and less efficient use of resources. While identifying what capabilities may be needed and available at an indefinite point in the future is difficult, taking these steps now could allow the COCOMs to be better prepared to provide support to other federal agencies domestically and abroad during an influenza pandemic.

COCOM medical and planning officials have expressed concern about how they would gain access to and use DOD’s stockpile of antivirals. These officials reported that their lack of control over DOD’s stockpile of antivirals has limited their ability to plan to use this resource. The ASD for Health Affairs procured antivirals and prepositioned DOD’s antiviral stockpile in the continental United States, Europe, and the Far East. The ASD for Health Affairs retained the authority to release the antivirals to allow more flexibility to direct these limited resources where they are needed the most, according to an official in the Office of the ASD for Health Affairs. However, according to planning and medical officials at three of the COCOMs, the absence of information about these assets has made it more difficult to plan for their use because the COCOM officials did not know when they would receive the antivirals or how many doses they would receive.
For example, EUCOM planning and medical officials said that during a NORTHCOM exercise in 2006, it took 96 hours for the ASD for Health Affairs to authorize the release of antivirals. The EUCOM officials expressed concern that a lengthy release process could impact the effectiveness of antivirals, as they are most effective if given within 48 hours of showing influenza-like symptoms. According to the officials, the lack of information on when the COCOMs might receive antivirals and how many antivirals they may receive limits the COCOMs’ ability to plan for how they will use these resources and what steps they may need to take to transport, store, and secure these resources after the ASD for Health Affairs releases the stockpile. To help address this issue, the Office of the ASD for Health Affairs distributed about 470,000 treatment courses of an antiviral to military treatment facilities, which can be administered as determined by the facility’s commander. Additionally, at least two service subcomponents purchased their own supply of antivirals to be used for critical personnel during an influenza pandemic. However, by not yet taking steps to mitigate the effect of not having sufficient information to plan to use antivirals in their areas of responsibility, COCOMs may not be prepared to effectively and efficiently use these resources or protect their personnel.

Planning officials at four of the geographic COCOMs and one of the functional COCOMs mentioned the need for information on decisions other nations may make during an influenza pandemic, such as closing borders or restricting transportation into and out of the country, as a factor that has hindered their ability to plan to continue ongoing missions during an influenza pandemic. For example, currently most servicemembers injured in Iraq and Afghanistan, in the CENTCOM area of responsibility, travel to Germany for essential medical care. EUCOM planning officials noted that Germany has reserved the right to close off access to Ramstein Air Base, Germany, which is a key European transit point for EUCOM and CENTCOM. Additionally, CENTCOM planning officials said that the borders of Kuwait and Qatar could be shut down in a pandemic, causing problems for transporting personnel and supplies into Iraq and Afghanistan. EUCOM planning officials said they discussed the need for information on decisions other nations may make with officials from the Department of State to help mitigate the effect of limited information from other countries. However, according to the EUCOM officials, most countries are not at a point in their planning to make decisions on border closures or transportation restrictions. The EUCOM officials said they will assume there will be movement restrictions for the purpose of developing their plan, but will not develop specific plans for addressing the movement restrictions until they receive more information. However, information on other nations’ decisions may not be available before an influenza pandemic. Developing plans at the last minute to address other nations’ decisions could limit the COCOMs’ ability to obtain or use certain assets, placing at risk the COCOMs’ ability to effectively protect personnel and continue missions due to potential restrictions by other nations on ground, sea, and air transportation during an influenza pandemic. For example, if a nation decides to close its borders at the start of a pandemic, COCOMs and installations may not be able to obtain needed supplies, such as antivirals.
Identifying specific options to mitigate the effects of other nations’ possible decisions in advance of an influenza pandemic may help the COCOMs more fully develop their pandemic influenza plans, provide more flexibility in the COCOMs’ response to an influenza pandemic, and better allow the COCOMs to continue ongoing missions.

Officials at each of the geographic COCOMs expressed concern that the COCOMs are reliant on civilian medical providers in the United States and abroad to provide medical care for military personnel, dependents, and beneficiaries. This is a factor that has hindered the COCOMs’ ability to plan for how personnel will access medical care during an influenza pandemic. In fiscal year 2006, DOD provided health care to more than 9 million active duty personnel, retirees, and their dependents through the department’s TRICARE program. TRICARE beneficiaries can obtain health care through DOD’s direct care system of military hospitals and clinics or through DOD’s purchased care system of civilian providers. We reported that, in fiscal year 2005, an estimated 75 percent of inpatient care and 65 percent of outpatient care for TRICARE beneficiaries was delivered by civilian providers. Medical and planning officials at each of the five geographic COCOMs expressed concern that civilian medical facilities would not be able to meet the medical needs of their military personnel, dependents, and beneficiaries during an influenza pandemic, either because there may not be sufficient capacity in the civilian medical facilities or because civilian medical facilities may choose to treat their own citizens ahead of these personnel. While COCOMs realistically cannot reduce their reliance on civilian medical capabilities, at least one COCOM has taken actions to mitigate the effect of the military’s reliance on civilian medical care. EUCOM planning officials said they have invited host nation officials to planning conferences and met with at least two medical providers in Germany to coordinate efforts. However, the COCOMs do not control the civilian medical system and, therefore, cannot allocate resources or guarantee treatment for personnel in the civilian medical system during an influenza pandemic. Without options to mitigate the effects of DOD’s reliance on the civilian medical system, COCOMs risk being unable to protect personnel and carry out their missions during an influenza pandemic.

Planning officials from eight of the nine COCOMs expressed concern that their headquarters are tenants on military services’ installations and, therefore, are reliant on the military services to distribute medical materiel and other supplies. This is a factor that has hindered the COCOMs’ ability to fully address how their headquarters will receive medical materiel and other supplies during an influenza pandemic. Medical and planning officials at two COCOMs expressed concern with the variance among the military services’ health-related policies and priorities. For example, the officials said that each military service has a different doctrine or policy on pandemic influenza-related health issues, such as the distribution of vaccines, antivirals, and other drugs. Although guidance from the ASD for Health Affairs is the same for all of the military services, it could be applied differently among the military services.
For example, medical and planning officials from four of the COCOMs noted that the military services would determine how vaccines and antivirals would be used because these supplies would be provided through the military services. This variance in policy implementation could lead to different preparedness levels and limit the operational control COCOM commanders have during a pandemic, which could impair the COCOMs’ ability to carry out their missions. At least two of the COCOMs—JFCOM and STRATCOM—have taken steps to mitigate the impact of this issue by participating in pandemic influenza planning efforts with the installations where their headquarters are located, according to planning officials. The reliance of COCOMs’ headquarters on the military services for plans, decisions, and supplies, and the COCOMs’ lack of plans to mitigate the impact of that dependence, could affect the COCOMs’ ability to maintain their own operations and missions during an influenza pandemic.

The COCOMs have taken numerous actions to plan and prepare for an influenza pandemic, and their efforts continue. However, the COCOMs have faced some management challenges that have impaired and will continue to impair their ability to plan and prepare for an influenza pandemic in a unified and cohesive manner. Planning in an environment of tremendous uncertainty is an extremely difficult and daunting task, but the potential impact of an influenza pandemic on DOD’s personnel and operations makes sound planning all the more crucial. Additionally, preparing for a pandemic can be helpful for preparing for and responding to other disasters that may occur. While we recognize that DOD’s planning and preparedness efforts departmentwide continue to evolve, failure to address these challenges could affect DOD’s ability to protect its personnel, maintain the military’s readiness, conduct ongoing operations abroad, carry out day-to-day functions of the department, and provide civil support at home and humanitarian assistance abroad during an influenza pandemic. Clarifying what is expected of COCOMs and other organizations within DOD in planning and preparing for an influenza pandemic, what constitutes fulfillment of planning tasks, and the roles and responsibilities of key organizations involved in DOD’s pandemic influenza planning and preparedness efforts could help lessen the potential for confusion among COCOM officials, limit gaps or duplication in DOD’s efforts, and increase the likelihood that DOD will be prepared to efficiently and effectively respond to an influenza pandemic. Additionally, linking expectations to resources should help the COCOMs establish appropriate priorities and accomplish the actions assigned to them from the national implementation plan, as well as other planning and preparedness activities. Finally, while the COCOMs cannot control certain factors that have hindered their preparedness efforts, they can take various steps to mitigate the effects of those factors on certain aspects of the COCOMs’ plans, including developing options to address these factors. Without taking steps to address these challenges, DOD risks being insufficiently prepared to respond in a unified manner to protect its personnel and conduct its missions during an influenza pandemic.
To reduce the potential for confusion, gaps, and duplications in the COCOMs’ pandemic influenza planning and preparedness efforts and enhance the unity and cohesiveness of DOD’s efforts, we recommend that the Secretary of Defense instruct the ASD(HD&ASA) to issue guidance that specifies the following:

• Which of the actions assigned to DOD in the Implementation Plan for the National Strategy for Pandemic Influenza and other pandemic influenza-related planning tasks apply to the individual COCOMs, military services, and other organizations within DOD, as well as what constitutes fulfillment of these actions.

• NORTHCOM’s roles and responsibilities as global synchronizer relative to the roles and responsibilities of the various organizations leading and supporting the department’s pandemic influenza planning.

To increase the likelihood that the COCOMs can effectively continue their pandemic influenza planning and preparedness activities, including accomplishing actions assigned to DOD in the national implementation plan within established time frames, we recommend that the Secretary of Defense instruct the ASD(HD&ASA) to work with the Under Secretary of Defense (Comptroller) to identify the sources and types of resources that COCOMs need to accomplish their pandemic influenza planning and preparedness activities.

To increase the likelihood that COCOMs are more fully prepared to protect personnel and perform ongoing missions during an influenza pandemic, we recommend that the Secretary of Defense instruct the Joint Staff to work with the COCOMs to develop options to mitigate the effects of factors that are beyond the COCOMs’ control, such as limited detailed information from other federal agencies on the support expected from DOD, lack of control over DOD’s antiviral stockpile, limited information on decisions that other nations may make during an influenza pandemic, reliance on civilian medical providers for medical care, and reliance on military services for medical materiel.

In written comments on a draft of this report, DOD concurred with all of our recommendations and noted that the department is confident that future plans will adequately address specific roles, resources, and risk mitigation. DOD also provided us with technical comments, which we incorporated in the report, as appropriate. DOD’s comments are included in appendix III. We also provided the Department of State an opportunity to comment on a draft of the report, but the department had no comments.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the Chairman and Ranking Member of the Senate and House Committees on Appropriations, Subcommittees on Defense; Senate and House Committees on Armed Services; Senate Committee on Homeland Security and Governmental Affairs; House Committee on Homeland Security; and other interested congressional parties. We are also sending copies of this report to the Secretary of Defense; Secretary of State; Director, Office of Management and Budget; Chairman of the Joint Chiefs of Staff; Commanders of CENTCOM, EUCOM, JFCOM, NORTHCOM, PACOM, SOCOM, SOUTHCOM, STRATCOM, and TRANSCOM; and the Commander, U.S. Forces Korea. We will also provide copies to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staff have any questions concerning this report, please contact me at (202) 512-5431 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV.

DOD, in coordination with the Department of State and other appropriate federal agencies, host nations, and regional alliance military partners, shall, within 18 months: (1) conduct bilateral and multilateral assessments of the avian and pandemic preparedness and response plans of the militaries in partner nations or regional alliances, such as NATO, focused on preparing for and mitigating the effects of an outbreak on assigned mission accomplishment; (2) develop solutions for identified national and regional military gaps; and (3) develop and execute bilateral and multilateral military-to-military influenza exercises to validate preparedness and response plans. Measure of performance: all countries with endemic avian influenza engaged by U.S. efforts; initial assessment and identification of exercise timeline for the military of each key partner nation completed.

DOD, in coordination with the Department of State, host nations, and regional alliance military partners, shall assist in developing priority country military infection control and case management capability through training programs, within 18 months. Measure of performance: training programs carried out in all priority countries with increased military infection control and case management capability.

The Department of Health and Human Services and DOD, in coordination with the Department of State, shall enhance open source information sharing efforts with international organizations and agencies to facilitate the characterization of genetic sequences of circulating strains of novel influenza viruses within 12 months. Measure of performance: publication of all reported novel influenza viruses which are sequenced.

DOD shall develop active and passive systems for inpatient and outpatient disease surveillance at its institutions worldwide, with an emphasis on index case and cluster identification, and develop mechanisms for utilizing DOD epidemiological investigation experts in international support efforts, to include validation of systems/tools and improved outpatient/inpatient surveillance capabilities, within 18 months. Measure of performance: monitoring system and program to utilize epidemiological investigation experts internationally are in place.

DOD shall monitor the health of military forces worldwide (bases in the continental United States and outside of the continental United States, deployed operational forces, exercises, units, etc.), and in coordination with the Department of State, coordinate with allied, coalition, and host nation public health communities to investigate and respond to confirmed infectious disease outbreaks on DOD installations, within 18 months. Measure of performance: medical surveillance “watchboard” reports show results of routine monitoring, number of validated outbreaks, and results of interventions.

DOD, in coordination with the Department of State and with the cooperation of the host nation, shall assist with influenza surveillance of host nation populations in accordance with existing treaties and international agreements, within 24 months. Measure of performance: medical surveillance “watchboard” expanded to include host nations.
DOD, in coordination with the Department of Health and Human Services, shall develop and refine its overseas virologic and bacteriologic surveillance infrastructure through Global Emerging Infections Surveillance and Response System and the DOD network of overseas labs, including fully developing and implementing seasonal influenza laboratory surveillance and an animal/vector surveillance plan linked with World Health Organization (WHO) pandemic phases, within 18 months. Measure of performance: animal/vector surveillance plan and DOD overseas virologic surveillance network developed and functional.

DOD, in coordination with the Department of Health and Human Services, shall prioritize international DOD laboratory research efforts to develop, refine, and validate diagnostic methods to rapidly identify pathogens, within 18 months. Measure of performance: completion of prioritized research plan, resources identified, and tasks assigned across DOD medical research facilities.

DOD shall work with priority nations’ military forces to assess existing laboratory capacity, rapid response teams, and portable field assay testing equipment, and fund essential commodities and training necessary to achieve an effective national military diagnostic capability, within 18 months. Measure of performance: assessments completed, proposals accepted, and funding made available to priority countries.

DOD shall incorporate international public health reporting requirements for exposed or ill military international travelers into the geographic combatant commanders’ pandemic influenza plans within 18 months. Measure of performance: reporting requirements incorporated into geographic combatant commanders’ pandemic influenza plans.

DOD, in coordination with the Department of State, the Department of Health and Human Services, the Department of Transportation, and the Department of Homeland Security, shall limit official DOD military travel between affected areas and the United States. Measure of performance: DOD identifies military facilities in the United States and outside of the continental United States that will serve as the points of entry for all official travelers from affected areas, within 6 months.

DOD, in coordination with the Department of Homeland Security, the Department of Transportation, the Department of Justice, and the Department of State, shall conduct an assessment of military support related to transportation and borders that may be requested during a pandemic and develop a comprehensive contingency plan for Defense Support of Civil Authorities, within 18 months. Measure of performance: Defense Support of Civil Authorities plan in place that addresses emergency transportation and border support.

DOD, in coordination with the Department of Homeland Security and the Department of State, shall identify those domestic and foreign airports and seaports that are considered strategic junctures for major military deployments and evaluate whether additional risk-based protective measures are needed, within 18 months. Measure of performance: identification of critical air and seaports and evaluation of additional risk-based procedures, completed.
DOD, when directed by the Secretary of Defense and in accordance with law, shall monitor and report the status of the military transportation system and those military assets that may be requested to protect the borders, assess impacts (to include operational impacts), and coordinate military services in support of federal agencies and state, local, and tribal entities. Measure of performance: when DOD activated, regular reports provided, impacts assessed, and services coordinated as needed. DOD, as part of its departmental implementation plan, shall conduct a medical materiel requirements gap analysis and procure necessary materiel to enhance Military Health System surge capacity, within 18 months. Measure of performance: gap analysis completed and necessary materiel procured. The Department of Health and Human Services, DOD, the Department of Veterans Affairs, and the states shall maintain antiviral and vaccine stockpiles in a manner consistent with the requirements of the Food and Drug Administration’s Shelf Life Extension Program and explore the possibility of broadening the Shelf Life Extension Program to include equivalently maintained state stockpiles, within 6 months. Measure of performance: compliance with the Shelf Life Extension Program requirements documented; decision made on broadening the Shelf Life Extension Program to state stockpiles. DOD shall establish stockpiles of vaccine against H5N1 and other influenza subtypes determined to represent a pandemic threat adequate to immunize approximately 1.35 million persons for military use within 18 months of availability. Measure of performance: sufficient vaccine against each influenza virus determined to represent a pandemic threat in DOD stockpile to vaccinate 1.35 million persons. DOD shall procure 2.4 million treatment courses of antiviral medications and position them at locations worldwide within 18 months. Measure of performance: aggregate 2.4 million treatment courses of antiviral medications in DOD stockpiles. DOD shall supply military units and posts, installations, bases, and stations with vaccine and antiviral medications according to the schedule of priorities listed in the DOD pandemic influenza policy and planning guidance, within 18 months. Measure of performance: vaccine and antiviral medications procured; DOD policy guidance developed on use and release of vaccine and antiviral medications; and worldwide distribution drill completed. DOD shall enhance influenza surveillance efforts within 6 months by: (1) ensuring that medical treatment facilities monitor the Electronic Surveillance System for Early Notification of Community- based Epidemics and provide additional information on suspected or confirmed cases of pandemic influenza through their service surveillance activities; (2) ensuring that Public Health Emergency Officers report all suspected or actual cases through appropriate DOD reporting channels, as well as to the Centers for Disease Control and Prevention, state public health authorities, and host nations; and (3) posting results of aggregated surveillance on the DOD Pandemic Influenza Watchboard; all within 18 months. Measure of performance: number of medical treatment facilities performing Electronic Surveillance System for Early Notification of Community-based Epidemics surveillance greater than 80 percent; DOD reporting policy for public health emergencies, including pandemic influenza, completed. 
Department of Health and Human Services-, DOD-, and Department of Veterans Affairs-funded hospitals and health facilities shall have access to improved rapid diagnostic tests for influenza A, including influenza with pandemic potential, within 6 months of when tests become available. DOD and the Department of Veterans Affairs shall be prepared to track and provide personnel and beneficiary health statistics and develop enhanced methods to aggregate and analyze data documenting influenza-like illness from their surveillance systems within 12 months. Measure of performance: influenza tracking systems in place and capturing beneficiary clinical encounters. As appropriate, DOD, in consultation with its combatant commanders, shall implement movement restrictions and individual protection and social distancing strategies (including unit shielding, ship sortie, cancellation of public gatherings, drill, training, etc.) within its posts, installations, bases, and stations. DOD personnel and beneficiaries living off-base should comply with local community containment guidance with respect to activities not directly related to the installation. DOD shall be prepared to initiate these measures within 18 months. Measure of performance: the policies/procedures are in place for at-risk DOD posts, installations, bases, stations, and for units to conduct an annual training evaluation that includes restriction of movement, shielding, personnel protection measures, health unit isolation, and other measures necessary to prevent influenza transmission. All Department of Health and Human Services-, DOD-, and Department of Veterans Affairs-funded hospitals and health facilities shall develop, test, and be prepared to implement infection control campaigns for pandemic influenza, within 3 months. Measure of performance: guidance materials on infection control developed and disseminated on www.pandemicflu.gov and through other channels. DOD shall enhance its public health response capabilities by: (1) continuing to assign epidemiologists and preventive medicine physicians within key operational settings; (2) expanding ongoing DOD participation in the Centers for Disease Control and Prevention’s Epidemic Intelligence Service program; and (3) within 18 months, fielding specific training programs for Public Health Emergency Officers that address their roles and responsibilities during a public health emergency. Measure of performance: all military Public Health Emergency Officers fully trained within 18 months; increase military trainees in the Centers for Disease Control and Prevention’s Epidemic Intelligence Service program by 100 percent within 5 years. DOD and Department of Veterans Affairs assets and capabilities shall be postured to provide care for military personnel and eligible civilians, contractors, dependents, other beneficiaries, and veterans and shall be prepared to augment the medical response of state, territorial, tribal, or local governments and other federal agencies consistent with their Emergency Support Function #8—Public Health and Medical Services support roles, within 3 months. Measure of performance: DOD and Department of Veterans Affairs’ pandemic preparedness plans developed; in a pandemic, adequate health response provided to military and associated personnel. DOD shall develop and implement guidelines defining conditions under which Reserve Component medical personnel providing health care in nonmilitary health care facilities should be mobilized and deployed, within 18 months.
Measure of performance: guidelines developed and implemented. DOD and the Department of Veterans Affairs, in coordination with the Department of Health and Human Services, shall develop and disseminate educational materials, coordinated with and complementary to messages developed by the Department of Health and Human Services but tailored for their respective departments, within 6 months. Measure of performance: up-to-date risk communication material published on DOD and Department of Veterans Affairs pandemic influenza Web sites, Department of Health and Human Services Web site www.pandemicflu.gov, and in other venues. DOD, in consultation with the Department of Justice and the National Guard Bureau, and in coordination with the states as such training applies to support state law enforcement, shall assess the training needs for National Guard forces in providing operational assistance to state law enforcement under either federal (Title 10) or state (Title 32 or State Active Duty) in a pandemic influenza outbreak and provide appropriate training guidance to the states and territories for units and personnel who will be tasked to provide this support, within 18 months. Measure of performance: guidance provided to all states. DOD, in consultation with the Department of Justice, shall advise state governors of the procedures for requesting military equipment and facilities, training, and maintenance support as authorized by 10 U.S.C. §§ 372-74, within 6 months. Measure of performance: all state governors advised. The Department of Justice, the Department of Homeland Security, and DOD shall engage in contingency planning and related exercises to ensure they are prepared to maintain essential operations and conduct missions, as permitted by law, in support of quarantine enforcement and/or assist state, local, and tribal entities in law enforcement emergencies that may arise in the course of an outbreak, within 6 months. Measure of performance: completed plans (validated by exercise) for supporting quarantine enforcement and/or law enforcement emergencies. To determine the actions the combatant commands (COCOM) have taken to date to prepare for an influenza pandemic, we reviewed drafts of the five geographic COCOMs’ plans and one functional COCOM’s plan that were available at the time of our review. We did not evaluate these plans; rather we used the plans to determine what actions the COCOMs have taken and plan to take to prepare for an influenza pandemic. Additionally we reviewed planning orders issued by the Joint Staff to the COCOMs in November 2005 and April 2007, DOD’s implementation plan for pandemic influenza issued in August 2006, the Implementation Plan for the National Strategy for Pandemic Influenza issued by the Homeland Security Council in May 2006, DOD’s budget requests for fiscal years 2007 and 2008 and appropriations for fiscal year 2007, and after-action reports from exercises related to pandemic influenza. Furthermore, we met with more than 200 officials involved in pandemic influenza planning and preparedness efforts at the nine COCOMs, including operational, medical, logistics, and continuity of operations planners; budget analysts; intelligence analysts and planners; public affairs professionals; humanitarian assistance liaisons; and representatives from the office of the command surgeon, including officials involved in force health protection activities. 
To better understand the extent of the COCOMs’ efforts to plan and prepare for an influenza pandemic, we met with officials or, in one case, received written responses to our questions from the following COCOMs and their subcomponents:
Headquarters, U.S. Central Command, MacDill Air Force Base, Florida;
Headquarters, U.S. European Command, Patch Barracks, Germany;
Marine Forces Europe, Patch Barracks, Germany;
Naval Forces Europe, Patch Barracks, Germany;
Special Operations Command Europe, Patch Barracks, Germany;
U.S. Air Forces Europe, Ramstein Air Base, Germany;
U.S. Army Europe, Campbell Barracks, Germany;
Installation Management Command Europe, Campbell Barracks, Germany;
European Regional Medical Command, Campbell Barracks, Germany;
U.S. Army Medical Materiel Command Europe, Pirmasens, Germany;
Headquarters, U.S. Joint Forces Command, Norfolk, Virginia;
Headquarters, U.S. Northern Command, Peterson Air Force Base, Colorado;
Headquarters, U.S. Pacific Command, Camp H.M. Smith, Hawaii;
Marine Forces Pacific, Camp H.M. Smith, Hawaii;
Pacific Air Force, Hickam Air Force Base, Hawaii;
Pacific Fleet, Naval Station Pearl Harbor, Hawaii;
Special Operations Command Pacific, Camp H.M. Smith, Hawaii;
U.S. Army Pacific, Fort Shafter, Hawaii;
U.S. Forces Korea, Yongsan Army Garrison, South Korea;
U.S. Naval Forces Korea, Yongsan Army Garrison, South Korea;
7th Air Force, Osan Air Base, South Korea;
18th Medical Command, Yongsan Army Garrison, South Korea;
Installation Management Command Korea Regional Office, Yongsan Army Garrison, South Korea;
Installation Management Command Pacific, Fort Shafter, Hawaii;
Tripler Army Medical Center, Hawaii;
Headquarters, U.S. Southern Command, Miami, Florida;
Headquarters, U.S. Special Operations Command, MacDill Air Force Base, Florida;
Headquarters, U.S. Strategic Command, Offutt Air Force Base, Nebraska;
Headquarters, U.S. Transportation Command, Scott Air Force Base, Illinois.

We elected to meet with officials from the military service and special operations subcomponents at the U.S. European Command and U.S. Pacific Command because these two commands have had to address outbreaks of H5N1 avian influenza in their areas of responsibility. We selected U.S. Forces Korea because of the number of cases of H5N1 avian influenza in South Korea and the large number of U.S. military personnel stationed in U.S. Forces Korea’s area of responsibility. Furthermore, to better understand how the COCOMs’ planning and preparedness efforts relate to DOD’s departmentwide planning efforts, we met in the Washington, D.C., area with officials from the Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs, Office of the Assistant Secretary of Defense for Health Affairs, and Joint Staff. We also met with officials from the Department of State to better understand their pandemic influenza planning and preparedness efforts, as they relate to the COCOMs’ efforts. We did not assess the efforts of the individual installations to prepare for an influenza pandemic or whether installations’ implementation plans supported the COCOM or military services’ plans because many installations had not yet completed their implementation plans and because our focus for this report was on the COCOM-level planning and preparedness efforts. To determine management challenges that COCOMs face as they continue their planning efforts, we compared the COCOMs’ actions to date to best practices that we have identified in our prior work.
Specifically, we reviewed our previous work on risk management, influenza pandemics, emergency preparedness, and overall management to determine whether other issues or lessons learned addressed in these reports were applicable to the COCOMs’ pandemic influenza planning and preparedness efforts. This work is referenced in the list of Related GAO Products at the end of this report. We conducted our review from September 2006 through April 2007 in accordance with generally accepted government auditing standards.

Mark A. Pross, Assistant Director; Susan Ditto; Nicole Gore; Simon Hirschfeld; Aaron Johnson; and Hilary Murrish made key contributions to this report.

Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-835T. Washington, D.C.: May 15, 2007.
Financial Market Preparedness: Significant Progress Has Been Made, but Pandemic Planning and Other Challenges Remain. GAO-07-399. Washington, D.C.: March 29, 2007.
Public Health and Hospital Emergency Preparedness Programs: Evolution of Performance Measurement Systems to Measure Progress. GAO-07-485R. Washington, D.C.: March 23, 2007.
Homeland Security: Preparing for and Responding to Disasters. GAO-07-395T. Washington, D.C.: March 9, 2007.
Homeland Security: Applying Risk Management Principles to Guide Federal Investments. GAO-07-386T. Washington, D.C.: February 7, 2007.
Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006.
Hurricane Katrina: Better Plans and Exercises Need to Guide the Military’s Response to Catastrophic Natural Disasters. GAO-06-808T. Washington, D.C.: May 25, 2006.
Hurricane Katrina: Better Plans and Exercises Needed to Guide the Military’s Response to Catastrophic Natural Disasters. GAO-06-643. Washington, D.C.: May 15, 2006.
Continuity of Operations: Agencies Could Improve Planning for Telework during Disruptions. GAO-06-740T. Washington, D.C.: May 11, 2006.
Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006.
Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006.
Statement by Comptroller General David M. Walker on GAO’s Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006.
Influenza Pandemic: Applying Lessons Learned from the 2004-05 Influenza Vaccine Shortage. GAO-06-221T. Washington, D.C.: November 4, 2005.
Influenza Vaccine: Shortages in 2004-05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005.
Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005.
Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005.
Flu Vaccine: Recent Supply Shortages Underscore Ongoing Challenges. GAO-05-177T. Washington, D.C.: November 18, 2004.
Emerging Infectious Diseases: Review of State and Federal Disease Surveillance Efforts. GAO-04-877. Washington, D.C.: September 30, 2004.
Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreaks. GAO-04-1100T. Washington, D.C.: September 28, 2004.
Emerging Infectious Diseases: Asian SARS Outbreak Challenged International and National Responses. GAO-04-564. Washington, D.C.: April 28, 2004.
Public Health Preparedness: Response Capacity Improving but Much Remains to Be Accomplished. GAO-04-458T. Washington, D.C.: February 12, 2004.
HHS Bioterrorism Preparedness Programs: States Reported Progress but Fell Short of Program Goals for 2002. GAO-04-360R. Washington, D.C.: February 10, 2004.
Hospital Preparedness: Most Urban Hospitals Have Emergency Plans but Lack Certain Capacities for Bioterrorism Response. GAO-03-924. Washington, D.C.: August 6, 2003.
Severe Acute Respiratory Syndrome: Established Infectious Disease Control Measures Helped Contain Spread, But a Large-Scale Resurgence May Pose Challenges. GAO-03-1058T. Washington, D.C.: July 30, 2003.
SARS Outbreak: Improvements to Public Health Capacity Are Needed for Responding to Bioterrorism and Emerging Infectious Diseases. GAO-03-769T. Washington, D.C.: May 7, 2003.
Infectious Disease Outbreaks: Bioterrorism Preparedness Efforts Have Improved Public Health Response Capacity, but Gaps Remain. GAO-03-654T. Washington, D.C.: April 9, 2003.
Flu Vaccine: Steps Are Needed to Better Prepare for Possible Future Shortages. GAO-01-786T. Washington, D.C.: May 30, 2001.
Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001.
Influenza Pandemic: Plan Needed for Federal and State Response. GAO-01-4. Washington, D.C.: October 27, 2000.
Global Health: Framework for Infectious Disease Surveillance. GAO/NSIAD-00-205R. Washington, D.C.: July 20, 2000.

An influenza pandemic could impair the military’s readiness, jeopardize ongoing military operations abroad, and threaten the day-to-day functioning of the Department of Defense (DOD) due to a large percentage of sick or absent personnel. GAO was asked to examine DOD’s pandemic influenza planning and preparedness efforts. GAO previously reported that DOD had taken numerous actions to prepare departmentwide, but faced four management challenges as it continued its efforts. GAO made recommendations to address these challenges and DOD generally concurred with them. This report focuses on DOD’s combatant commands (COCOM) and addresses (1) actions the COCOMs have taken to prepare and (2) management challenges COCOMs face going forward. GAO reviewed guidance, plans, and after-action reports and interviewed DOD officials and more than 200 officials at the 9 COCOMs.

COCOMs have taken numerous management and operational actions to prepare for an influenza pandemic, and the COCOMs’ efforts are evolving. Each of DOD’s nine COCOMs has established or intends to establish a working group to prepare for an influenza pandemic. Additionally, eight of the nine COCOMs have developed or are developing a pandemic influenza plan. Half of the COCOMs have conducted exercises to test their pandemic influenza plans and several are taking steps to address lessons learned. Five of the nine COCOMs have started to use various media, training programs, and outreach events to inform their personnel about pandemic influenza. Each of the geographic COCOMs has worked or plans to work with nations in its area of responsibility to raise awareness about and assess capabilities for responding to avian and pandemic influenza.
Although COCOMs have taken numerous actions, GAO identified three management challenges that may prevent the COCOMs from being fully prepared to effectively protect personnel and perform missions during an influenza pandemic, two of which are related to issues GAO previously recommended that DOD address. First, the roles, responsibilities, and authorities of key organizations relative to others involved in DOD's planning efforts remained unclear in part due to the continued lack of sufficiently detailed guidance from the Secretary of Defense or his designee. As a result, the unity and cohesiveness of DOD's efforts could be impaired and the potential remains for confusion and gaps or duplication in actions taken by the COCOMs relative to the military services and other DOD organizations, such as in completing actions assigned to DOD in the Implementation Plan for the National Strategy for Pandemic Influenza. Second, GAO identified a disconnect between the COCOMs' planning and preparedness activities and resources, including funding and personnel, to complete these activities, in part, because DOD's guidance does not identify the resources required to complete these activities. The continued lack of a link between planning and preparedness activities and resources may limit the COCOMs' ability to effectively prepare for and respond to an influenza pandemic, including COCOMs' ability to exercise pandemic influenza plans in the future. Third, GAO identified factors that are beyond the COCOMs' control--such as limited detailed guidance from other federal agencies on support expected from DOD, lack of control over DOD's stockpile of antivirals, limited information on decisions that other nations may make during an influenza pandemic, reliance on civilian medical providers for medical care, and reliance on military services for medical materiel--that they have not yet fully planned how to mitigate. While GAO recognizes the challenge of pandemic influenza planning, not yet developing options to mitigate the effects of factors that are beyond their control may place at risk the COCOM commanders' ability to protect their personnel and perform missions during an influenza pandemic. For example, if a nation decides to close its borders at the start of a pandemic, COCOMs and installations may not be able to obtain needed supplies, such as antivirals. |
The EZ/EC and RC programs target federal grant monies to public and private entities, tax benefits to businesses, or both in order to improve conditions in competitively selected, economically distressed communities. To be considered for these programs, areas must be nominated by one or more local governments and the state or states in which they are located. Areas on Indian reservations must be nominated by the reservation’s governing body. Congress authorized the EZ/EC and RC programs under four separate acts of legislation (see table 1). To date, Congress has authorized the designation of three rounds of EZs, two rounds of ECs, and one round of RCs. See appendix II for a list of all designated communities. The Omnibus Budget Reconciliation Act of 1993 authorized the number of EZ/EC designations to be awarded in the first round of the program, as well as the benefits that the designated communities would receive. The legislation authorized the special use of $1 billion in Social Services Block Grant funds for the EZ/EC program. It also established three tax benefits for businesses in the designated communities: (1) a tax credit for wages paid to employees who both live and work in an EZ, (2) an increased expensing deduction for depreciable property, and (3) tax-exempt bonds. At the same time that the Department of Housing and Urban Development (HUD) and the U.S. Department of Agriculture (USDA) announced the Round I designations, HUD created two additional designations— Supplemental Empowerment Zones and Enhanced Enterprise Communities. Unlike EZs or ECs, these designations were not legislatively mandated. Rather, they were awarded to communities that had been nominated for but did not receive EZ designations. HUD designated two communities as Supplemental Empowerment Zones and four communities as Enhanced Enterprise Communities. HUD provided these communities with certain grants and loan guarantees, which can be used for activities eligible under the Community Development Block Grant program. The second round of EZ/EC designations and the benefits those designees would receive were authorized by two acts of legislation—the Taxpayer Relief Act of 1997 and the Omnibus Consolidated and Emergency Supplemental Appropriations Act of 1999. However, neither act authorized block grant funding for Round II EZs and ECs. Instead, Round II EZs and ECs received annual appropriations through HUD and USDA appropriations bills each year from 1999 to 2004. The Community Renewal Tax Relief Act of 2000 enhanced the tax benefits available to businesses in newly designated EZs and made these new tax benefits available to EZs that had been designated in previous rounds, but not to ECs. The legislation also did not make any appropriations or grant funds available to Round III EZs or to RCs. However, in January 2004, the Consolidated Appropriations Act of 2004 appropriated a total of $994,100 to Round III rural EZs. This legislation did not make any funding available to Round III urban EZs or RCs. Four federal agencies are responsible for administering the programs: HUD oversees the EZ/EC program in urban areas, administers the grants to Round II urban EZs, and oversees the RC program in both urban and rural areas. USDA oversees the EZ/EC program in rural areas and administers the grants to Round II rural EZ/ECs and Round III EZs. The Department of Health and Human Services (HHS) administers the Social Services Block Grant funds to communities designated in Round I of the EZ/EC program. 
The Internal Revenue Service (IRS) is responsible for administering the tax benefits available under the EZ/EC and RC programs. Although the EZ/EC and RC programs have similar goals and objectives, several features of the programs vary within the EZ/EC program by round and between the EZ/EC and RC programs. The EZ/EC and RC programs share the goal of improving conditions in distressed communities by reducing unemployment and fostering investment in designated areas. However, certain administrative mechanisms, eligibility requirements, selection criteria, and benefits vary among EZ/EC rounds and between the EZ/EC and RC programs. Although the legislation that created the EZ/EC and RC programs does not explicitly state the goals for these programs, HUD’s and USDA’s performance plans suggest that the goals of the programs are similar. According to HUD’s Annual Performance Plan for Fiscal Year 2004, the EZ/EC and RC programs are contained within its strategic goal to strengthen communities. Similarly, in its Fiscal Year 2004 Annual Performance Plan and Revised Plan for Fiscal Year 2003, USDA includes the EZ/EC program under the strategic goal to “support increased economic opportunities and improved quality of life in rural America.” HUD’s and USDA’s implementing regulations for the EZ/EC program include a statement of their “objective and purpose,” each of which generally states that the EZ/EC program is intended to reduce unemployment and promote the revitalization of economically distressed areas. HUD’s regulations implementing the RC program do not have an objective and purpose statement; however, HUD’s guidance states that the RC program is intended “to foster investment in the designated areas, which are some of the most severely distressed and development-resistant areas in the Nation.” Further, HUD program officials have stated that they regard the RC program as pursuing the same objective and purpose as the EZ/EC program, but relying on different methods. Certain features designed to help in the administration of the EZ/EC and RC programs varied by round in the EZ/EC program and between the EZ/EC and RC programs. To facilitate federal interagency coordination in the EZ/EC program, a 26-member Community Empowerment Board was established in 1993, with the U.S. Vice President as its chair and cabinet secretaries and other high-ranking federal officials as members. The board’s function was to consult in the designation of Round I and II EZs and ECs and coordinate the various federal agency resources that EZs and ECs would use to implement their strategic plans. For example, the Community Empowerment Board encouraged other agencies to provide preference points to EZs and ECs in selection competitions for other federal programs. The Community Empowerment Board was disbanded prior to Round III of the EZ program. In 2000, the legislation creating the RC program established a seven- member Advisory Council on Community Renewal to advise the HUD Secretary on the selection of designees and the operation of the RC program. Unlike the Community Empowerment Board, the Advisory Council does not have federal interagency membership. Instead, the members of the Advisory Council include individuals from nonprofit and for-profit organizations who are appointed by the HUD Secretary. 
The legislation that created it required the Advisory Council to hold hearings “as appropriate,” obtain data from federal agencies, and submit a report containing a detailed statement of the council’s findings and conclusions and any recommendations to the HUD Secretary by September 30, 2003. HUD officials expect an interim report from the Advisory Council to be released sometime in February 2004 and the final report in October 2004. Communities nominated for EZ/EC or RC designations have been required to meet certain eligibility requirements based largely on the socioeconomic characteristics of the residents living in the nominated areas. Specifically, nominated census tracts have been required to meet statutory or regulatory requirements for (1) poverty in each census tract, (2) overall unemployment, (3) total population, (4) total area in square miles (in the case of the EZ/EC program), and (5) general distress. In most cases, these requirements were based on 1990 census data. The levels required for eligibility differed by round, by program, and between urban and rural nominees. For example, the statutory requirements for poverty differed between Round I and subsequent rounds of the EZ/EC program, and between the EZ/EC program and the RC program. In the absence of statutory guidelines, HUD and USDA regulations defined other eligibility requirements differently (see table 2). For example, the requirements for unemployment differed between urban and rural nominees and between the EZ/EC program and the RC program. The population requirements also differed between urban and rural nominees and by program. Finally, communities nominated for the EZ/EC program were required to meet area requirements, while RC nominees were not. Nominated communities were also required to show conditions of general distress. Because the legislation did not define the term “general distress,” HUD and USDA each provided communities with lists of potential indicators containing criteria that could be used to meet this requirement. HUD provided Round I urban EZs and ECs with a list of six indicators and communities in Rounds II and III of the EZ/EC program and the RC program with a list of 17 indicators. USDA provided Round I, II, and III rural EZs and ECs with a list of 14 indicators. For example, USDA’s Round I list included indicators not included in HUD’s Round I list, such as a below-average or declining per capita income, earnings per worker, per capita property tax base, and average years of school completed. USDA, in turn, did not include homelessness as an indicator of general distress, while HUD included homelessness. As a part of the EZ/EC eligibility requirements, nominated communities were also required to submit a strategic plan. The strategic plan had to follow the four key principles of the EZ/EC program, which were established by HUD and USDA in their regulations: Economic opportunity—including job creation within the community; supporting entrepreneurship; small business expansion; and job training, job readiness, and job support services. Sustainable community development—advancing the creation of livable and vibrant communities through comprehensive approaches that coordinate economic, physical, environmental, community, and human development. Community-based partnerships—involving the participation of all segments of the community, including the political leadership, community groups, the private and nonprofit sectors, and individual citizens. 
Strategic vision for change—coordinating a response to community needs in a comprehensive fashion and setting goals and performance measures. HUD’s and USDA’s regulations implemented legislative requirements regarding community participation in the development of their strategic plans. Nominees were to obtain community input to identify their communities’ needs and to develop plans for addressing them according to the four principles. Nominees were also required to describe the role citizens would play in the implementation of the plans. To be eligible for the RC program, nominees were required to submit a “course of action,” in which they committed to carry out four of six specific legislatively mandated activities: A reduction of tax rates or fees applying within the RC; An increase in the level of efficiency of local services within the RC; Certain crime reduction strategies; Actions to reduce, remove, simplify, or streamline governmental requirements applying within the RC; Involvement in economic development activities by private entities, organizations, neighborhood organizations, and community groups; and The gift or sale at below fair-market value of surplus real property in the RC held by state or local governments to neighborhood organizations, community development corporations, or private companies. In addition, communities nominated for RC designation had to certify that they would meet four of five legislatively specified economic growth promotion requirements, such as repealing or reducing some occupational licensing requirements, zoning restrictions, permit requirements, or franchise and other business restrictions. The designees were also responsible for submitting plans within 6 months of designation for promoting the use of the tax benefits and for carrying out other state and local commitments. RCs were required to certify that they had solicited community input but not that community representatives had been involved in developing the course of action. The selection criteria contained in the authorizing legislation for the EZ/EC and RC programs differed substantially. For the EZ/EC program, HUD and USDA were required to rank nominees based on the effectiveness of their strategic plans, the nominees’ assurances that the plans would be implemented, and additional criteria specified by the respective Secretary. In Round I of the EZ/EC program, the legislation also reserved designation for nominees with certain characteristics. In contrast, the RC selection process did not require a review of the effectiveness of the planning documents, such as the course of action, that communities submitted to meet eligibility requirements. Instead, the legislation authorizing the program required HUD to select the highest average ranking nominees based on poverty, unemployment, and, in urban areas, income statistics. HUD was also required to consider the extent of crime in the area and whether the nominated area contained any tracts that were identified in one of our reports as being distressed. For the first 20 designations, HUD was to give preference to existing EZs and ECs that had been nominated and met the eligibility requirements for designation as an RC; the remaining designations went to the next-highest scorers. As discussed earlier, the authorizing legislation provided EZ/EC and RC program participants with grants, tax benefits, or both. 
Over the course of the three rounds of the EZ/EC program, however, the amount of the grants available to EZs declined, and the number of tax benefits increased. In Rounds I and II, ECs received much smaller grant benefits than EZs. Businesses in Round I ECs were eligible for one tax benefit; however, businesses in Round II ECs were not eligible for any tax benefits. RCs did not receive grants, but businesses operating in RCs were eligible to receive tax benefits. In addition, HUD and USDA provided designated communities with other benefits. For example, HUD and USDA provided Round II EZ/ECs with grant funds from their annual appropriations. See appendix III for a table summarizing the benefits provided to the designated communities. Congress appropriated a total of $1 billion in Social Services Block Grants for the benefit of Round I EZs and ECs (see table 3). These funds were to be used to (1) prevent, reduce, or eliminate dependency; (2) achieve or maintain self-sufficiency; and (3) prevent neglect, abuse, or exploitation of children and adults. In addition, the legislation required that the funds be used to benefit EZ/EC residents and in accordance with designees’ strategic plans. Like other Social Services Block Grant funds, those allotted for the EZ/EC program were granted to the states, which were given fiscal responsibility for them. The legislation authorizing the EZ/EC program requires that the states obligate these grants for specific EZ or EC community-based organizations in accordance with state laws and procedures and within 2 years of the date that HHS awarded the funds. These block grant funds remain available to finance qualified projects until December 21, 2004, after which time the grants are subject to state close- out procedures, and all amounts reported as unspent must be returned to the federal government. As stated previously, HUD created the Supplemental Empowerment Zone and Enhanced Enterprise Community designations at the same time as the Round I EZ/EC designations. HUD awarded $300 million in Economic Development Initiative grants to the two Supplemental Empowerment Zones and four Enhanced Enterprise Communities (see table 4). These grants were designed to enhance the feasibility of certain economic development or revitalization projects by paying for certain project costs or providing additional security for loans that finance such projects. The government entities were required to use these grants in accordance with the community’s strategic plan and Community Development Block Grant regulations. Initially, the administration planned to provide the Round II EZ designees with the same level of funding as Round I designees. Instead, Round II EZ/EC designees received funding through annual appropriations for HUD and USDA in fiscal years 1999 through 2003 (see tables 5 and 6). According to HUD’s annual appropriations legislation, program grants for Round II urban designees are to be used in conjunction with economic development activities consistent with designees’ strategic plans. The USDA appropriations language did not impose these requirements on Round II rural designees; however, in March 2002, USDA issued regulations limiting the allowable uses of Round II grants to those for Round I EZ/EC Social Services Block Grants. Unlike Round I funds, which pass through a state agency, Round II EZs and ECs access their grants directly from HUD or USDA. In most cases, these funds are available to communities until expended. 
As of September 30, 2003, no direct funding was available for Round III EZs or RCs. However, the Consolidated Appropriations Act of 2004 appropriated $994,100 for Round III rural EZs. It did not appropriate funding for Round III urban EZs or any RCs. Businesses operating in or employing residents of EZs and RCs are eligible for a number of federal tax benefits designed to encourage business investment (see table 7). Businesses operating in ECs are generally ineligible for the tax benefits, although state and local governments can issue tax-exempt bonds for businesses in Round I ECs. Since the initial legislation authorizing the EZ/EC program, the number of federal tax benefits has grown. For example, businesses operating in Round I EZs were originally eligible for three tax benefits: (1) a credit for wages paid to employees who both live and work in an EZ, (2) an increased expensing deduction for depreciable property, and (3) tax-exempt bonds. By 2002, businesses operating in EZs were eligible for several additional tax benefits, including capital gains exclusions, and more generous tax-exempt bond and expensing provisions. In addition to some of the same tax benefits available in EZs, businesses operating in RCs were also eligible for a deduction on commercial property and a different type of capital gains exclusion. These benefits are generally available until 2009, when the EZ and RC designations expire. Taxpayers operating businesses in or employing residents of EZs, ECs, or RCs can also claim other tax benefits not specific to the federal designations. Some federal tax benefits are aimed at businesses that operate or invest in a distressed community or that employ or provide housing for low-income persons. For example, certain banks, insurance companies, and corporations that lend money may purchase Qualified Zone Academy Bonds, which raise funds for public schools located in low-income areas, such as those located in EZs or ECs. Purchasers of these bonds receive a tax credit in lieu of interest payments. (See app. IV for examples of federal tax benefits aimed at distressed communities and low-income individuals.) In addition to the federal tax benefits, businesses operating in federally designated EZs, ECs, and RCs may also be eligible for tax benefits from the state when a federal designation overlaps with a state-designated Enterprise Zone. While the specific tax benefits provided to businesses operating in state Enterprise Zones vary from state to state, they can include credits on state taxes against withholdings, property tax reductions, and sales tax exemptions. In addition to grant monies and tax benefits, certain designees have also been eligible for other benefits. In Round I of the EZ/EC program, HUD and USDA guidance invited nominees to request, as an addendum to their applications, waivers from federal programmatic, statutory, or regulatory requirements to facilitate their ability to conduct revitalization efforts. Also, communities designated by HUD as Supplemental Empowerment Zones or Enhanced Enterprise Communities were provided with a total of $653 million in Section 108 Loan Guarantees to provide security for loans that finance economic development and revitalization projects. Finally, HUD and USDA expect EZs, ECs, and RCs to use their designations to attract additional investment. In some cases, EZ, EC, and RC designees were provided with a competitive priority in other federal programs to help them meet this expectation.
Appendix V provides details on these benefits. To date, the federal agencies have selected three rounds of EZs, two rounds of ECs, and one round of RCs; provided program benefits, outreach, and technical assistance; and established oversight procedures. Available data indicate that community organizations and businesses are using the program benefits; however, certain limitations with tax benefit data will, among other things, make it difficult to audit or evaluate the programs and limit the ability of designated communities to report on their activities. HUD and USDA have implemented three rounds of selection for the EZ/EC program, and HUD has implemented one round of selection for the RC program. Following the authorizing legislation for each round, HUD and USDA released interim rules for designation and formally invited community nominations through a notice inviting applications or notice of funding availability. Figure 1 shows a timeline of the selection process for each of the programs. The implementation of the selection process varied by round of the EZ/EC program and between the EZ/EC and RC programs. In Round I of the EZ/EC program, HUD and USDA used several interagency review teams to rank nominees based on their strategic plans’ effectiveness and alignment with the program’s four principles. As a part of their review of nominees for Rounds II and III of the EZ/EC program, the interagency review teams assigned point values based on the quality of various characteristics of the nominees’ strategic plans. In Rounds I and II, nominees that had applied for an EZ designation but had not been chosen were added to the list of eligible nominations for EC designations. For the RC program, HUD used a statistical formula that was based on the eligibility criteria to identify the eligible nominees with the highest scores. Figure 2 shows the geographic locations of the designated communities by round. Communities that had received designations in prior rounds of the EZ/EC program were permitted to apply for subsequent rounds. Within the EZ/EC program, communities that had received EC designations in Rounds I and II were permitted to apply for EZ status in subsequent rounds. If an EC was chosen to receive an EZ designation, it maintained both designations, along with the associated benefits. The five Round I ECs that also received Enhanced Enterprise Community or Supplemental Empowerment Zone designations maintained their EC status as well. In addition, ECs and EZs were encouraged to apply for RC designations. However, in contrast to the EZ program, the authorizing legislation for the RC program required that the communities forfeit prior designations when they received RC designation. Table 8 provides more information on the number of nominations, communities selected in each round, and the number of designations retained as of September 30, 2003. Figure 3 shows the geographic locations of the designated communities by designation status as of September 30, 2003. On average, the geographic areas designated in the three rounds of the EZ/EC program and in the RC program had slightly different poverty and unemployment levels, but their average population, area, and population density statistics differed more substantially.
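Returning to the RC selection process described above, in which HUD designated the eligible nominees with the highest average rankings on poverty, unemployment, and, for urban areas, income statistics, the mechanics of that kind of average-rank selection can be illustrated with a minimal sketch in Python. The nominee data, field names, equal weighting of the three criteria, and number of designations below are hypothetical assumptions, not HUD's actual formula or data.

    # Illustrative sketch of an average-rank selection, loosely modeled on the RC
    # designation process described above. The nominee data, field names, equal
    # weighting, and number of designations are hypothetical assumptions.
    nominees = [
        {"name": "Nominee A", "poverty": 34.1, "unemployment": 12.4, "income": 21500},
        {"name": "Nominee B", "poverty": 29.8, "unemployment": 14.0, "income": 19800},
        {"name": "Nominee C", "poverty": 38.5, "unemployment": 10.9, "income": 23100},
    ]

    def rank(values, higher_is_worse=True):
        """Rank values so that 1 marks the most distressed nominee."""
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=higher_is_worse)
        ranks = [0] * len(values)
        for position, index in enumerate(order, start=1):
            ranks[index] = position
        return ranks

    poverty_ranks = rank([n["poverty"] for n in nominees])
    unemployment_ranks = rank([n["unemployment"] for n in nominees])
    # Lower income indicates greater distress, so income is ranked in ascending order.
    income_ranks = rank([n["income"] for n in nominees], higher_is_worse=False)

    for nominee, p, u, i in zip(nominees, poverty_ranks, unemployment_ranks, income_ranks):
        nominee["average_rank"] = (p + u + i) / 3

    # Designate the nominees with the best (lowest) average rank; two are chosen here.
    for nominee in sorted(nominees, key=lambda n: n["average_rank"])[:2]:
        print(nominee["name"], round(nominee["average_rank"], 2))

In this simplified version, each criterion is ranked so that 1 marks the most distressed nominee, the ranks are averaged with equal weight, and the lowest average ranks are designated; the actual process also reflected the other factors described above, such as the extent of crime and the statutory preference for existing EZs and ECs.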
As noted earlier, with some exceptions, nominees were directed to use 1990 census data to qualify for poverty, unemployment, population, and area criteria; however, the levels required for eligibility varied among rounds of the EZ/EC program and between the EZ/EC program and the RC program. According to 1990 census data, a total of about 8 million people lived in EZs, ECs, and RCs. Among urban EZs, average poverty rates were somewhat lower in each successive round, but average unemployment rates were similar across rounds. The poverty and unemployment rates of rural EZs were relatively constant, although unemployment was higher in Round I. ECs did not vary greatly between rounds in terms of poverty and unemployment. The RCs’ average poverty and unemployment rates are comparable to those of EZ designees in Rounds II or III. However, the average population of the rural communities designated as RCs was much higher than that of rural EZs or ECs, and the average population for urban communities was highest in Round I EZs and in RCs. The average area of the communities varied among the rounds and between the two programs. The population density for urban communities remained much higher than that of rural communities, with urban communities ranging from about 4,000 persons per square mile to over 10,000 persons per square mile and rural communities ranging from less than 10 persons per square mile to about 45 persons per square mile. Figure 4 shows the average characteristics of the communities by designation, round, and urban or rural location. As of September 30, 2003, (1) state agencies had drawn down about 71 percent of the Social Services Block Grants authorized for the EZ/EC program, (2) the eight local government entities that received Economic Development Initiative grants as part of the Supplemental Empowerment Zone and Enhanced Enterprise Community designations had drawn down about 55 percent of the $300 million in Economic Development Initiative funds that HUD had awarded, and (3) Round II EZs and ECs had drawn down 42 percent of the HUD and USDA program grants appropriated between fiscal years 1999 and 2003 (see table 9). IRS maintains two principal sources of tax data. The first is an electronic master file system, which includes a business master file and an individual master file, each of which contains selected line-item data from business and individual tax filings. In addition, IRS’s Statistics of Income Division maintains a second set of data files that are generally based on a sample of tax returns. IRS maintains selected information on the EZ/EC and RC programs’ tax benefits in the master file data sets for tax years 1996 through 2002 and the Statistics of Income data set for tax years 1994 through 2001. In addition, the contractor preparing the HUD Interim Assessment performed a survey of businesses to determine their use of tax incentives, but these findings are limited to Round I designees. Currently, IRS can report on the use of the EZ Employment Credit at the national level. The most readily available IRS data from the Statistics of Income Division indicate that taxpayers are making some use of this credit. Nationally, corporations and individuals claimed an estimated total of $251 million in EZ Employment Credits between 1995 and 2001 (see figs. 5 and 6). Businesses were able to begin claiming the RC Employment Credit in 2002, using the same form they use to claim the EZ Employment Credit.
IRS will be able to report on taxpayers’ use of the RC Employment Credit when some data becomes available in mid-2004. However, because the same line is used to record the amount claimed for both the RC Employment Credit and the EZ Employment Credit, IRS will not be able to distinguish between the amount claimed for RC Employment Credits and the amount claimed for EZ Employment Credits. In addition, according to IRS officials, the agency cannot reliably link businesses claiming the employment credit with specific EZs or RCs due to two factors. First, according to IRS officials, the addresses business owners list on tax forms do not necessarily correspond with the location of their business operations, but may be a residence or the address where the business is incorporated. Second, the IRS form used to claim the EZ and RC Employment Credits does not require the taxpayer to identify the EZ(s) or RC(s) where the business operations eligible for the credit are located. To identify the amount of employment credits claimed by businesses in specific EZs or RCs, taxpayers would have to identify the EZs or RCs where they had business operations. One way to collect this information would be for IRS to amend its form to request additional information. Senior IRS officials cited several reasons why amending its tax forms is not a high priority for the agency. First, they said that IRS’s role is to administer tax laws, and that collecting more comprehensive data on the use of these benefits does not help the agency to achieve this objective. Second, the officials indicated that requesting taxpayers to provide more information would add to taxpayer burden and IRS workload. Third, IRS officials told us that they allocate their resources based on the potential effect of abuse on federal revenue and noted that these tax benefits are not considered high risk, since the amount claimed is small compared with revenues collected from other tax provisions or the amount of potential losses from abusive tax schemes. Between 1995 and 2001, state and local governments issued a total of 36 different series of tax-exempt bonds with an aggregate issue price of $315 million explicitly for the benefit of businesses operating in EZs and Round I ECs, as well as the D.C. Enterprise Zone. Figure 7 shows the number and aggregate issue price of tax-exempt bonds issued for the benefit of businesses operating in these communities between fiscal years 1995 and 2001. The dramatic increase in the amount of bonds issued since 1999 can be attributed to the issuance of EZ Facility Bonds, which are not subject to state volume caps and can be issued for generally larger amounts than the original Enterprise Zone Facility Bonds, and the issuance of D.C. Enterprise Zone Facility Bonds. IRS cannot report on the extent to which businesses operating in an EZ or RC are claiming the increased expensing deduction, the Commercial Revitalization Deduction, or the Nonrecognition of Gain on the Sale of EZ Assets, because taxpayers do not report these benefits as separate items on their returns. In addition, two benefits, the Zero Percent Capital Gains Rate for RC Assets and the Partial Exclusion of Gain on the Sale of EZ Stock, cannot be claimed until 2007 and 2005, respectively. Table 10 provides a summary of the data available on all nine tax benefits. The lack of data on the use of some of the tax benefits available to businesses in EZs and RCs limits the ability of HUD and USDA to administer the programs.
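The reporting limitation described above can be seen in a small sketch of hypothetical records. The layouts below are not IRS's actual form or master file schema; they are assumptions used only to show why a single combined credit line with no zone identifier supports national totals but not program- or zone-level reporting, and what an amended record would have to capture.

    # Hypothetical return records, not IRS's actual schema. As filed, each return
    # reports one combined employment-credit amount on a single line, with only a
    # mailing address to suggest location.
    returns_as_filed = [
        {"taxpayer": "Business 1", "mailing_state": "TX", "employment_credit": 12000},
        {"taxpayer": "Business 2", "mailing_state": "NY", "employment_credit": 4500},
    ]

    # With only a combined amount and a mailing address, the most that can be
    # reported is a national total; the EZ and RC portions and the specific
    # communities involved remain unknown.
    print("National total claimed:", sum(r["employment_credit"] for r in returns_as_filed))

    # A hypothetical amended record that would support program- and zone-level
    # reporting would need, at a minimum, the credit type and the designated
    # community for each amount claimed.
    amended_example = {
        "taxpayer": "Business 1",
        "credits": [
            {"type": "EZ employment credit", "zone": "Example City EZ", "amount": 7000},
            {"type": "RC employment credit", "zone": "Example County RC", "amount": 5000},
        ],
    }

    totals_by_zone = {}
    for credit in amended_example["credits"]:
        totals_by_zone[credit["zone"]] = totals_by_zone.get(credit["zone"], 0) + credit["amount"]
    print("Zone-level totals (hypothetical):", totals_by_zone)

Capturing the credit type and community in this way is only one possible design; as noted above, IRS officials cited taxpayer burden and agency workload among the reasons such a change has not been a priority.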
For example, HUD requires EZs and RCs to report on the extent to which businesses are using certain tax benefits, such as the EZ or RC Employment Credit, to demonstrate progress in meeting program outputs. However, EZ and RC officials have had difficulty in obtaining tax information directly from businesses. As a result, the lack of data on the use of these benefits limits the ability of the designated communities to comply with this requirement. Also, the lack of data on these tax benefits limits the ability of EZs and RCs to use their designations to attract additional resources, which is a program expectation. For example, according to tax and community development specialists, the inability to report on the extent to which some existing tax benefits are being used limits the ability of EZs and RCs to demonstrate the effectiveness of their revitalization programs. Moreover, the lack of data on these benefits limits the ability of HUD, USDA, or others to audit or evaluate the programs. Although available data show that businesses are using some tax benefits in EZs, ECs, or RCs, we found that some businesses might face obstacles in using the tax benefits. In 1999, we reported that businesses cited several reasons for not taking advantage of the tax benefits, including not knowing about them, finding them too complicated, not qualifying for them, and not having federal tax liability. During our current audit work, research results showed and tax experts expressed similar concerns. For example, a HUD-sponsored report noted that in 2000 many businesses did not know about the tax benefits available to them. In addition, one tax expert noted that businesses must make several complicated calculations about their business activities to determine whether they satisfy the requirements for using the tax benefits. IRS officials suggested that the complexities of the tax code and changes in it over time might prevent smaller businesses from taking advantage of the EZ or RC benefits, because smaller businesses may not have access to tax professionals. Also, one tax expert noted that some businesses, such as farms with assets greater than $500,000, do not qualify for the tax benefits. Moreover, businesses can only claim the tax credits against their reported profits. Since small companies and start-up businesses may not have federal tax liabilities, they may not be able to claim the EZ or RC credits to the same degree as larger or more profitable businesses. HUD, USDA, IRS, and HHS have provided outreach for the EZ/EC and RC programs through conferences, training, and other resources. HUD and USDA have sponsored conferences to educate nominated and designated communities on a variety of subjects. HHS has provided information on its EZ/EC Web site and has participated in HUD and USDA conferences. Both IRS and HUD have made efforts to educate businesses about the tax benefits available to them through educational workshops. HUD sponsors satellite broadcasts on a semiannual basis on issues pertaining to performance, tax benefits, availability of funds, regulatory changes, and other issues. To aid in the designees’ outreach to businesses, HUD has provided communities with lists of local businesses. The agencies have also provided technical assistance through a variety of means, including Web sites, published guidance, and desk officers. 
HUD, USDA, and HHS each have Web sites dedicated to the programs that include links to resources, such as relevant audit guidance, best practices from designated communities, and online training materials. HUD also provides an EZ/EC/RC address locator on its Web site that enables taxpayers to determine whether businesses or employees are located within a zone and are therefore eligible for tax benefits. USDA worked with HUD to ensure that its rural EZs and ECs were included in this tool. In addition, HUD, IRS, USDA, and HHS have provided guidance to designated zones, community groups, and businesses about available benefits. HUD and USDA have also prepared application guides that explain the application process and benefits of designation. HUD and USDA have each published strategic planning guidebooks to help communities through the required strategic planning process, as well as guidebooks on program implementation and benchmarking. HUD and USDA also have a group of desk and field officers available to respond to community inquiries about the program.

HUD and USDA are responsible for overseeing the progress of the urban and rural EZs and ECs, and HUD is responsible for overseeing the progress of the RCs. The legislated role of HUD and USDA in both the EZ/EC and RC programs is to ensure that a community (1) does not modify the boundaries of the designated area and (2) is complying with or making progress toward the benchmarks of the EZ's or EC's strategic plan or, in the case of HUD, the RC's course of action and economic growth promotion requirements. HUD or USDA can revoke a community's designation if either of these two provisions is not met. To implement their oversight responsibility for the EZ/EC program, HUD and USDA rely on Memorandums of Agreement, benchmark tracking, and annual progress reports. The Memorandum of Agreement signed by the state and local governments and HUD or USDA states the responsibilities of the parties for compliance with federal requirements. HUD and USDA require designees to report annually on the steps they have taken in conjunction with their strategic plans. RC designees do not sign Memorandums of Agreement but do have to certify that they will take certain actions as part of the course of action and economic growth promotion requirements submitted with the application. HUD and USDA assess the progress of the EZs or ECs in implementing their strategic plans and, when necessary, send warning letters to the designated communities based on data reported in the performance management systems. In 1997, HUD issued warning letters to five EZs and ECs. These warning letters ultimately did not result in any de-designations. As of late 2003, USDA had not sent warning letters to any rural EZs or ECs.

Beginning with the Round II designations in 1998, HUD and USDA began using Internet-based performance management systems—the Performance Management System and the Benchmark Management System, respectively—to track the communities' performance. These systems allow each community to enter baseline and benchmark data on funding and results. Although designated communities update the systems constantly, an annual reporting deadline exists to provide a firm date by which a community should have all of its information up to date. HUD's Round III EZ and RC annual reporting requirements are less stringent than those of prior rounds.
Since Round III urban EZs and RCs do not receive grant funds, HUD officials do not expect Round III EZs to have as many projects as Round II EZs to include in their annual reporting. The RCs use a much simpler report template that includes sections on required goals and actions, economic growth promotion requirements, tax incentive utilization, and other accomplishments. HUD officials have also said that the use of tax benefits in the Round III EZ and RC programs makes it difficult to design and execute a way of tracking the programs. HUD is exploring the idea of developing a method for collecting data on the use of the tax benefits. This data collection effort would require approval from the Office of Management and Budget. HUD officials noted that one concern in developing a data collection procedure is that the additional paperwork burden could potentially discourage businesses from using the benefits. USDA has the same reporting requirements for Round III EZs as it does for Rounds I and II.

To assess the reliability of the performance data, field staff from HUD and USDA verify some of the community-reported data in the performance management systems. As part of its oversight role, HUD also provides field offices with a grant monitoring checklist that requires determining whether (1) the documentation matches the information in the performance management system, (2) spending is reasonable in relation to progress, (3) the program and funds serve eligible purposes, (4) the activity took place within the EZ, (5) the resident benefit data are accurate, and (6) the EZ has appropriate accounting records and procedures. USDA state field staff check the reliability of the data entered by communities, and headquarters staff check for inconsistencies and outliers. In addition, every 2 years USDA has a "Management Control Review," in which headquarters staff verify the accuracy of the state-level reviews.

In our previous reports, we found weaknesses in program oversight. In 1996, we reported that HUD and the EZs had not yet (1) described measurable outcomes for the program's key principles or (2) indicated how the outputs anticipated from one or more benchmarks would help to achieve those outcomes. A HUD official reported that the agency implemented its Internet-based performance measurement system in response to a recommendation contained in our report. In 1997, we reported that rural EZ/ECs were not consistently reporting their progress to USDA and that USDA's EZ/EC state coordinators were not providing systematic reporting on the progress of rural EZ/ECs. In response, USDA officials told us that they addressed this deficiency by adopting a nationwide community development field training program.

Both HUD's and USDA's IGs have raised concerns about the accuracy of the performance management systems and the adequacy of EZ/EC program oversight. For example, in a 2003 audit, the HUD IG found that some Round II activities were benefiting people who did not live in the zones. The IG also found that 76 percent of the activities it selected from the performance management system for the audit contained inaccurate data. HUD noted in response that it had developed revised monitoring procedures and that it was developing regulations to clarify the requirements for resident benefits. During audits it conducted in 1999 and 2001, USDA's IG found that data pertaining to a Round I rural EZ and a Round II rural EC were not accurate or had not been updated as required.
In response to the 2001 audit, USDA officials stated that the agency would initiate immediate actions to increase training for community personnel in the areas of reporting, using the performance management system, maintaining proper accountability, and preparing annual budgetary reports. In another 2001 audit, which focused on one EZ, the USDA IG found that USDA staff had not reviewed the performance and progress of any EZ-funded projects and provided little oversight of how the EZ used program funds. In addition, the audit noted that one funded project was located outside of the EZ and did not serve the required number of EZ residents. In response to the audit, USDA stated that the EZ tracking system would be amended to ensure that residents within the EZ were benefiting from EZ services but that the EZ was authorized to waive the resident benefit requirement if the project was deemed important to the community. USDA officials informed us that limited resources prevented the agency from performing a 100-percent verification of data contained in the performance management system.

The few evaluations that have systematically collected and analyzed data on the effectiveness of the EZ/EC program involved a variety of research methods and reported results that varied depending upon the aspect of the program studied. In some cases, the methods researchers used depended on the data available to them. The most comprehensive of these evaluations—the HUD Interim Assessment—found, among other things, that employment of Round I EZ residents had increased from 1995 to 2000, that larger businesses were more likely to use tax benefits than smaller businesses, and that resident participation in EZ or EC governance had been uneven. As with all evaluations of community development programs, these evaluations were subject to some limitations. In particular, the researchers faced challenges demonstrating what would have happened in the communities in the absence of the program.

Although 10 years have passed since the first round of communities was designated, we found only 11 evaluations that systematically collected and analyzed data on EZ/EC program effectiveness. We found that none of these evaluations had assessed the effect of the EZ/EC program on poverty, one had assessed the effects of the EZ/EC program on resident employment, and three had assessed the effects of the EZ/EC program on aspects of economic growth in the distressed communities. Further, most of the evaluations examined the first few years of implementation by Round I designees. We found only one evaluation that assessed the effectiveness of Round II EZ/EC designees and no evaluations of the Round III EZs. In addition, we found that nine evaluations had assessed the implementation of the EZ/EC program's community participation and governance processes. Only the HUD Interim Assessment attempted to assess on a national scale the effect of the early stages of Round I EZ/EC implementation on unemployment and aspects of economic growth. Although USDA, IRS, and HHS each published studies of some aspect of the Round I EZ/EC designees, we did not find that these agencies had sponsored or conducted a comparable evaluation. A summary of the 11 evaluations we reviewed, listing the purpose and scope, primary research methods and data sources, major findings, and major limitations, appears in appendix VI.
Quantitative research methods were generally used to assess the effectiveness of the EZ/EC program in increasing employment and promoting economic growth, while qualitative research methods were generally used to determine the program's effectiveness in incorporating community participation and developing governance structures. Statistical analyses of survey data and existing data (e.g., census data) were the most common research methods used to assess the effect of the EZ/EC program on employment and economic growth. Document reviews, interviews, and comparative analyses were the methods most frequently used to examine site governance structures and community participation. When multiple program objectives were evaluated, researchers generally used both quantitative and qualitative methods. For example, the HUD Interim Assessment used multiple methods and analytical approaches, such as regression analysis, surveys, and document reviews, to assess the effectiveness of the federal EZ/EC program in the first 5 years after the first round of EZ/EC designation. In some instances, neither qualitative nor quantitative data were available, requiring researchers to collect original data. For example, because IRS data on tax benefits cannot be reliably linked to individual designated communities, some researchers were required to collect data on tax usage through qualitative methods. According to a HUD official, the HUD Interim Assessment relied on commercial data and surveys to collect information on the use of the EZ Employment Credit, because existing IRS data could not be attributed to individual EZs. He added that without the ability to attribute the credits to particular EZs, researchers cannot establish the connection between the businesses' use of the employment credit and observable changes in the EZ communities. Some evaluations, such as the HUD Interim Assessment and our 1998 report, also used surveys of business owners to assess their attitudes toward the tax benefits and to gauge the potential influence of the tax benefits on businesses. To collect qualitative information on the strategic planning process or community participation, researchers reviewed documents and conducted interviews. Lastly, several evaluations reported results from agency performance data on the EZ/EC program; however, both the HUD IG and USDA IG reported concerns about the reliability of these data.

The HUD Interim Assessment found that resident employment within five of the six Round I EZs had generally increased between 1995 and 2000. Researchers also found that resident employment in four of the six EZs had grown faster than in similar neighborhoods. In addition, these researchers found that EZ residents owned an increasing number of the EZ businesses and that these businesses were more likely to hire other EZ residents. Several evaluations reported factors that affected economic growth. For example, the HUD Interim Assessment found that a variety of factors influenced the success of businesses (one measure of economic growth) in a designated community, with the central location of zones in their metropolitan areas cited as a positive influence and crime and poor public safety cited as the worst problems. In addition, GAO and the HUD Interim Assessment found that larger businesses were more likely to use available tax benefits than smaller businesses.
Some evaluations concluded that citizen participation and influence were greatest during the planning phase of the EZ/EC initiative and tapered off as the initiative moved into the implementation phase. Researchers suggested that the decline in citizen participation might have occurred for several reasons, including a lack of federal supervision during implementation, the difficulty of implementing projects in reasonable time frames, and the technical nature of implementation activities, such as benchmarking reports. Two evaluations concluded that citizen participation in the EZ/EC program was higher than participation in similar types of federal initiatives. However, the HUD Interim Assessment found that local attention to resident participation in zone governance had been uneven.

The findings of these evaluations provide valuable descriptions of the early progress of Round I EZ/ECs. However, because of the difficulty of proving what would have happened in the absence of the program, these evaluations cannot be used to conclude that the program actually caused the observed changes. For example, the evaluations found that resident employment levels had increased, but it is possible that factors other than EZ/EC designation and the program benefits influenced employment levels. Although the HUD Interim Assessment attempted to control for these other factors by comparing EZ/EC employment to employment in both adjacent and nonadjacent comparison neighborhoods, the authors could not say conclusively what the employment levels in the zones might have been if these areas had not been designated as EZs or ECs. If, for example, the communities selected for the EZ/EC program were those with the most potential for development from among the eligible areas within a city, it is possible that these analyses overstated the program effects.

Many evaluations were unable to make generalizations about all EZ/ECs because their conclusions were limited to the specific communities that they studied. For example, the large number of EC sites makes it relatively difficult to fully investigate the effectiveness of the program in all sites. Authors of the HUD Interim Assessment and others have noted that creating an evaluation of the EZ/EC program and its four principles is difficult, because the program is based on the idea that effective community revitalization occurs when the strategy is tailored to the local site. This type of program requires an evaluation design, such as a case study, that is able to capture changes that result from each site's specific strategic plan, but generalizing the results of such evaluations to a larger population may not be possible. For example, the diverse nature of the Round I EZ/ECs—in which each EZ/EC may differ in terms of objective, size of the targeted areas, type of designation, governance structure, projects used, and strategies for implementation—made it difficult for researchers to draw general conclusions about the early stages of Round I implementation.

HUD, USDA, HHS, and IRS have implemented the EZ/EC and RC programs. As part of the implementation, HUD and USDA expect designated communities to report on how program benefits are being used, both directly and as a method of attracting additional resources.
However, because of the limited amount of data on the use of EZ and RC tax benefits—which the Joint Committee on Taxation estimates will reduce federal tax revenue by about $11 billion between 2001 and 2010—EZs and RCs cannot reliably report on their use by local businesses. In addition, Congress has requested that we audit and evaluate the programs. Acquiring additional data that can attribute the use of the tax benefits to particular EZs and RCs would facilitate an audit of these programs. Also, additional tax data would be necessary to evaluate certain aspects of the programs, such as the use of the tax benefits. While it is difficult to isolate the effects of these programs on improving conditions in distressed communities, without the ability to attribute the tax benefits to particular EZs and RCs, researchers cannot begin to establish the connection between the businesses' use of the tax benefits and observable changes in the communities. Such data could potentially come from a variety of sources, including IRS forms, surveys of businesses, or commercial databases.

To facilitate the administration, audit, and evaluation of the EZ/EC and RC programs, we recommend that HUD, USDA, and IRS collaborate to (1) identify the data needed to assess the use of the tax benefits and the various means of collecting such data; (2) determine the cost-effectiveness of collecting these data, including the potential impact on taxpayers and other program participants; (3) document the findings of their analysis; and, if necessary, (4) seek the authority to collect the data, if a cost-effective means is available.

We provided copies of a draft of this report for review and comment to HUD, IRS, USDA, and HHS. These agencies' written replies are reproduced in appendixes VII through X, respectively. HUD and IRS agreed with the report's descriptions of the EZ/EC and RC programs as well as the recommendation. HUD generally agreed with our descriptions of the features of the EZ/EC and RC programs and the status of their implementation at the national level. IRS stated that the data it provided us on the tax benefits available to businesses operating in EZs, ECs, and RCs were accurately reflected in our report. USDA raised several concerns about our discussion of the EZ/EC program. In particular, the Acting Under Secretary for Rural Development stated that our focus on grants and tax incentives neglected the broader purpose of empowering impoverished communities to plan for and achieve their own goals through "holistic means." HHS's Administration for Children and Families had no comments. HUD, IRS, and USDA also provided technical comments, which we have incorporated into this report where appropriate.

According to HUD's Deputy Assistant Secretary for Economic Development, HUD agrees with the report's recommendation that HUD, USDA, and IRS collaborate to examine options for collecting data on the use of tax benefits available to businesses operating in EZs, ECs, and RCs. He noted that HUD concurs with the report's findings, including the finding that data on the use of the tax benefits are limited at this time, and added that HUD is currently developing a strategy to use available federal agency resources to develop the data needed to assess the use of the EZ/EC and RC tax incentives. In addition, he noted that IRS had assisted HUD by answering technical questions and participating in HUD training conferences and workshops.
He stated that HUD would continue its efforts to work with IRS to gather the data needed to measure businesses' use of tax benefits in designated communities. In response to findings from its Office of Inspector General, HUD's Deputy Assistant Secretary indicated that HUD was taking action to improve program oversight, including developing guidance on eligible uses of EZ funds and instructions for reporting on the progress of EZ activities.

The Commissioner of Internal Revenue agreed with our recommendation to work with HUD and USDA to identify the data needed to assess the use of the tax benefits and the costs of collecting that data.

The Acting Under Secretary of Rural Development for USDA raised a number of concerns with our discussion of the EZ/EC program. While he indicated that USDA welcomed the opportunity to collaborate with IRS and HUD, he raised concerns that responding to the report's recommendation could stretch the agency's already scarce resources to produce data of possibly marginal utility and potentially invade taxpayers' privacy. We believe that the lack of data on the usage of EZ/EC tax benefits limits the ability of USDA and HUD to administer the EZ/EC program and limits the ability of these agencies and others to evaluate such programs. We stated in our report that data on the use of the tax benefits could come from a variety of sources, such as IRS forms, surveys of businesses, or commercial databases. As a result, collecting such information may not necessarily have a significant impact on USDA's resources. Moreover, the recommendation calls for HUD, USDA, and IRS to determine the cost-effectiveness of collecting these data. Any collection of data on the use of the tax benefits should be done in accordance with IRS rules for protecting taxpayer privacy.

In his letter, the Acting Under Secretary also stated that our report addresses only two program tools (grants and tax incentives) and, by doing so, neglects the broader purpose behind the program of "empowering impoverished communities to plan for and achieve their own goals through holistic means." To the contrary, the report describes a variety of program features, including, but not limited to, grant and tax benefits. In particular, the report discusses the programs' stated goals and objectives and the key principles nominated communities were required to include in their strategic plans, including the need for community-based partnerships that involve all segments of the community. However, because the report is largely descriptive, it does not examine the extent to which the objectives and purposes of the EZ/EC program have been met.

In addition, the USDA Acting Under Secretary indicated that our statement that the lack of data on tax benefits is a limiting factor in communities' ability to attract additional resources presumes that these data are the only tool that designated communities have to attract additional resources. We agree with the Acting Under Secretary's statement that such data are not the only means that communities can use to attract additional resources. However, representatives of EZs and RCs we spoke with stated that their ability to attract additional resources was limited by not being able to report on the extent to which the programs' tax benefits were being used in their communities.

The Acting Under Secretary also raised concerns about our description of the agency's online performance management system.
Our purpose in discussing the system was to describe the efforts USDA has made to oversee the EZ/EC program. We described the purpose of the system and the steps USDA takes to verify data contained in it. We also reported the findings of the USDA IG audits concerning the system and the actions that USDA officials said the agency had taken in response to the audits. We did not use data from the system in our report, nor did we evaluate the reliability of the data contained in it.

Finally, the Acting Under Secretary noted that our report did not include all federal programs that provided benefits to communities designated in the EZ/EC program, such as the AmeriCorps program benefits that Round I rural EZs and ECs received in the first two years of the program. As we reported in appendix V, we found that several federal programs offered preferences to applicants located in EZs, ECs, and RCs. However, the extent to which these applicants have taken advantage of these preferences is not known.

We are sending copies of this report to the Secretary of Housing and Urban Development, the Secretary of the Treasury, the Commissioner of the Internal Revenue Service, the Secretary of Agriculture, the Secretary of Health and Human Services, and other interested Members of Congress. We will make copies of this report available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please call me at (202) 512-8678 if you or your staff have any questions about this report. Key contributors to this report are listed in appendix XI.

The objectives of this study were to describe (1) the features of the Empowerment Zone and Enterprise Community (EZ/EC) program and Renewal Community (RC) program; (2) the extent to which the programs have been implemented; and (3) the methods that have been used and the results that have been found in evaluations of the programs' effectiveness, especially on poverty, unemployment, and economic growth in the participating communities. While our descriptions included analyses of the data that have been and could be used to address these three topics, we did not evaluate the effectiveness of the programs or their implementation in this report. We addressed the activities of federal programs in both rural and urban areas in all three rounds of the EZ/EC program and the RC program.

To describe the features of each round of the program, we relied on a review of congressional legislation related to the EZ/EC and RC programs; regulations issued by the Department of Housing and Urban Development (HUD), the U.S. Department of Agriculture (USDA), and the Department of Health and Human Services (HHS); and publications from HUD, USDA, the Internal Revenue Service (IRS), and HHS. To supplement our understanding of these documents, we interviewed a number of program officials at these agencies. We put the available program data and descriptions into data matrices and spreadsheets in order to summarize, compare, and contrast the features of the program in each of the rounds.

To describe the implementation of the programs at the national level, we relied on reviews of legislation, regulations, program publications, Web sites, and existing Inspector General (IG) studies and interviews with program officials and other area experts. We analyzed 1990 census data to report on some characteristics of the designated communities.
We also obtained and analyzed data from HUD, USDA, HHS, and IRS on the utilization of grants, tax benefits, and loan guarantees. To assess the data's reliability, we searched for missing data and values outside an expected range and assessed the relationship of data elements to each other. As appropriate, we analyzed a sample of some financial data to confirm their reliability by comparing the agencies' financial data with original source documents. We interviewed knowledgeable agency officials from IGs' offices, divisions of payment management, and program offices regarding the work they had done to assess the data's integrity and the results of that work. We determined that the data were sufficiently reliable for use in this report. This report does not include data from HUD's or USDA's performance management systems.

To determine the average characteristics of designated communities, we used data from the 1990 census. Because census estimates of poverty and unemployment rates for EZs, ECs, and RCs are based on probability samples, each estimate reflects just one of a large number of samples that could have been drawn. Since each sample could have produced different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. For example, the estimated poverty rate is 45.23 percent for urban Round I EZ communities, and the 95 percent confidence interval for this estimate ranges from 44.86 to 45.59 percent. This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All poverty and unemployment percentage estimates have 95 percent confidence intervals of plus or minus 1.2 percentage points or less, as summarized in table 11.

In addition to sampling errors, census data (both sampled and 100 percent data) are subject to nonsampling errors, which may occur during the operations used to collect and process census data. Examples of nonsampling errors are not enumerating every housing unit or person in the sample, failing to obtain all required information from a respondent, obtaining incorrect information, and recording information incorrectly. Operations such as field review of enumerators' work, clerical handling of questionnaires, and electronic processing of questionnaires also may introduce nonsampling errors in the data. The Census Bureau discusses in detail the sources of nonsampling errors and its attempts to control them.

To address businesses' use of the EZ Employment Credit, we obtained information from IRS's Statistics of Income databases of corporate and individual tax returns for 1995 through 2001. Although some data were available for 1994, we excluded them from our analysis because the sample for that year was based on a small number of returns and yielded unreliable confidence intervals. All reported individual return statistics and the corporate return statistics for years 1995 to 1997 and 2001 are estimates based on probability samples. Using IRS's Statistics of Income Division sampling weights, we estimated confidence intervals for the estimated number and amount of total credits. These estimates and their confidence intervals are summarized in table 12.
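To illustrate the arithmetic behind the confidence intervals described above, the sketch below computes a 95 percent interval for an estimated rate using the standard normal approximation for a proportion. It is a simplified illustration only: the sample size shown is hypothetical, and the census and Statistics of Income estimates cited in this report come from complex sample designs with weights, so this calculation does not reproduce the agencies' actual estimation methods.

# A minimal sketch in Python, assuming simple random sampling: a 95 percent
# confidence interval for an estimated proportion via the normal approximation.
# The sample size is hypothetical, chosen only to show how an interval of
# roughly the width reported above can arise.
import math

def confidence_interval_95(estimate, sample_size):
    # Standard error of a proportion under simple random sampling.
    standard_error = math.sqrt(estimate * (1 - estimate) / sample_size)
    margin = 1.96 * standard_error  # 1.96 is the z-value for 95 percent confidence
    return estimate - margin, estimate + margin

# Example: an estimated poverty rate of 45.23 percent from a hypothetical sample
# of 70,000 residents yields an interval of about 44.86 to 45.60 percent.
low, high = confidence_interval_95(0.4523, 70000)
print(f"95% CI: {low:.2%} to {high:.2%}")

As the formula shows, the width of the interval is driven by the sample size as well as the estimate itself, which is why larger samples produce the narrower intervals summarized in tables 11 and 12.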
To identify the methods that have been used and the results that have been found in assessments of the effectiveness of the EZ/EC and RC programs, we conducted searches of the literature and interviewed agency personnel and community development experts. We reviewed more than 1,100 article and report abstracts to identify outcomes-focused evaluations of the federal EZ/EC and RC programs. We included evaluations that met the following criteria: (1) they focused on the federal EZ/EC or RC program; (2) they were research evaluations that systematically collected and analyzed empirical data (as opposed to reports of best practices/lessons learned or policy discussions of the program); and either (3) they evaluated the program in terms of its effect on poverty, unemployment, and/or economic growth consistent with our congressional mandate; or (4) they evaluated the program's effectiveness in any of the other program goal areas. The 11 evaluations that remained after this screening, on which we focused our work, either were program evaluations or described the implementation processes that affected how the program operated.

We conducted our work in Washington, D.C., from April 2003 through February 2004 in accordance with generally accepted government auditing standards.

To date, Congress has authorized the designation of three rounds of EZs, two rounds of ECs, and one round of RCs. In addition, at the same time that HUD and USDA announced the Round I designations, HUD created two additional designations—Supplemental Empowerment Zones and Enhanced Enterprise Communities. See table 13 for a complete list of designated communities.

The authorizing legislation for the EZ/EC and RC programs provided participants with grant and/or tax benefits. A prominent feature of the EZ/EC program—the Social Services Block Grant—was offered only to Round I EZ and EC participants. However, Round II EZs and ECs received grant funds from HUD's and USDA's annual appropriations. In addition, the number of tax benefits increased over the course of the three rounds of the EZ/EC program, and tax benefits were the main benefit available to RCs. (See table 14.) In addition to federal tax benefits that have been specifically available for businesses in EZs, ECs, and RCs, taxpayers who engage in business in these communities are typically eligible for other federal tax benefits not specifically designed for such communities. These benefits include tax credits and deductions intended to encourage taxpayers to invest in distressed communities, employ low-income people, and provide housing for low-income people. (Table 15 provides a list of other federal tax benefits intended to assist distressed communities.)

In addition to grants and federal tax benefits, EZs, ECs, and RCs were eligible to seek waivers from federal programmatic, statutory, or regulatory requirements to facilitate their revitalization efforts. However, nominees made limited use of the waiver initiative, and although waivers are still available to designees, the initiative is no longer considered a primary feature. Also, communities designated by HUD as Supplemental Empowerment Zones or Enhanced Enterprise Communities were provided with Section 108 Loan Guarantees, which secure loans that finance economic development and revitalization projects. Finally, in some cases, EZs, ECs, and RCs received preference for assistance under a variety of federal programs. However, the extent to which communities have received these preferences is unknown.
In Round I of the EZ/EC program, both HUD and USDA guidance invited nominees to request, as an addendum to their applications, waivers from federal programmatic, statutory, or regulatory requirements to facilitate their ability to conduct revitalization efforts. This feature was not established in the EZ/EC program legislation or regulations; rather, it was an administrative initiative. In response to a request, HUD and USDA would offer to work with the communities to seek statutory authority for broader flexibility in federal programs. For example, nominees could ask for exemptions, increased flexibilities, or changes in eligibility requirements for other federal programs.

Available data indicate that Round I designees made limited use of the available waivers from programmatic and regulatory requirements to facilitate their ability to conduct revitalization efforts. Program officials said that many of the nominees requested waivers of statutory requirements, which HUD and USDA could not provide. The HUD Interim Assessment examined 244 waiver requests made by 18 urban Round I EZ/ECs. It found that the most commonly requested waivers included exemption from Davis-Bacon Act requirements; flexibility in using block grant funds; and changes in eligibility for federal programs, such as Aid to Families with Dependent Children, Food Stamps, and Medicaid. The report also found that federal agencies fully or partially approved 5 percent of the requests and denied 33 percent of the requests, either because the agencies lacked the authority to grant them or for other reasons. In addition, 21 percent of the waiver requests asked for flexibilities that already existed, and the remaining requests either required more information or the agency reached some other disposition. Although USDA officials did not quantify the disposition of waiver requests through an independent study, a USDA official estimated that fewer than 10 waiver requests had been approved annually.

Because of the Round I experience, both HUD and USDA de-emphasized the waiver initiative. The application materials for HUD Rounds II and III and for the RCs make no mention of encouraging requests for waivers. The USDA Round II application materials retain the waiver request initiative, but the Round III application makes no mention of the opportunity for nominees to request waivers. HUD and USDA officials told us that designees could still request waivers, but that there was no longer a formal initiative like the one used in Round I.

After HUD designated the Supplemental Empowerment Zones and Enhanced Enterprise Communities in Round I, the agency provided them with a total of $653 million in Section 108 Loan Guarantees (see table 16). Like the Economic Development Initiative Grants also offered to these designees, these loan guarantees are to provide security for loans that finance economic development and revitalization projects. A Section 108 loan guarantee allows local governments to obtain loans for economic development projects that (1) benefit low- and moderate-income families; (2) prevent or eliminate slums or blight; or (3) meet other urgent community development needs. These loans are secured by a community's current and future Community Development Block Grant allocations for up to five years and carry the full faith and credit of the U.S. government in the event of a default. This benefit allows a local government entity to reduce its borrowing costs.
The local government entities can also provide additional security for their Section 108 loan repayments or enhance the feasibility of certain projects by paying directly for certain project expenses with their Economic Development Initiative grants. As of September 30, 2003, the eight local government entities that received Section 108 Loan Guarantees as part of the Supplemental Empowerment Zone and Enhanced Enterprise Community designations had used about 36 percent of the loan guarantees that HUD awarded (see table 17).

HUD and USDA expect EZs, ECs, and RCs to use their designations to attract additional investment. Businesses and community organizations in these communities can seek grants and loans from for-profit corporations, nonprofit entities, foundations, state and local governments, and other federal agencies. For example, a bank might help capitalize a community lending institution or a private foundation might contribute to a recreational facility for youths. In some cases, organizations or individuals operating or residing in EZs, ECs, or RCs receive competitive priority for federal grants, loans, or technical assistance based solely on their EZ, EC, or RC designations. HUD and USDA officials said that during Round I of the EZ/EC program, the Community Empowerment Board encouraged federal agencies to provide preferences to applicants from EZs or ECs in competition for federal funds. For example, communities designated as federal EZs, ECs, or RCs could receive a competitive preference for the Environmental Protection Agency's 2003 National Brownfields Assessment, Revolving Loan Fund, and Cleanup Grants. In addition, in 2003, the U.S. Department of Education's Teacher Quality Enhancement Grants program provided a competitive priority to applicants who proposed to carry out activities in EZs or ECs. In addition, Congress has regularly earmarked federal funds, such as grants for low-income housing repair or direct loans for rural development projects, for projects located in EZs and ECs.

The extent to which EZs, ECs, and RCs have taken advantage of competitive preferences is not known. Although HUD and USDA maintain data in their performance management systems on EZs', ECs', and RCs' use of other sources of funding, these systems do not differentiate whether the funding source included a preference for applicants located in EZs, ECs, or RCs. However, officials from HUD and USDA told us that their perception was that many federal agencies that provided competitive preferences to applicants located in EZ/ECs in Round I no longer offer these preferences. For example, one HUD official told us that the Department of Justice's Weed and Seed program, which provides assistance to communities for reducing crime and drug abuse and bringing in human services, no longer offers bonus points to applicants located in EZs, ECs, or RCs. One HUD official noted that the number of preferences offered might have decreased because the Community Empowerment Board disbanded.
[Appendix VI: Summary of the 11 evaluations reviewed, listing for each evaluation its purpose and scope, primary research methods and data sources, major findings, and major limitations.]
In addition to those individuals named above, Jonathan Altshul, Susan Baker, Daniel Blair, Mark Braza, Emily Chalmers, Patricia Farrell Donahue, David Dornisch, DuEwa Kamara, Terence Lam, Alison Martin, Grant Mallie, John McGrail, John Mingus, Marc Molino, Gretchen Maier Pattison, Minette Richardson, and Michael Simon made key contributions to this report.

Aigner, Stephen M., Cornelia B. Flora, and Juan M. Hernandez. "The Premise and Promise of Citizenship and Civil Society for Renewing Democracies and Empowering Sustainable Communities." Sociological Inquiry 71, no. 4 (2001): 493-507.

Chaskin, Robert J., and Clark M. Peters. Governance in Empowerment Zone Communities: A Preliminary Examination of Governance in Fifteen Empowerment Zone Communities. (Chicago, IL: University of Chicago, Chapin Hall Center for Children, 1997).

Community Partnership Center. Rural Empowerment Zones/Enterprise Communities: Lessons from the Learning Initiative. Findings and Recommendations of the Community Partnership Center EZ/EC Learning Initiative. (Knoxville, TN: Community Partnership Center, University of Tennessee, 1998).

Gittell, Marilyn, and others. "Expanding Civic Opportunity: Urban Empowerment Zones." Urban Affairs Review 33, no. 4 (1998): 530-58.

Gittell, Marilyn, Kathe Newman, and Francois Pierre-Louis. Empowerment Zones: An Opportunity Missed: A Six City Comparative Study. (New York, NY: The City University of New York, The Howard Samuels State Management and Policy Center, 2001).

Hebert, Scott, and others. Interim Assessment of the Empowerment Zones and Enterprise Communities (EZ/EC) Program: A Progress Report, prepared for the U.S. Department of Housing and Urban Development. (Washington, D.C.: November 2001).

Nathan, Richard P., and others. Investing in a New Future: Special Report on Community Development Financing in Selected Empowerment Zone/Enterprise Community Sites. (Albany, NY: The Nelson A. Rockefeller Institute of Government, 1997).

——. Empowerment Zone Initiative: Building a Community Plan for Strategic Change: Findings from the First Round of Assessment. (Albany, NY: State University of New York, The Nelson A. Rockefeller Institute of Government, 1997).

Reid, J. Norman, and Karen Savoie Murray. "Empowering Rural Communities: A Perspective at the Five-Year Point." Paper presented at the annual meeting of the Rural Sociological Society, Washington, D.C., August 2000.

U.S. General Accounting Office. Community Development: Businesses' Use of Empowerment Zone Tax Incentives. GAO/RCED-99-253 (Washington, D.C.: September 30, 1999).

Wang, Fahui, and Joseph A. Van Loo. "Citizen Participation in the North Delta Mississippi Community Development Block Grants, Empowerment Zones and Enterprise Communities." Planning Practice and Research 13, no. 4 (1998): 443-51.

Community Development: Businesses' Use of Empowerment Zone Tax Incentives. GAO/RCED-99-253 (Washington, D.C.: September 1999).

Community Development: Progress on Economic Development Activities Varies Among the Empowerment Zones. GAO/RCED-99-29 (Washington, D.C.: November 1998).
Community Development: Information on the Use of Empowerment Zone and Enterprise Community Tax Incentives. GAO/RCED-98-203 (Washington, D.C.: June 1998).

Community Development: Identification of Economically Distressed Areas. GAO/RCED-98-158R (Washington, D.C.: May 1998).

Economic Development Activities: Overview of Eight Federal Programs. GAO/RCED-97-193 (Washington, D.C.: August 1997).

Rural Development: New Approach to Empowering Communities Needs Refinement. GAO/RCED-97-75 (Washington, D.C.: March 1997).

Community Development: Status of Urban Empowerment Zones. GAO/RCED-97-21 (Washington, D.C.: December 1996).

Congress established the Empowerment Zone and Enterprise Community (EZ/EC) program in 1993 and the Renewal Community (RC) program in 2000 to provide assistance to the nation's distressed communities. To date, Congress has authorized three rounds of EZs, two rounds of ECs, and one round of RCs. The Community Renewal Tax Relief Act of 2000 mandated that GAO audit and report in 2004, 2007, and 2010 on the EZ/EC and RC programs and their effect on poverty, unemployment, and economic growth. This report describes (1) the features of the EZ/EC and RC programs, (2) the extent to which the programs have been implemented, and (3) the methods used and results found in evaluations of their effectiveness.

Both the EZ/EC and RC programs were designed to improve conditions in distressed American communities; however, the features of the programs have changed over time. Round I and II EZs and ECs received different combinations of grant funding and tax benefits, while Round III EZs and RCs received mainly tax benefits. To implement the programs, federal agencies have, among other things, designated participating communities and overseen the provision of program benefits. Since 1994, the Department of Housing and Urban Development (HUD) and the Department of Agriculture (USDA) have designated a total of 41 EZs and 115 ECs, and HUD has designated 40 RCs. Available data show that Round I and II EZs and ECs are continuing to access their grant funds, and Internal Revenue Service (IRS) data show that businesses are claiming some tax benefits. However, IRS does not collect data on other tax benefits and cannot always identify the communities in which they were used. Also, efforts by HUD to obtain these data by survey were limited to Round I designees, and EZ and RC officials have had difficulty obtaining such information directly from businesses. The lack of tax benefit data limits the ability of HUD and USDA to administer and evaluate the programs.

The few evaluations that systematically collected and analyzed data on EZ/EC program effectiveness used a variety of research methods to study different aspects of the program. The most comprehensive of these studies—the HUD Interim Assessment—found that employment of Round I EZ residents had increased from 1995 to 2000, that larger businesses were more likely to use tax benefits than smaller businesses, and that resident participation in EZ or EC governance has been uneven, among other things.
The Robert T. Stafford Disaster Relief and Emergency Assistance Act of 1988 (Stafford Act) generally defines the federal government's role during the response and recovery after a major disaster. It establishes the programs and processes through which the federal government provides disaster assistance to state and local governments, tribes, certain nonprofit organizations, and individuals.

FEMA has steady-state and emergency organizational structures. Under its steady-state structure, when FEMA is not actively responding to a disaster, FEMA employees conduct activities that "strengthen the Homeland Security Enterprise" and perform functions that align with the Quadrennial Homeland Security Review (QHSR) goals, which include strengthening capacity to withstand hazards and improving preparedness at all levels and segments of society. However, when a disaster declaration is requested by a Governor and approved by the President, FEMA executes its emergency organizational structure as discussed below.

The Stafford Act establishes the process for states to request a presidential major disaster declaration. Once the President has declared a major disaster, FEMA may provide disaster assistance pursuant to the authorities in the Stafford Act. In order to request that the President issue a major disaster declaration, a Governor submits a declaration request certifying that the damage requires resources beyond the state's capability. The request must also include an estimate of the amount and severity of damage and losses and preliminary estimates of the types and amount of disaster assistance needed, among other things. Once a disaster is declared, FEMA provides assistance primarily through one or more of the following three assistance programs: Individual Assistance, Public Assistance, and Hazard Mitigation. Not all programs are activated for every disaster. The determination to activate a program is based on the needs identified during the assessment conducted as part of the declaration request.

The Disaster Relief Fund is the major source of federal disaster recovery assistance when a disaster is declared. The Disaster Relief Fund is funded through no-year appropriations, which allow FEMA to direct, coordinate, manage, and fund response and recovery efforts associated with domestic major disasters and emergencies.

Under the Stafford Act, FEMA has the authority to augment its permanent full-time staff with temporary personnel when needed, without regard to the appointment and compensation provisions governing Title 5 appointments of permanent full-time staff. Permanent full-time employees manage FEMA's day-to-day activities, and a portion of these employees are expected to deploy when a disaster is declared. The DAE is one type of temporary, on-call employee. See appendix II for a detailed description of categories of disaster workforce employees. DAEs comprise the largest portion of the disaster workforce employed under FEMA's emergency organizational structure. As of February 2012, there were 9,981 DAEs. DAEs are activated to perform disaster activities directly related to specific disasters, emergencies, projects, or activities of a non-continuous nature. DAEs serve two-year appointments, are paid only when they are deployed (including per diem), and do not receive any federal benefits with the exception of sick leave, holiday pay, and administrative leave. They are assigned to one of 23 functional disaster cadres.
For example, the Individual Assistance cadre provides referrals and guides individuals through the FEMA assistance process, while the Hazard Mitigation cadre assists in educating the public and local governments on methods to reduce the risk of loss of property and life from a future disaster. See appendix III for a description of each cadre and its primary duties. FEMA’s organizational structure is decentralized and comprises headquarters and ten regional offices. FEMA’s Administrator, in accordance with the Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act), appoints a Regional Administrator to head each regional office. The regional offices—in coordination with state, local, and tribal governments, and other nongovernmental organizations—provide emergency management within their respective geographical areas. See appendix IV for FEMA’s organizational chart and figure 1 for a map of FEMA’s regions. FEMA has taken steps to enhance its management of the program, but it has not developed or updated policies and procedures that align with the day-to-day management of the DAE program. FEMA has not provided guidance for how regional cadre managers should undertake their duties in the management of DAEs. Furthermore, FEMA could better monitor both its regions’ implementation of DAE policies and DAEs’ implementation of FEMA’s disaster policies and procedures in order to reduce the risk of inconsistent application. In addition, FEMA does not have policies and procedures for how it communicates with DAEs when they are not deployed. FEMA has not yet developed guidance for cadre managers that outlines how they should manage DAEs in their cadre, such as guidance for understanding and handling reserve pay and benefits, the deployment process, training procedures, and evaluation techniques. Specifically, 14 of 16 regional cadre managers we interviewed said that they have not seen or are not aware of documented guidance from headquarters for their duties as cadre manager, such as hiring, training, and developing DAEs, and 10 of 16 stated that having written guidance would be beneficial to their job. For example, one regional cadre manager said that there are inconsistencies across regions in how cadre managers hire, train, and utilize their DAEs. Another cadre manager added that inconsistent hiring processes affect morale among DAEs. Instructions on how to manage are handed down from experienced colleagues but are not documented for consistent use, according to another cadre manager. FEMA stated in 1999 that it planned to establish guidelines and requirements for cadre management functions and intended this guidance to be applied consistently at headquarters and in the regions, but this effort was not completed. In 2008, FEMA officials drafted a cadre manager’s handbook; however, the handbook was not finalized or officially adopted across FEMA. The director of IWMO stated that he did not know why the cadre manager’s handbook had not been completed since 2008. In February 2012, during the course of our review, FEMA began its Disaster Workforce Transformation. According to FEMA, this effort includes creating a National Disaster Reservist Program intended to overhaul the current DAE program and examine issues such as cadre management.
Further, IWMO officials stated that in fiscal year 2012 they intend to develop a new cadre management handbook, revise FEMA DAE policy, conduct regularly scheduled meetings and conference calls with cadre managers, and conduct a national conference designed to educate cadre managers on their roles. However, FEMA does not have time frames or milestones for completing and disseminating cadre manager guidance as part of its Disaster Workforce Transformation and related activities. In the absence of cadre manager guidance, IWMO officials stated that FEMA Instruction 8600.1, issued in 1991, is the best source for information and guidance on the roles and responsibilities of cadre managers. The document outlines DAE policy for recruitment and hiring, reappointment, appraisals, and benefit eligibility. However, many of its sections are obsolete or inoperative. For example, FEMA Instruction 8600.1 states that office directors are responsible for the recruitment, selection, training, use, and management of their DAE cadres. According to FEMA, the regional cadre managers currently have these responsibilities; however, the 8600.1 policy has not been updated to reflect this change in responsibility. In addition, 9 of 16 regional cadre managers we interviewed stated that FEMA Instruction 8600.1 was either outdated, in need of revision, or not applied consistently across the organization. One regional cadre manager said that he would like anything from headquarters with respect to guidance, but all that he has seen is FEMA Instruction 8600.1, which is outdated. This manager added that it was unclear whether any steps have been taken to ensure that FEMA Instruction 8600.1 is applied consistently across regions. Another regional manager said that when they have to give new hires a copy of FEMA Instruction 8600.1, they amend the document to reflect recent policy changes. Of the remaining cadre managers we interviewed, 2 said that they were not sure whether FEMA Instruction 8600.1 was applied consistently across regions, and 5 did not mention the issue. A 2010 DHS OIG report recommended that FEMA review and update key DAE program benefit policies, procedures, and guidance to eliminate conflicts and inconsistencies between interim policies and permanent overall guidance. In response to the IG’s report, FEMA officials stated that FEMA Instruction 8600.1 was under revision. In March 2012, during the course of our review, FEMA officials stated that the revision of FEMA Instruction 8600.1 had been placed on hold pending the results of FEMA’s fiscal year 2012 workforce transformation initiative, to ensure that all issues that result from the transformation effort are identified. According to standard practices for program management, an organization should develop a program schedule that establishes the timeline for program milestones and deliverables. Given FEMA’s previous efforts to create guidance for cadre managers, establishing time frames and milestones could help FEMA ensure accountability for completing and disseminating the cadre manager handbook and a revised FEMA Instruction 8600.1. FEMA’s decentralized structure allows for flexibility in responding to disasters; however, FEMA does not monitor how the regions implement DAE policies and how DAEs implement disaster policies and procedures. Without such a mechanism, it will be difficult for FEMA to provide assurance that both its regions and DAEs implement DAE policies and disaster policies and procedures consistently.
For example, DAEs in focus groups we conducted and regional cadre managers we interviewed expressed concerns about inconsistency across regions in interpreting FEMA policy. Specifically, they raised concerns about inconsistencies across regions or cadres in how supervisors interpret DAE administrative policies and/or cadre-specific disaster policies. For example, one focus group participant said that although there are standard policies and procedures, each disaster is different, with different supervisors who interpret these policies differently. Another focus group participant reiterated this point, stating that the regions and cadres direct their DAEs on how to approach disaster tasks differently, which leads to inefficiencies in providing disaster assistance. Participants in the public assistance cadre, for example, raised concerns about the variability that exists in how supervisors and managers interpret public assistance policy on documenting damage assessments, leading to differences in how well the worksheets are prepared. One participant stated that there are inconsistencies across regions when preparing the project worksheets used to document disaster damage and provide cost estimates and plans for repair. Specifically, this focus group participant stated that in certain regions DAEs are instructed to focus on the number of worksheets passed through the system. Although it is not referred to as a quota, the participant stated that if a DAE does not achieve this number, he or she will be sent home before completing his or her deployment. Conversely, in other regions, supervisors and other managers do not apply a goal for the number of worksheets to be completed and are concerned with quality rather than quantity. These variations can lead to inconsistencies in how the worksheets are completed. Another focus group participant stated that the inefficiencies and inconsistencies that run across the board were problematic, adding that when determining eligibility for public assistance, sometimes things are made eligible in one state that are not eligible in another state. Moreover, a 2007 Booz Allen Hamilton preliminary report entitled Restructuring and Enhancement of the Intermittent Disaster Workforce System also identified inconsistencies in the application of policies and standard operating procedures across regions and cadres. In March 2012, during the course of our review, FEMA officials stated that the agency intends to establish a centralized management structure responsible for the development of FEMA disaster assistance policies and procedures. FEMA policy states that headquarters is responsible for developing the agency’s policies and procedures for disaster assistance and that the regional offices are responsible for implementing these policies and procedures. We recognize that FEMA’s decentralized structure allows for flexibility in handling disasters, as each region can encounter different types of disasters and the regional structure can facilitate disaster assistance.
In a February 2012 FEMA town hall meeting, FEMA’s Administrator acknowledged that there are inconsistencies across the FEMA regions and noted that, due to differences in how regions operate, it is problematic to deploy someone based in one region to another region during a disaster. One regional cadre manager we interviewed cited inconsistency in policy application, saying, “there is an ongoing problem of the right hand not knowing what the left hand is doing with respect to when policies are implemented or are in conflict with one another.” Without routinely monitoring how disaster policies and procedures are being implemented across regions by DAEs and how the regions implement DAE policies, FEMA lacks reasonable assurance that it is administering its disaster assistance consistently across regions in accordance with its mission. Standards for Internal Control in the Federal Government call for an organization’s controls to be designed to assure that ongoing monitoring occurs in the course of normal operations and that it includes regular management and supervisory activities, comparisons, and reconciliations. Moreover, according to FEMA’s Capstone Doctrine, which describes FEMA’s mission and purpose and defines the agency’s principles, FEMA advocates the practice of consistent decision making by those with authority to act. Routine monitoring of the regional implementation of DAE policies and procedures, as well as how DAEs implement disaster policies, could help provide FEMA with reasonable assurance that disaster assistance is being implemented by DAEs in accordance with policy and consistently across regions. FEMA does not have policies and procedures for how it will communicate cadre-specific information to DAEs when they are not deployed. Most DAEs do not have access to cadre-specific information when not deployed, although FEMA has recently taken steps to increase communication. The majority of the cadre-specific information for DAEs is housed on FEMA’s internal website and is not accessible by DAEs when they are not deployed. This is because when DAEs are not deployed, they do not have access to their FEMA-issued equipment, such as laptops, or to their FEMA e-mail accounts. As a result, DAEs are not able to access information directly, including changes in policies and procedures that may occur while they are not deployed, and may not immediately be prepared to provide assistance to survivors during a disaster. Once DAEs are deployed to a disaster, they are typically provided equipment, such as laptops, and FEMA e-mail addresses, which are used to receive policy and procedural updates. However, DAEs in the focus groups we conducted raised concerns about their inability to access this type of information prior to being deployed to a disaster. For example, one focus group participant told us that it is difficult to keep up with changes as they happen because DAEs receive very little information when they are not deployed. Another focus group participant told us that they cannot access policy changes because they do not have access to information behind FEMA’s firewall. We also heard from one DAE that, because she was not provided program information related to her job, such as materials related to applicant services, it was difficult for her to feel comfortable representing FEMA to disaster victims.
Thirteen of 16 regional cadre managers we interviewed said that they communicate policy and procedural updates to DAEs who are not deployed via personal e-mail accounts; however, not all cadre managers believe that it is their responsibility to convey policy updates to their DAEs when they are not deployed. For example, one cadre manager who is responsible for 180 DAEs told us she believes that policy changes should come from FEMA headquarters and should be posted on FEMA.gov. Consequently, this cadre manager does not forward policy changes to personal e-mail accounts. According to another cadre manager, communicating policies and procedures to DAEs when they are not deployed is difficult because DAEs are completely disconnected from the mechanisms typically used to share information with FEMA staff during non-deployment. Further, he added that this proved to be a problem during disasters in 2010, when DAEs who had not been deployed for a while were unfamiliar with FEMA’s recent policy updates. The manager said this situation was problematic because management did not always have the time to walk these DAEs through policy changes. Ultimately, this lack of access to information among DAEs had an impact on DAE readiness because it extended their learning curve and potentially created delays in providing service in some cases. According to an official from FEMA’s Office of the Chief Information Officer, cadre managers have developed their own strategies for communicating with DAEs when they are not deployed. In addition, officials from IWMO told us that cadre managers are best suited to determine the precise information and content that will meet the needs of their respective cadres. Therefore, cadre managers are encouraged by IWMO to develop informative resource pages for their DAEs. For example, we found that the Hazard Mitigation cadre has developed a platform to communicate and share a vast array of resources with DAEs who do not have access to FEMA’s internal website. Specifically, Hazard Mitigation’s disaster workforce resources are available both on FEMA’s internal website and on the Hazard Mitigation Disaster Workforce portal on the Homeland Security Information Network (HSIN), which can be accessed via any Internet connection with a login and password. The portal provides resources for each of the different functional areas of Hazard Mitigation, including web links, contact information, task books, job aids, policies, publications, and training materials for the Hazard Mitigation workforce. However, as of March 2012, these tools were limited to the Hazard Mitigation cadre. According to IWMO, other cadres, including Alternate Dispute Resolution, Community Relations, Individual Assistance, and Environmental & Historic Preservation, have internal websites that contain programmatic policies and procedures. However, these sites are not readily available to DAEs who do not have access to FEMA’s internal website. In a budget-constrained environment, leveraging existing mechanisms can help agencies achieve efficiencies. While our prior work has identified issues with the HSIN platform, it could be used to provide DAEs greater access to FEMA resources. According to FEMA’s Office of the Chief Information Officer, HSIN would be an appropriate tool for DAEs to use to stay connected to FEMA because it would allow them access to pertinent information from anywhere and would not represent an additional cost to FEMA.
Inconsistent access to information among non-deployed DAEs, combined with inconsistent communication strategies among regional cadre managers, may hinder FEMA’s mission of providing assistance to disaster survivors by extending the amount of time it takes DAEs to familiarize themselves with the most current cadre-specific policies and procedures when they are deployed. However, FEMA has taken some steps to improve DAEs’ access to information when they are not deployed. For example, part of FEMA’s Disaster Workforce Transformation includes plans intended to increase communication to DAEs. FEMA stated that it plans to have consistent, two-way communication with DAEs even when they are not deployed. According to FEMA, this communication will include sending weekly e-mails about agency activities to each of the personal e-mail addresses it has on file for its entire workforce and developing a dedicated employee-focused website accessible to all of its employees. However, the employee-focused website contains a minimal amount of cadre-specific information. Since FEMA will rely on its cadres to provide their own content, it is not clear to what extent FEMA’s new centralized employee-focused website will include the cadre-specific policies and procedures that DAEs need to perform their duties while deployed, such as those provided by the Hazard Mitigation cadre via its HSIN portal. For example, as of March 2012, FEMA’s publicly available website for its employees included an Employee Information and Resource Center that houses general information such as travel policies, newsletters, and information related to FEMA’s Disaster Workforce Transformation and FQS. Unlike the Hazard Mitigation portal on HSIN, FEMA’s employee website did not include cadre-specific information, such as Concept of Operations documents that describe how specific cadre efforts are conducted in the pre- and post-disaster environment, or field office guides and Go Kits that contain cadre-specific guidance; these are resources that DAEs can access to better prepare themselves for future disasters while they are not deployed. As part of its Disaster Workforce Transformation efforts, FEMA developed an employee-focused website; however, according to FEMA, the new employee website was not intended to be a long-term solution, nor was it intended to replace FEMA’s internal website used to communicate with its workforce. FEMA has not developed a plan with milestones for how it will communicate not only general information but also cadre-specific information to DAEs when they are not deployed. According to FEMA, the agency is examining other solutions that would allow the agency’s entire workforce to have access to all information, but a specific time frame has not been determined. According to standard practices for program management, an organization should develop a program schedule that establishes the timeline for program milestones and deliverables. As FEMA implements its Disaster Workforce Transformation, developing a plan with time frames and milestones for how it will better communicate cadre-specific policies, procedures, and other information to DAEs when they are not deployed would provide FEMA with a roadmap to help ensure that it is providing DAEs the tools they need to be prepared for disaster deployments.
FEMA has not established standardized hiring or salary criteria to help ensure that prospective DAEs meet basic qualifications and that regional managers consistently determine initial DAE salaries and award promotions. Moreover, FEMA’s performance appraisal system for DAEs does not adhere to internal control standards; adhering to these standards would help ensure that managers have the information needed to better inform performance management decisions. FEMA has not established standardized hiring criteria for prospective DAEs, and FEMA headquarters provides limited guidance to regions on which to base DAE salary determinations. According to FEMA Instruction 8600.1 of 1991, the primary document outlining DAE program policies, regional cadre managers are responsible for the recruitment, selection, use, and management of their respective DAE cadres. Our review of policies and interviews with regional cadre managers as well as officials in FEMA headquarters indicate that DAEs are hired by regional cadre managers without being assessed against established criteria to determine their qualifications for the position. Regional cadre managers make the initial hiring selection and then send a hiring package with the individual’s qualifications and a proposed salary to the Office of the Chief Component Human Capital Officer (OCCHCO). OCCHCO officials stated that they then review the individual’s package to verify that the selected individual is qualified for the position and that the proposed salary is appropriate based on the experience and skills described in the individual’s resume. However, the criteria used by OCCHCO officials in assessing the qualifications and pay of a DAE applicant are not documented; rather, OCCHCO officials stated that these decisions are based on general knowledge. An OCCHCO official who reviews the hiring package containing the applicant’s paperwork said that she believes regional cadre managers do not always use the same criteria for evaluating qualifications and selecting a DAE candidate as the OCCHCO official uses in approving the proposal. In addition, a regional cadre manager from Hazard Mitigation said that the national cadre manager at headquarters provides guidance for hiring. In contrast, another regional manager said that there is no written guidance available and that regional cadre managers are on their own in making hiring decisions. According to OCCHCO, the agency’s hiring criteria for DAEs are contained in FEMA Instruction 8600.1. This policy states that “consideration should be given to the specific job functions, the qualifications required to perform those jobs, and Equal Employment Opportunity requirements.” However, the policy does not provide explicit information on the qualifications for different cadres or positions, such as the relevant experience, education, or skills. For example, there is no FEMA-wide guidance on the preferred skills and experience of prospective DAEs for a given position in the IA cadre or the PA cadre, which focus on different aspects of assistance and thus require different expertise. OCCHCO officials agreed that it would be useful to have a bulleted list of the skills and qualifications desired by each cadre for making hiring decisions. An OCCHCO official who reviews hiring and salary recommendations said that there is no specific guidance provided to regions related to hiring criteria, other than FEMA Instruction 8600.1, because the office believes that the regions have competent people hiring DAEs.
The 2007 preliminary report by Booz Allen Hamilton on FEMA’s disaster workforce stated that the lack of standardization in recruitment standards, interviewing processes, and hiring practices led to a wide disparity in the qualifications of DAEs across the regions, which the report noted may impair FEMA’s ability to effectively respond to a disaster. Moreover, one regional cadre manager said that morale is lowered when unqualified DAEs are hired, and another said that many DAEs complain that there is significant variation across regions in terms of the skills required for different positions. In addition, a DAE who participated in our focus group stated that if FEMA had asked the right questions, he would not have been hired, since he did not have the necessary technological skills to use the laptop, GPS, and digital camera that FEMA provided to him. Standards for Internal Control in the Federal Government call for agencies to identify the appropriate knowledge and skills needed for various jobs. According to FEMA, when FQS is implemented in 2012, position-specific training and position task requirements will be defined for each of the 322 positions to provide more specificity; however, FEMA could not provide details about how or if this will translate into better hiring criteria for prospective DAEs, or how it will identify the requisite training and skills needed by newly hired DAEs to become qualified under FQS. By standardizing hiring criteria, FEMA would be better positioned to hire people with the requisite skills and have reasonable assurance that hiring decisions are being made consistently across regions. In addition, FEMA headquarters provides limited guidance for regions to use to make DAE salary determinations. According to a FEMA official, in addition to FEMA Instruction 8600.1, the “Grant C. Peterson Memo” (Peterson Memo) of 1992 put forth guidance for pay levels and promotions. This guidance outlines five pay grades (A through E), as well as three levels within each pay grade, and relates these pay grades to their approximate GS or GM federal grade level. The Peterson Memo states that all DAEs will be given a tentative pay grade at the time of the initial appointment and that within 90 days a decision will be made as to whether that tentative grade is appropriate or should be changed to a different grade. The Peterson Memo lists position titles that would be assigned to Grades A through E, but it does not clarify how a DAE is to be assigned to one of the three levels within each grade. In addition, the memo does not establish criteria on which to base initial salary decisions or reconsiderations within the 90-day window. An OCCHCO official stated that it is possible that cadre managers have developed their own criteria for placing DAE hires in certain pay categories. For example, the OCCHCO official noted that some cadre managers bring everyone in at the C-1 level (approximately $21/hour) until they are able to “learn about the organization,” a process which is not quantified or measured. We noted variation among regional cadre managers with respect to pay determinations, with some managers proposing pay according to the candidate’s experience and others basing pay determinations solely on the job title.
For example, six regional cadre managers said that pay determinations depend on the candidate’s experience, education, and background, and one added that individuals with the same job title could be paid differently depending on their experience. In contrast, three other regional cadre managers said that pay is based on the job title or position the DAE is hired to fill; for example, one said that a data entry DAE would start in the A or B pay grade, while construction managers would be assigned to the C pay grade. According to two regional cadre managers, variation in salaries across regions and cadres can lower morale among DAEs who are deployed in multiple regions and notice DAEs who are paid more despite having less responsibility. A senior FEMA official in a recent “town hall meeting” acknowledged that there have been issues with the pay and promotion system for DAEs for many years and that leadership will be looking at the issue. In addition, the Assistant Administrator for Response said that there is currently no consistency with pay determinations or raises and that changes to the pay system will be a part of FEMA’s Disaster Workforce Transformation. Additionally, 8 of 16 regional cadre managers we interviewed stated that they do not receive guidance or would like to receive more guidance related to salary determinations, including the criteria used by headquarters. For example, FEMA headquarters could clarify what kind of professional experience gained prior to joining FEMA is considered relevant for different positions and cadres or to what extent disaster-specific responsibilities may factor into salary determinations. Ten of 16 regional cadre managers said that headquarters has previously denied pay determinations proposed by the region, and two of these regional cadre managers responded by asking the applicant to revise his or her resume and re-send it to headquarters. One of these regional cadre managers noted that he did not know what headquarters was looking for when making decisions regarding whether to place a candidate in pay Grade B or C. During recent town hall meetings between agency leadership and employees, a FEMA official acknowledged that pay grade distribution and pay raise inconsistencies are an issue in the DAE program. The Assistant Administrator for Response noted that more than 90 percent of DAEs are in the C category or above, and as a result there are DAEs in higher pay grades performing work that should be done by lower-paid DAEs. According to FEMA officials, they will be looking into these issues as part of FEMA’s Disaster Workforce Transformation. In addition, FEMA officials noted that FQS will institutionalize pay determinations for DAEs based on job title, but as of March 2012, they could not provide details regarding this effort. Standards for Internal Control in the Federal Government state that good human capital policies and practices should include establishing appropriate practices for compensating and promoting personnel. We have reported that agencies may abide by these standards by basing compensation on achievements and performance. By establishing standardized criteria for making DAE salary and promotion determinations, FEMA could increase transparency around salary determinations and reduce unnecessary variation across regions. FEMA’s performance appraisal system for DAEs is not consistent with internal control standards; adherence to these standards would help ensure that managers have the information needed to better inform performance management decisions.
Standards for Internal Control in the Federal Government state that agencies should establish appropriate practices for evaluating, counseling, and disciplining personnel. In addition, these standards state that effective management of an organization’s workforce includes identifying the appropriate knowledge and skills needed for various jobs and providing candid and constructive counseling and performance appraisals. We have previously reported that agencies could adhere to these internal control standards through a number of actions, such as ensuring that promotions and compensation of employees are based on periodic appraisals; that employees are provided with appropriate feedback and given suggestions for improvement; or that employment is terminated when performance is consistently below standards. Performance appraisal systems are intended to provide agencies with information related to the effectiveness of employees and to serve as a mechanism to identify and improve performance deficiencies. FEMA’s performance management system for DAEs is based on a performance appraisal form that is not consistent with internal control standards, which state that counseling should be candid and constructive. According to FEMA Instruction 8600.1, supervisors are required to complete a performance appraisal form for DAEs at the end of each DAE’s deployment. As shown in figure 2, all DAE reservists are rated on seven elements, and supervisors are rated on an additional seven elements. In addition, there is a narrative portion of the performance appraisal form where supervisors are required to include written comments. For each element, a DAE may be given an “S” for Satisfactory, a “U” for Unsatisfactory, or “N/A” if a supervisor had no opportunity to observe the DAE’s performance; these ratings are essentially a pass/fail system. However, it is unclear what constitutes successful completion of each element, and FEMA headquarters has not provided any written guidance to regions for assigning ratings. For example, FEMA lacks criteria that can be used to determine whether a DAE should receive an S or a U for a given element. FEMA Instruction 8600.1 addresses what cadre managers should do with the appraisal form, but not specifically how to assign a rating or what content managers should include in the narrative portion. Eleven of 16 regional cadre managers we interviewed stated that DAEs are not given honest appraisals. These regional cadre managers stated that ratings are not always an accurate reflection of performance because of a conflict of interest: supervisors (who are also DAEs) must evaluate subordinate DAEs who could be their supervisors in the next deployment. According to the Director of IWMO, when FQS is implemented in fiscal year 2012, DAEs will continue to supervise other DAEs in the field. This official said that the qualification requirements under FQS will help ensure that supervising DAEs have the professionalism to manage other DAEs. However, given that DAEs may continue to serve at levels below the one for which they are qualified, the conflict of interest could continue. While we recognize that ensuring supervisors provide candid ratings can be challenging for agencies, strengthening the controls in place for developing performance ratings could help FEMA provide both managers and DAEs more meaningful performance information.
In addition, the performance appraisal system could be more transparent by providing managers with additional information to use when making performance management decisions. For example, because the appraisal form usually provides little information to managers regarding a DAE’s performance during a disaster, one regional cadre manager noted that branch directors contact the regions and let them know of any problems with their cadre members. The manager added that, instead of or in addition to reviewing the performance appraisal forms, supervisors and managers must make phone calls and send e-mails to give a picture of a DAE’s performance and areas for improvement. In addition, it is not clear how performance appraisals are utilized in decisions related to reappointment, performance deficiencies, pay, and promotions for DAEs. FEMA headquarters has not provided guidance to regions to clarify these issues, according to 13 of 16 regional cadre managers and OCCHCO. According to an IWMO official, the office previously known as the Disaster Reserve Workforce Division had been actively involved in redesigning the performance appraisal process, including improving the appraisal form and the maintenance of performance records. However, he said that when the office was revamped and realigned into IWMO, the effort languished. IWMO and OCCHCO officials noted in March 2012 that performance management is a critical component of the supervision of DAEs and stated that it must be improved in fiscal year 2012 during FEMA’s Disaster Workforce Transformation effort. However, FEMA does not currently have specific plans to revamp the performance appraisal system. We have previously reported that one of the key practices for effective performance management is making meaningful distinctions in performance, including providing management with the objective and fact-based information it needs to recognize top performers and providing the necessary information and documentation to deal with poor performers. Similarly, we have previously reported that performance appraisals should provide meaningful distinctions in performance for staff, which is difficult to accomplish with a pass/fail system. We also reported that a limited number of performance categories may not provide managers with the information they need to reward top performers and address performance issues and may deprive staff of the feedback they need to improve. In addition, 13 of 16 regional cadre managers stated that the appraisal process could be improved in various ways, such as implementing a rating scale instead of a pass/fail rating. Specifically, using multiple rating levels provides a useful framework for making distinctions in performance by allowing an agency to differentiate, at a minimum, between poor, acceptable, and outstanding performance. We have reported that two-level rating systems by definition will generally not provide meaningful distinctions in performance ratings, with possible exceptions for employees in entry-level or developmental bands. Similarly, a 2007 preliminary report by Booz Allen Hamilton on the DAE program found that there was a lack of standardization and fairness in the performance review system; specifically, the system was not managed evenly and did not distinguish between levels of performance. The report noted that an inadequate performance review system affects the development and assignment of DAEs, as well as their contribution to FEMA’s overall response to disasters.
Taking steps to establish a more rigorous performance management system that addresses the weaknesses we identified could provide FEMA with more information regarding how effectively DAEs are performing and a mechanism to identify and improve any performance deficiencies. By providing clear criteria and guidance for assigning ratings, as well as for how the ratings are to be used, FEMA could help ensure that DAEs’ performance appraisals better reflect actual performance and provide managers with information to better inform performance management decisions. FEMA’s DAE training is not consistent with key attributes of effective training and development programs, which could help ensure that its training and development investments are targeted strategically. FEMA does not have a plan to ensure that all DAEs receive required training under FQS, which would help ensure accountability for qualifying DAEs. In addition, FEMA does not track how much it spends on DAE training, which hinders its ability to plan for future training. FEMA does not have a plan with time frames and milestones to ensure DAEs receive training, including the training required for its new credentialing program, FQS. FEMA provides the majority of its training to DAEs in the field during disasters. Under FQS, DAEs must complete required training and demonstrate successful performance in specific areas in order to be qualified in their job title. Therefore, DAEs’ career track will be aligned with their deployments and, consequently, tied to their opportunities to participate in field training. Regional cadre managers and DAEs we spoke with had concerns about the amount of training DAEs received during disasters, as well as FEMA’s reliance on on-the-job training for new DAEs due to limited training opportunities. Thirteen of 16 regional cadre managers said that they would like more opportunities for DAEs to receive training. For example, one Human Resource cadre manager said that required training courses were not available in the past year and that some courses, such as those developed for human resource managers, had not been offered for 3 or 4 years. In addition, one DAE said that the amount of training DAEs received was insufficient and added that this was a disservice to applicants for FEMA assistance because DAEs may not know how to properly assist the public. Another DAE, who also holds a management position, told us that half of the DAEs deployed in Community Relations in his current disaster did not have any training other than on-the-job training. Furthermore, IWMO officials said that some regions provide general pre-deployment orientation materials, such as instructions on completing certain administrative tasks; otherwise, it is up to the cadre manager to provide DAEs information pertinent to their assignment prior to their deployment. Therefore, the extent to which a DAE receives orientation depends on the cadre, the region, and the timing of deployments. Under FQS, DAEs will be assigned job titles, and each DAE will be designated as either a trainee or qualified for that job title. To become qualified, a DAE must complete required training and meet the minimum number of deployments and various deployment experiences. According to FEMA, approximately 20 percent of current DAEs (2,005 of 9,981) are considered trainees and will need training and future deployments to become qualified.
However, according to FEMA, as of March 2012, 136 courses were not available because they were being revised or had not yet been developed. In addition, FEMA stated that of the 136 courses, 83 are in various stages of pilot testing and that it has developed a schedule to revise or develop courses through the end of fiscal year 2012. Officials said that if a course will not be developed in the foreseeable future, exemptions can be made for a DAE to be fully qualified if he or she has completed the remaining requirements. According to key attributes for federal training programs, agencies should have planning documents, such as training plans and training and development design and evaluation documents, that focus on identifying targeted performance improvements and report on progress in achieving results. As previously mentioned, successful organizations should also establish timelines for program milestones and deliverables. According to FEMA officials, the agency has begun an initiative intended to identify the number of personnel, by position, needed to respond to and manage various incidents. The initiative is also intended to determine the number of training courses FEMA will need based on the number of open position task books. In fiscal year 2012, FEMA plans to implement this initiative as well as FQS in order to develop a plan to train DAEs, according to the agency. However, FEMA officials also said that qualifying all DAEs under FQS will depend on each DAE’s commitment to making themselves available for deployments and on the level of disaster activity. DAEs are required to update their availability for deployments at least every 30 days and must be available for deployments at least 60 days a year. FEMA does not have a plan or time frames in place to ensure that all DAEs are qualified under FQS and receive required training; instead, FEMA is depending on DAEs to commit to being deployed. A plan with time frames and milestones for how and when it will train all of its DAEs would provide FEMA with a roadmap and ensure accountability for qualifying DAEs under FQS. FEMA does not track how much of the Disaster Relief Fund is spent on training for DAEs while they are deployed to JFOs. As a result, FEMA does not have a comprehensive picture of the costs, expenses, and other financial information related to training and development activities. All expenses incurred at a JFO, including training costs, are funded by the Disaster Relief Fund. Comptrollers at the JFO are responsible for approving and monitoring all the funds used at a JFO; however, they are not required to track training costs. The Disaster Field Training Operations cadre is responsible for developing a training plan based on the training needs of the DAEs deployed to a particular JFO. The training plan must then be approved by the Federal Coordinating Officer. FEMA’s Deputy Director for Field Operations said the plan does not include the costs associated with the recommended courses unless the training is being provided by a contractor. Costs associated with training—such as travel expenses, per diem for the instructors, and copy materials—are all included in the administrative costs of the JFO. FEMA’s Deputy Director for Field Operations further stated that there is no accounting code specific to training costs; therefore, the agency does not currently have the information needed to identify the costs of completed courses.
The official added that FEMA maintains a few codes that have some relationship to training, such as a code for training-related office supplies and printing costs. The official noted that it may be possible to accumulate all of the training-related codes currently in existence and come up with an estimate of the total cost associated with training; however, this figure would not provide a complete picture of training costs. FEMA’s Disaster Readiness and Support account is part of the Disaster Relief Fund. It funds generalized, non-disaster-specific initiatives, such as training that provides disaster readiness and preparedness support across FEMA. In fiscal year 2011, the Disaster Readiness and Support account totaled $304.7 million, of which $9 million was dedicated to disaster-related training for all FEMA employees, including DAEs. Of the $9 million, $3 million is dedicated to paying the salaries and benefits of DAEs while they are deployed solely for training. According to FEMA, the amount of the Disaster Readiness and Support account is determined by working with FEMA offices annually to review their requirements. A spend plan is created and then reviewed and approved by FEMA’s Deputy Administrator, DHS, and the Office of Management and Budget before transmittal to Congress. Prior to fiscal year 2012, the Emergency Management Institute was responsible for managing the $9 million in disaster-specific training funds. This responsibility now rests with IWMO; however, according to IWMO officials, they are still coordinating their efforts with the Emergency Management Institute. According to the Emergency Management Institute, it cannot separate how much of the Disaster Readiness and Support account is spent on DAE training, except for the $3 million allocated for salaries and benefits. According to IWMO officials, in fiscal year 2013 they will begin funding the majority of training courses in JFOs using the Disaster Readiness and Support account rather than the more general Disaster Relief Fund. As of March 2012, the fiscal year 2012 spend plan and projected future costs had not been finalized. However, IWMO officials said that the proposed fiscal year 2012 budget for FQS is $7.8 million, which was based on prior years’ training budgets as well as future needs. According to key practices for training management, agencies should have accounting, financial, and performance reporting systems that produce credible, reliable, and consistent data on agency activities, including training and development programs. Since FEMA does not know how much money it has historically spent on training at the JFOs using the Disaster Relief Fund, it does not have a complete picture of the total cost to train DAEs both at the Emergency Management Institute and at the JFOs each year. Further, FEMA does not have reasonable assurance that the proposed fiscal year 2013 FQS budget is at an appropriate level to cover the total training costs. Without a systematic process to track training costs, FEMA does not have a complete picture of training, including its total costs. Developing a systematic process to track such training costs would provide FEMA with additional information to inform decisions about allocating future funding for training and assist it in doing so effectively. On April 17, 2012, FEMA announced plans to transform the DAE program. Among the changes, FEMA will change the program’s name to the FEMA Reservist Program.
According to FEMA, as of June 1, 2012, the agency will begin offering DAEs the opportunity to seek new appointments in the Reservist Program by applying for specific incident management positions within FQS. The Reservists selected at the end of the application process will be assigned to nationally managed cadres, which will replace all regionally based cadres by the end of 2012. FEMA announced that as of July 1, 2012, DAEs who transition to the Reservist Program before the end of 2012 will have their pay “grandfathered” into the new program and will therefore be exempt from the new rules regarding having pay determined based on their FQS position. In addition, FEMA stated that it will establish a goal and policy to deploy all Reservists at least once per year, with the length of the deployment depending on operational needs, which is intended to ensure that all Reservists have the current incident response experience and demonstrated performance required by FQS. Furthermore, FEMA stated that it will begin providing Reservists required FQS training by utilizing a portion of annual deployment days and allowing Reservists to complete some mandatory training from home. Moreover, FEMA announced that it would issue Reservists mobile communication and computing equipment upon their first deployment to ensure that they are mission ready immediately upon checking into a disaster and that they have continuous access to the FEMA network and FEMA e-mail, if they choose, regardless of deployment status. These efforts, if implemented effectively, should address a number of the challenges we identified with FEMA’s management of the DAE program. However, FEMA has not provided specifics for these broad plans that would allow us to evaluate the effectiveness of its planned actions. Therefore, it is too soon to determine whether the planned actions will be implemented as stated and whether they will fully address the problems we identified. FEMA relies heavily upon DAEs to respond to disasters. The agency has taken steps to improve the program, such as establishing a credentialing program, FQS, and planning a transformation of the DAE program; however, it is too soon to assess the extent to which these efforts will address the challenges we identified with FEMA’s management of the DAE program, the workforce, and training. For example, while FEMA intends to provide guidance to cadre managers, including a revised FEMA Instruction 8600.1, by the end of 2012, FEMA has experienced difficulty in the past in completing similar efforts, such as the 2008 cadre management handbook that was never finalized. Thus, establishing time frames for completing deliverables such as the revised FEMA Instruction 8600.1 and a cadre manager handbook for DAE management would help ensure accountability for completing initiatives. Furthermore, FEMA’s decentralized structure allows for flexibility; however, establishing a mechanism to ensure ongoing monitoring of regional implementation of DAE policies and procedures and DAEs’ implementation of FEMA’s disaster policies and procedures can assist management in ensuring that disaster assistance is conducted in accordance with policy and applied consistently across regions.
In addition, establishing policies and procedures for how FEMA will communicate with DAEs, and developing a plan with time frames and milestones for how it will better communicate policies, procedures, and cadre-specific information to DAEs when they are not deployed, would help ensure that it is providing DAEs with the tools they need to be prepared for disaster deployments. Further, FEMA’s human capital controls do not adhere to internal control standards for hiring, compensation, and performance appraisals. By standardizing criteria for hiring and salary determinations, FEMA would have greater assurance that DAEs have the necessary skills and qualifications, as well as ensure consistency across regions. In addition, taking steps to establish a more rigorous performance management system would provide FEMA with more information regarding how effectively DAEs are performing and provide a mechanism to identify and improve any performance deficiencies. Moreover, FEMA’s management of DAE training is not consistent with key training practices for planning and tracking training costs. Establishing a plan with milestones for training DAEs would provide FEMA with a roadmap to train its DAE workforce and ensure accountability for qualifying DAEs under FQS. Finally, developing a systematic process for capturing training costs would provide FEMA with additional information to inform its decisions about allocating future funding for training and assist it in doing so effectively. To help DHS improve the management of DAEs and build on some of the actions taken to date, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to take the following seven actions:
1. Establish timelines for the development and dissemination of DAE cadre management guidance and revisions to FEMA Instruction 8600.1;
2. Establish a mechanism to monitor both its regions’ implementation of DAE policies and procedures and DAEs’ implementation of FEMA’s disaster policies and procedures to ensure consistency;
3. Develop a plan with time frames and milestones for how it will better communicate policies and procedures and cadre-specific information to DAEs when they are not deployed;
4. Establish standardized criteria for hiring DAEs that include defined qualifications and skill sets to make hiring decisions and salary determinations;
5. Establish a more rigorous performance appraisal system that includes criteria and guidance to serve as a basis for performance ratings, as well as for how ratings could be used, and a process to address performance deficiencies;
6. Establish a plan with milestones to ensure all DAEs have opportunities to participate in training and are qualified; and
7. Develop a systematic process to track training costs.
We provided a draft of this report to DHS for comment. We received written comments from DHS on the draft report, which are summarized below and reproduced in full in appendix VIII. DHS concurred with the recommendations and indicated that FEMA has taken or is taking steps to address them. The actions DHS reported are important first steps; however, FEMA’s implementation plans do not fully address one of the seven recommendations, as discussed below. Moreover, insufficient detail is provided related to FEMA’s plans for three of the recommendations; thus, it is not clear to what extent these plans will fully address those three recommendations.
Regarding the first recommendation, that FEMA establish timelines for the development and dissemination of DAE cadre management guidance and revisions to FEMA Instruction 8600.1, DHS agreed and stated that FEMA Instruction 8600.1, which is now called the FEMA Reservist Program Directive, was revised and, as of May 11, 2012, is in FEMA’s Office of the Chief Counsel for final review. Furthermore, DHS stated that the estimated timeline for approval and publication of this instruction is June 1, 2012. In addition, DHS stated that the Cadre Manager’s Handbook, the FEMA Reservist Program Manual, the Reservist Pay Directive, and the Reservist Handbook are being developed, with an estimated timeline for development, approval, and dissemination of approximately 90 days after the signing of the FEMA Reservist Program Directive. It will be important that the FEMA Reservist Program Directive align with the planned Disaster Workforce Transformation. These actions, if implemented effectively, would address the intent of the recommendation. In reviewing the draft of the second recommendation, that FEMA establish a mechanism to monitor disaster policies and procedures to ensure consistency, FEMA officials requested clarification, stating that the recommendation was too broad because it focused on FEMA’s disaster policies rather than on DAEs. We agreed and modified the recommendation to more clearly state that FEMA should monitor how the regions implement DAE policies and procedures and how DAEs implement disaster policies and procedures. DHS agreed with our revised recommendation and discussed several actions it has taken or has underway to address it. Specifically, DHS stated that (1) in December 2011, the FEMA Administrator directed the agency to identify, review, and centrally post all agency doctrine, policies, and directives; (2) all documents were posted to their respective locations on April 13, 2012; and (3) the agency’s policies guiding DAEs are now available on the FEMA intranet. Moreover, FEMA stated that it has also established, and is working to improve, a number of mechanisms through which it validates compliance with agency policies and standards. FEMA also stated that communication with reservists on disaster policies and procedures will be initiated from FEMA headquarters to ensure consistency. FEMA has taken actions to make policies and procedures readily available to reservists; however, FEMA did not provide details about the mechanisms it has established for its regions to monitor DAE policies and procedures or DAEs’ implementation of FEMA’s disaster policies. Thus, it is not clear to what extent these actions will fully address the recommendation. Regarding the third recommendation, that FEMA develop a plan with time frames and milestones for how it will better communicate policies and procedures and cadre-specific information to DAEs when they are not deployed, DHS agreed and stated that the FEMA Reservist Program Directive requires Headquarters, Regional, and National Cadre Management leadership to provide consistent two-way messaging to all Reservists, deployed or not, through e-mail, websites, webinars, and other outreach, and estimated that these efforts will be completed by September 30, 2012. However, DHS did not provide details on the types of information that it will provide to DAEs. Thus, it is not clear to what extent FEMA’s planned actions will fully address the recommendation.
To fully meet the intent of the recommendation, FEMA needs to ensure that it is communicating both cadre-specific and administrative information to DAEs.

Regarding the fourth recommendation, that FEMA establish standardized criteria for hiring DAEs that include defined qualifications and skill sets to make hiring decisions and salary determinations, DHS agreed and stated that the FEMA Qualification System (FQS) Position Task Books define specific qualifications and skills for each required position and will be the basis for establishing standardized criteria for hiring Reservists, including pay. Currently, Position Task Books are used to document and record the tasks trainees perform in order to become qualified under FQS. It will be important for FEMA to define the skills and experience applicants must have prior to being hired for each position and how, if at all, prior experience will affect salary determinations. Without doing so, DHS will not fully address the intent of the recommendation.

Regarding the fifth recommendation, that FEMA establish a more rigorous performance appraisal system that includes criteria and guidance to serve as a basis for performance ratings, as well as how ratings could be used, and a process to address performance deficiencies, DHS agreed. DHS stated that upon implementation of the FEMA Reservist Program Directive and the publishing of various supporting directives and handbooks, FEMA's Incident Workforce Management Office will coordinate with FEMA's Office of the Chief Component Human Capital Officer to develop a more robust Reservist performance appraisal system that will, among other things, establish performance standards, identify successful task completion, and improve performance deficiencies. These actions, if implemented effectively, would address the intent of the recommendation.

Regarding the sixth recommendation, that FEMA establish a plan with milestones to ensure all DAEs have opportunities to participate in training and are qualified, DHS agreed and stated that as part of the changes in the DAE program through the Disaster Reservist Program, FEMA will ensure that all DAEs have opportunities to participate in training and are qualified to serve in a primary disaster-specific job title on the basis of FEMA's Force Structure requirements. Furthermore, FEMA plans to complete this by September 30, 2013. However, DHS did not provide details on how it plans to ensure that DAEs will become qualified by September 2013, including when it will complete the FEMA Force Structure, which had not been finalized as of April 2012. It will be important for FEMA to develop intermediate milestones to provide a roadmap for how it will qualify its workforce. Thus, it is not clear to what extent FEMA's plans will fully address the intent of the recommendation.

Regarding the seventh recommendation, that FEMA develop a systematic process to track training costs, DHS agreed and stated that FEMA has combined all funding for FQS supportive training into a single account to ensure a process for tracking training costs, course offerings, and force structure requirements. DHS also stated that it will include all of this information in the Incident Qualification Certification System—intended to be the primary FQS tracking system—to track all FQS-related training costs. In addition, DHS stated that this effort should be completed by October 1, 2012. These actions, if implemented effectively, would address the intent of the recommendation.
DHS also provided technical comments that we incorporated, where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and the Administrator of the Federal Emergency Management Agency. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX.

The objectives of this report were to determine (1) the extent to which the Federal Emergency Management Agency (FEMA) has policies and procedures in place to govern the Disaster Assistance Employee (DAE) program; (2) the extent to which FEMA's human capital controls over the DAE workforce are consistent with internal control standards; and (3) the extent to which FEMA's DAE training incorporates key attributes of effective training and development programs. In addition, we describe FEMA's initiative to transform the DAE program, announced in April 2012, as it relates to the three objectives above. We addressed each objective by reviewing relevant FEMA documents. To determine the extent to which FEMA has policies and procedures in place to govern its DAE program and the extent to which FEMA's human capital management controls over the DAE workforce are consistent with internal control standards, we analyzed relevant documents on FEMA's organizational structure as well as both program-specific and human capital-related guidance, policies, and procedures produced by FEMA headquarters and regional offices. We also compared FEMA's human capital controls with criteria in Standards for Internal Control in the Federal Government. The Fiscal Year 2010 Department of Homeland Security Appropriations Act required FEMA to submit a report of quarterly obligations of funds against the Disaster Readiness and Support (DRS) appropriation. We also reviewed documentation of training attended by DAEs in a JFO and compared FEMA's management of DAE training with key attributes of effective training and development programs to determine the extent to which they are aligned. To address all three objectives, we reviewed previous Department of Homeland Security Inspector General reports and a FEMA-sponsored study conducted by Booz Allen Hamilton on FEMA's disaster workforce. We found the conclusions and recommendations drawn in each report to be sufficient based on the methodologies used. In addition, we conducted interviews with FEMA officials in headquarters and in the regions. We interviewed officials in the following offices in FEMA headquarters: Office of Response and Recovery, Incident Workforce Management Office (IWMO), Office of the Chief Component Human Capital Officer (OCCHCO), Emergency Management Institute, Office of Policy, Planning, and Analysis (OPPA), Field Based Operations, Training Exercise and Doctrine (TED), Office of the Chief Information Officer (OCIO), and national cadre managers. In addition to interviews with officials in FEMA headquarters, we conducted site visits to four FEMA regions.
We selected regions that were geographically dispersed and had a Joint Field Office with Individual and Public Assistance programs operating as of September 2011. In each of the four selected regions, we interviewed the Regional Administrator and Regional Cadre Managers. In addition, we visited one JFO in each of the selected regions. In each selected JFO, we interviewed the Federal Coordinating Officer and Branch Chiefs from selected cadres. We focused our interviews on the following DAE cadres: (1) Individual Assistance (IA); (2) Public Assistance (PA); (3) Hazard Mitigation; (4) Disaster Field Training Operations; (5) Human Resources (HR); and (6) Community Relations (CR). We focused on IA, PA, Hazard Mitigation, and CR because these cadres are responsible for administering the disaster assistance program and interacting with the public. We chose HR and Disaster Field Training Operations because they are responsible for the management and training of DAEs. We also interviewed officials from the state emergency management agency for the state in which the JFO was located. Table 1 lists the FEMA regions, JFO locations, and State Emergency Management Agencies we visited. To obtain the views of DAEs on issues related to all three of our objectives, we conducted 16 focus group sessions with a total of 125 DAEs at the four selected JFOs. These sessions involved structured small-group discussions designed to gain more in-depth information about issues DAEs face. Discussions were guided by a moderator who used a list of discussion topics to encourage participants to share their thoughts and experiences as DAEs. Specifically, discussion topics included the hiring process, training, policies and procedures, FQS, and communication by regional managers; however, not all topics were discussed in each group. Each focus group involved 5 to 12 DAE participants. There were four types of focus groups based on job titles: IA and PA supervisors, IA and PA non-supervisors, and supervisors and non-supervisors from cadres other than IA and PA. We completed written summaries of each focus group and used content analysis software to categorize responses and identify common themes across the focus groups, using appropriate checks to ensure accuracy. The results of the focus groups are not generalizable. However, the views we obtained from them provided us with valuable examples of DAE experiences. In addition, to obtain further perspectives from regional management on hiring, training, deployments, policies and procedures, and FQS, we conducted follow-up interviews with 16 regional cadre managers we interviewed during our site visits. We also reviewed FEMA's April 2012 memorandum announcing the transformation of the DAE program, but did not assess its planned actions to transform the DAE program because the agency is in the early planning stages. We conducted this performance audit from April 2011 through May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Categories of Disaster Workforce Employees

Disaster Assistance Employees (DAE): Stafford Act federal employees who work on an on-call intermittent basis, "forming the major workforce for FEMA in times of emergency or disaster." They are also known as reservists. DAEs are temporary personnel appointed and compensated without regard to the provisions of Title 5, United States Code, governing appointments in competitive service. They are activated in direct response to a disaster declaration to support the work of FEMA at the disaster site. FEMA appoints DAEs in 2-year cycles, as intermittent employees who are deployed as needed for emergencies and/or disasters.

COREs: Federal employees hired under the authority of the Stafford Act on a temporary full-time basis for 2- and 4-year terms. These terms are renewable if there is ongoing disaster work and funding available. Similar to DAEs, COREs are temporary personnel appointed and compensated without regard to the provisions of Title 5, United States Code, governing appointments in competitive service.

Temporary federal employees: Covered by Title 5 provisions; they do not have specified appointment periods. Federal Coordinating Officers (FCO) are included in this employment group.

Permanent federal employees: Hired in accordance with Title 5, United States Code.

Local hires: Staff locally hired under the authority of the Stafford Act for an initial period of 120 days. This period of time is renewable. Local hires augment the reservist workforce. They are hired for positions that "do not require FEMA-specific expertise, or when limited advance training or minimal on-the-job orientation or training is sufficient." In certain instances local hires may convert to DAEs.

Sources: OIG, FEMA's Management of Disaster Assistance Employee Deployment and Payroll Processes, Appendix C; Director's Policy 1-99, March 1999.

There are 23 functional disaster cadres excluding the Disaster Generalist Group, which was created to augment the External Affairs, Individual Assistance, and Public Assistance cadres and provide surge staff when required.

In addition to the contact named above, Leyla Kazaz, Assistant Director, managed this assignment. Martene Bryan, Landis Lindsey, Lauren Membreno, Aku Pappoe, and Michelle Su made significant contributions to the work. Cynthia Saunders assisted with design and methodology. Tracey King provided legal support and analysis. Linda Miller and Debbie Sebastian provided assistance in report preparation. Robert Robinson developed the report graphics.

Since fiscal year 2007, FEMA has obligated $33 billion in disaster assistance payments. FEMA relies heavily upon its cadre of DAEs, a reserve workforce who interact with disaster survivors. GAO was asked to review the management and training of DAEs. Specifically, this report addresses the extent to which (1) FEMA has policies and procedures in place to govern the DAE program; (2) FEMA's human capital controls over the DAE workforce are consistent with internal control standards; and (3) FEMA's DAE training incorporates key attributes of effective training and development programs. In addition, GAO describes FEMA's initiative to transform the DAE program announced in April 2012. GAO reviewed management documents such as program-specific and human capital-related guidance, interviewed FEMA officials, and conducted 16 focus group sessions with DAEs in four selected joint field offices chosen to provide geographic dispersion, among other factors.
The results of the focus groups are not generalizable, but provide valuable insight into DAE experiences.

The Federal Emergency Management Agency (FEMA) has taken steps to enhance its management of the Disaster Assistance Employee (DAE) program, such as through the establishment of a credentialing program, the FEMA Qualification System (FQS); however, management controls and training could be strengthened. For example, FEMA does not monitor how the regions implement DAE policies and how DAEs implement disaster policies across regions to ensure consistency. FEMA's Administrator noted that due to differences in how regions operate, it is problematic to deploy someone based in one region to another during a disaster. Establishing a mechanism to monitor both the regional implementation of DAE policies and procedures and DAEs' implementation of disaster policies could help provide FEMA with reasonable assurance that disaster assistance is conducted in accordance with policy and implemented consistently.

FEMA's human capital controls could be strengthened. FEMA's regional DAE managers are responsible for hiring DAEs, but FEMA has not established hiring criteria and has limited salary criteria. By establishing standardized criteria for making hiring and salary decisions, FEMA would be better positioned to hire people with requisite skills and better ensure consistency across regions. Likewise, FEMA's performance appraisal system for DAEs is not consistent with internal control standards. FEMA does not have criteria for supervisors to assign DAEs satisfactory or unsatisfactory ratings. Thirteen of 16 regional DAE managers GAO interviewed stated that the appraisal process could be improved, such as by implementing a rating scale instead of a pass/fail rating. FEMA officials noted that performance management is a critical component in DAE supervision and must be improved in fiscal year 2012. Establishing a more rigorous performance management system that includes criteria for given performance elements as well as guidance could help FEMA ensure that DAEs' performance appraisals more accurately reflect performance and provide needed information to managers.

FEMA's DAE training is not consistent with key attributes of effective training and development programs, such as a plan for training staff. FEMA does not have a plan to ensure DAEs receive necessary training, such as FQS requirements. Further, 13 of 16 regional DAE managers GAO spoke to said that they would like more opportunities for DAEs to receive training. A plan to ensure that all DAEs have opportunities for training and completing FQS requirements, with related milestones, would provide FEMA with a roadmap and ensure accountability for qualifying DAEs under FQS. In addition, FEMA does not track how much of the Disaster Relief Fund is spent on training for DAEs. Developing a systematic process to track training costs could provide FEMA with information to help it determine whether it is allocating its resources effectively. In an April 2012 memo, FEMA announced plans to transform the DAE program beginning in June 2012; however, this effort is still in the early stages and, as a result, it is too soon to evaluate the effectiveness of FEMA's planned actions.

GAO recommends, among other things, that FEMA establish a mechanism to monitor both its regions' implementation of DAE policies and DAEs' implementation of disaster policies; criteria for hiring and compensating DAEs; and a plan to train DAEs within a set time frame. DHS concurred with the recommendations.
Burma, with a population of over 56 million people, is located in Southeast Asia between Bangladesh, India, China, Laos, and Thailand, and borders the Andaman Sea and the Bay of Bengal (see fig. 1). The country consists of seven divisions, seven states, and one union territory. Burma is an ethnically diverse country with 135 officially recognized ethnic groups. From 1962 until 2011, Burma was under military rule, with leaders routinely restricting freedom of speech, religion, and movement and committing other serious human rights violations against the Burmese people, according to State documents. Further, the military government, at times, condoned the use of forced labor and took military action against ethnic minorities living within the country, according to State. Through legislation and executive orders, political and economic sanctions were imposed on Burma's military government in response to its violent suppression of the Burmese people. In May 1997, President Clinton declared a national emergency with respect to Burma. Beginning with this executive order, the United States prohibited new investment in the country; it later also imposed broad sanctions prohibiting the exportation of financial services, certain imports, and transactions with senior Burmese officials and others, and it provided only limited assistance to the country. The sanctions were developed through laws, such as the Burmese Freedom and Democracy Act of 2003 and the Tom Lantos Block Burmese JADE (Junta's Anti-Democratic Efforts) Act of 2008, as well as through presidential executive orders. In 2011, the Burmese government began a transformation to a more open and democratic society. In March 2011, the State Peace and Development Council (SPDC), which had been in power since 1988 and had restricted freedom of speech and committed human rights violations, formally dissolved itself and transferred power to a semicivilian government known as the Union Government, headed by President Thein Sein. President Thein Sein, with the support of Burma's Union Parliament, implemented a number of political and economic reforms. See figure 2 for a timeline of significant events since 2011. In response to the reforms made by the Burmese government starting in 2011, the U.S. government adopted a new policy of greater engagement while maintaining existing sanctions. On April 4, 2012, the United States announced the reestablishment of the USAID mission in Burma to support further political and economic reforms. According to U.S. officials, U.S. democracy assistance aims to deepen Burma's political and economic transition, strengthen human rights, ensure that reform benefits everyday people, and support the development of a stable society that reflects the diversity of the country. In November 2015, Burma held nationwide parliamentary elections, from which the National League for Democracy (NLD), Aung San Suu Kyi's political party, emerged with an absolute majority in both chambers of Burma's Union Parliament. Using its majority in both houses of parliament, the NLD elected Htin Kyaw, Aung San Suu Kyi's close advisor and long-time NLD supporter, as president, according to a Congressional Research Service report. Burma's first civilian government after more than 5 decades of military dictatorship was sworn into office in March 2016.
On October 7, 2016, President Obama issued an executive order that ended the national emergency with respect to Burma that had been in effect since 1997 and revoked five other executive orders that had imposed, enforced, or waived economic sanctions on Burma. In addition, the executive order waived the financial sanctions contained in the 2008 JADE Act, as allowed for in the act. Burma has made progress in its transition to a democratically elected civilian government, according to U.S. officials, but the new government still faces significant challenges, given the country's history of corruption, repression, human rights abuses, armed conflict, and isolation. There is broad agreement among the international community that Burma's opening constitutes the most significant opportunity to advance democracy and national reconciliation in the country in more than 60 years, according to USAID. However, the new government will need to address many issues to continue its democratic transition. A recent Congressional Research Service report identified several challenges facing Burma's new government, including managing relations with the military, ending the ongoing civil war, dealing with internally displaced minority ethnic groups, and releasing its political prisoners. U.S. officials in Burma also identified many of these same issues. Specifically, the main challenges identified were the following:

Managing relations with Burma's military, known as the Tatmadaw. Burma's new government will need to work with the military to get any reforms passed. Under the Burmese constitution, the military occupies 25 percent of the seats in parliament, giving it the ability to block constitutional amendments.

Ending the civil war. For Burma's new government to be successful, it must bring about peace. For nearly 70 years, the Burmese government and various ethnic armed organizations have engaged in periods of active fighting and times of relative peace under negotiated ceasefire agreements. The most recent ceasefire was signed in October 2015, but not all ethnic armed organizations were signatories.

Ending the persecution of minority groups. Burma's new government faces international pressure to end widespread persecution of minority groups. Burma has been plagued by the continued persecution of minority ethnic groups, especially the Rohingya, a Muslim group located in Rakhine State.

Resettling internally displaced people. Burma's new government also faces international pressure to develop a solution that allows for the safe resettlement of tens of thousands of internally displaced people. In addition to the estimated 100,000 Rohingya located in resettlement camps in Rakhine State, Burma has tens of thousands of other internally displaced persons, mostly in Kachin State and Shan State, the result of ongoing fighting between the Tatmadaw and several ethnic militias.

Releasing political prisoners. Lastly, Burma's new government needs to release all remaining political prisoners or risk facing increased international scrutiny and pressure. The Burmese Assistance Association for Political Prisoners asserts that as of July 31, 2016, at least 83 political prisoners remained in jail, along with 202 activists awaiting trial for political actions.

In addition, U.S. officials cited a low level of capacity as a major challenge for Burma's new government. USAID officials told us that many of the members of the new government have little to no experience governing.
As a result, there is a need for capacity building. U.S. democracy assistance in Burma is primarily provided by USAID's Mission in Burma Office of Democracy and Governance (USAID/DG), USAID's Bureau for Democracy, Conflict, and Humanitarian Assistance's Office of Transition Initiatives (USAID/OTI), and State's Bureau of Democracy, Human Rights, and Labor (State/DRL). USAID/DG and USAID/OTI maintain staff at the U.S. embassy in Burma, while State/DRL manages its projects in the country from its headquarters in Washington, D.C., and through the human rights officer at the embassy, consistent with State/DRL practice.

USAID/DG: Supports U.S. foreign policy in Burma by promoting democracy and respect for the rule of law and human rights, building transparent and accountable governance systems, supporting independent media, and fostering a vibrant, tolerant civil society.

USAID/OTI: Supports U.S. foreign policy objectives by promoting stability, peace, and democracy through fast, flexible, short-term assistance targeted at key political transition and stabilization needs. USAID/OTI works to enhance the ability of key stakeholders to engage in the peace process, support civil society to advance reforms, and reduce the influence of drivers of intercommunal conflict.

State/DRL: Supports U.S. foreign policy by promoting democracy, protecting human rights and international religious freedom, and advancing labor rights globally.

The Burma Democracy Strategy developed by USAID and State in 2015 includes five strategic goals:

1. Develop the capacity of influential entities to employ principles of a well-governed democratic state that is inclusive, accountable, and responsive to its people.

2. Support and strengthen civil society, and strengthen societal foundations and institutions at all levels to reflect the will, concerns, and participation of the Burmese people.

3. Encourage responsible investment and greater respect for human rights by the private sector.

4. Support Burma's peace process, while engaging the military on human rights issues.

5. Promote tolerance and support legitimate and sustainable processes, which enable domestic stakeholders to pursue national reconciliation and the establishment of a stable, inclusive democratic union.

USAID and State also rely on two other, broader U.S. strategies when developing their democracy projects for Burma, according to U.S. officials:

The Burma Integrated Country Strategy: An interagency, multiyear, overarching strategy that encapsulates U.S. policy priorities and objectives and the means by which foreign assistance, among other things, will achieve these priorities.

USAID's Strategy on Democracy, Human Rights, and Governance: A framework to support the establishment and consolidation of inclusive and accountable democracies.

See appendix III for more information on how these strategies align with the Burma Democracy Strategy. USAID and State have obligated over $113 million in funding for 34 democracy projects in Burma, according to agency officials, since 2012, when the USAID Mission in Burma reopened. Specifically, USAID/DG and USAID/OTI obligated about $104 million from fiscal years 2012 through 2016, while State/DRL obligated about $9 million over the same period. See table 1 for a breakout of USAID and State obligations for democracy projects in Burma. USAID/DG has initiated eight democracy projects in Burma since 2012, with obligations totaling more than $60 million.
Total obligations for each project have ranged from less than $1.2 million to $17.6 million, and the projects have had an average duration of 3-1/2 years. According to USAID officials, USAID/DG projects have focused on civil society participation, particularly on the elections held in November 2015, and strengthening democratic institutions. For example, one project focuses on strengthening core democratic institutions at different governmental levels to address capacity limitations. USAID/OTI has initiated two projects that included more than 400 democracy activities in Burma since 2012, with obligations totaling more than $43 million, according to USAID/OTI officials. USAID/OTI officials said that the activities generally have lasted for 3 to 6 months, with some lasting up to a year. The projects have primarily focused on finding opportunities to bring government and civil society together and supporting the ongoing peace process, according to USAID officials. For example, USAID/OTI provided assistance to an implementing partner for a human rights defenders’ skill-building forum and assisted another implementing partner with three workshops in three cities in the Mandalay region during the International Day of Peace 2016. State/DRL has initiated 24 democracy projects in Burma since 2012, according to State/DRL officials, with obligations totaling more than $9 million. Obligations for these projects have averaged approximately $500,000, and the projects typically have lasted 12 to 15 months, according to State/DRL officials. Current State/DRL priorities in Burma include addressing communal violence, inclusive economic growth, and corruption and public financial management, according to a State/DRL official. For example, one State/DRL project’s goal is to reduce ethnic conflict and build social cohesion by bringing together influential people of diverse backgrounds and training them in conflict resolution. As of September 30, 2016, USAID and State had 13 active democracy projects in Burma (6 USAID/DG projects, 1 USAID/OTI project, and 6 State/DRL projects). See table 2 for information on the projects, including the responsible office or bureau and the projects’ total estimated cost. Appendix IV provides additional information on all 13 currently active USAID and State projects. In reviewing the scopes of work of USAID/DG’s, USAID/OTI’s, and State/DRL’s 13 active democracy projects, we found that either the purpose or objectives of each project support the strategic goals of the Burma Democracy Strategy. The following are examples: USAID/DG’s “Accountable to All: Strengthening Civil Society and Media in Burma” project supports the strategy’s goal of strengthening civil society. USAID/OTI’s “Burma Transition Initiative-II” project supports the goal of developing the capacity of key individuals to employ good governance principles. State/DRL’s “Multi-Religious Networks Promoting Religious Diversity and Tolerance” project supports the goal of promoting tolerance and supporting national reconciliation. The U.S. embassy in Burma’s Assistance Working Group (AWG), the primary mechanism for coordinating agencies’ democracy projects in Burma, according to USAID officials, consists of representatives from each agency located at the embassy, including USAID, State, and the Department of Defense. Agencies submit all potential projects to the AWG, which is co-chaired by the embassy’s Deputy Chief of Mission and the USAID Mission Director. 
The AWG meets biweekly to review and approve assistance projects and coordinate assistance among U.S. agencies. The AWG ensures that all democracy projects align with relevant strategies and address the legal and policy restrictions on U.S. assistance, including projects that propose working with the government of Burma, according to embassy officials. According to embassy officials, the embassy does not directly include State/DRL in AWG proceedings because its policies allow only entities (including agencies, bureaus, or offices) with staff at the embassy to participate. Embassy officials told us that entities without embassy-based staff, including State/DRL, are not directly included in the AWG because of the high number of project proposals and the logistical difficulties of coordinating meeting times with entities in different locations around the world. Instead, embassy officials noted, the embassy assigns each entity without embassy-based staff, including State/DRL, an embassy representative who presents the entity’s project proposals at the AWG. Embassy officials also noted that the human rights officer at the embassy serves as the representative for State/DRL in the AWG and that State/DRL channels all proposal documents for embassy feedback to its representative through the Burma desk in Washington, D.C., and has contact with its representative through monthly phone calls and quarterly visits to Burma. However, while the documents that State/DRL submits for project proposals to the AWG contain all the information required by the AWG for review, there is no formal mechanism for State/DRL to present its analysis of the proposals as part of the AWG review, according to State/DRL officials. Further, because State/DRL channels all proposal documents through the Burma desk rather than directly to the State/DRL embassy representative, the representative has not always had all the necessary knowledge or information to fully represent State/DRL’s proposals to the AWG, according to a State/DRL official. State/DRL also lacks the opportunity to provide direct input on other agencies’ democracy projects at the AWG, according to State/DRL officials. Moreover, State/DRL does not always have the opportunity to provide input into projects led by the embassy that are developed and implemented quickly. State/DRL officials told us that the bureau has requested to participate directly in AWG meetings via teleconference but that the embassy has denied those requests. According to embassy officials, representatives from any entity without an embassy presence, including State/DRL, may attend the AWG if they are in Burma for official duty. In addition, according to the embassy, other agencies and offices providing democracy assistance at the embassy work to coordinate with State/DRL outside of the AWG process through one-on-one consultations with State/DRL officials and by allowing State/DRL to review technical proposals prior to AWG review. USAID officials told us that, while the AWG is the primary coordination mechanism for U.S. agencies’ democracy projects in Burma, the agencies conducting these projects use other methods to coordinate with State/DRL. For example, according to USAID officials, USAID participates in coordination meetings with State/DRL during the latter’s regular visits to Burma, joins monthly calls between the embassy and State/DRL, solicits and receives State/DRL input on democracy project designs, and conducts ad hoc meetings and calls with State/DRL. 
However, because State/DRL cannot regularly attend AWG meetings, officials noted that they have not always received feedback on AWG decisions on their projects. Embassy officials told us that for approved projects, they do not provide additional information. For projects that the AWG does not concur with, the embassy provides written feedback to the Burma desk officer to share with State/DRL. A State/DRL official told us that State/DRL had not received this feedback in the past. State officials told us that starting in fiscal year 2016, this feedback has been provided verbally to State/DRL as part of State/DRL’s project review process. In addition, in May 2017, officials from the embassy, the Burma desk, and State/DRL stated that they had recently initiated a process to identify more efficient and inclusive methods for coordinating with and obtaining State/DRL’s input on future democracy program decisions, partly as a result of our review. We assessed the embassy’s AWG in relation to key features of interagency collaboration that GAO previously identified and found that the AWG generally displayed six of the seven features. For example, we found that the AWG had clearly defined roles, responsibilities, and leadership, outlined in documents that were circulated to, and approved by, all participating bureaus and offices within the embassy. However, prior to the recent actions taken to improve coordination, the AWG had not adequately ensured that relevant participants were included in collaborative efforts. If State’s recent efforts to improve coordination with State/DRL are properly implemented, these efforts could potentially address this feature of effective collaboration for the AWG. Several annual appropriations acts have stated that assistance for Burma shall be made available for certain types of activities. For example, the Consolidated Appropriations Act, 2016, states that appropriated funds for assistance for Burma shall be made available to strengthen civil society organizations in Burma, including as core support for such organizations; for projects to promote ethnic and religious tolerance, including in Rakhine and Kachin states; and for the implementation of the Burma Democracy Strategy. Other annual appropriations acts from 2012-2015 contained similar specified purposes for Burma assistance funding, with some variations. USAID and State take several actions to ensure that their democracy projects support these purposes. USAID officials stated that as new project activities are designed, annual appropriations act language for Burma is taken into account as part of the project design process. A State/DRL official told us that State/DRL chooses the projects it pursues in Burma based on overall State/DRL policy objectives, which are in general alignment with the purposes found in annual appropriations acts for the types of projects that shall receive funding. In addition, in reviewing proposed assistance projects, the AWG works to ensure that USAID and State projects address these purposes. Through our analysis of project documents, we found several examples of USAID and State democracy projects where the stated objectives of the project generally addressed one of the specified purposes in the Consolidated Appropriations Act, 2016. See table 3 for examples. Further, the Consolidated Appropriations Act, 2014, required that the Burma Democracy Strategy include support for civil society, former prisoners, monks, students, and democratic parliamentarians. 
We reviewed the Burma Democracy Strategy and determined that it includes language regarding supporting all of the specified groups, as shown in table 4. Annual appropriations acts have included provisions that state U.S. democracy assistance may not be provided to certain categories of prohibited entities and individuals. We found that USAID/DG, USAID/OTI, and State/DRL inform their implementing partners about these restrictions and include due diligence requirements in their award agreements to address these restrictions. However, some partners indicated that they could benefit from additional guidance on how to conduct these activities, and USAID/DG and State/DRL do not review implementing partners' due diligence procedures. We found that some implementing partners do not conduct due diligence, while others use a range of approaches and expressed concerns about whether they are meeting their responsibilities. Standards for Internal Control in the Federal Government call for management to periodically review policies, procedures, and related control activities for continued relevance and effectiveness in addressing related risks. Providing only limited guidance on how to conduct due diligence to partners that are unclear about appropriate procedures, and not reviewing partners' due diligence processes, may limit USAID's and State's ability to avoid making U.S. democracy assistance available to prohibited entities and individuals.

Annual appropriations acts have included provisions that state that U.S. assistance cannot be provided to certain categories of entities and individuals. These restrictions have varied over the years in number, breadth, and applicability. Depending on each democracy project's purpose, funding year, and source of funds, different restrictions apply. The active democracy projects we reviewed were funded with appropriations from a variety of fiscal years and accounts to which different restrictions apply. However, as an illustrative example, the Consolidated Appropriations Act, 2016, states that assistance may not be made available for budget support for the Government of Burma; to any successor or affiliated organization of the SPDC controlled by former SPDC members that promotes the repressive policies of the SPDC; to any individual or organization credibly alleged to have committed gross violations of human rights, including against Rohingya and other minority groups; or to any organization or individual that the Secretary of State determines and reports to the appropriate congressional committees advocates violence against ethnic or religious groups and individuals in Burma, including such organizations as Ma Ba Tha.

USAID/DG and USAID/OTI have taken various steps to inform all implementing partners of the restrictions, including by providing in all of their award agreements a description of certain categories of entities and individuals that are prohibited from receiving assistance and by responding to questions from implementing partners about the restrictions, according to USAID officials. Specifically, USAID/DG and USAID/OTI have both included language regarding the restriction against providing assistance to any individual or organization credibly alleged to have committed gross violations of human rights in all of the project awards for the active projects in 2016.
In addition, the USAID Mission’s legal advisor held several training sessions on the restrictions, including on the need to conduct due diligence, during events such as post-award conferences and standalone training sessions. According to a State/DRL official, for programmatic reasons, State/DRL started including information on the restriction against providing assistance to any individual or organization credibly alleged to have committed gross violations of human rights in its project awards in 2016, in response to language in the Consolidated Appropriations Act, 2016. State did this even though the projects utilized funding for which this restriction was not applicable. Three of the five awards signed in 2016 included information specifically about the need for conducting due diligence to identify potential recipients alleged to have committed gross violations of human rights. In addition, one award signed prior to 2016 included an amendment in September 2016 that added similar information on the restriction and the need to conduct due diligence. This project utilized funding for which the restriction against providing assistance to any individual or organization credibly alleged to have committed gross violations of human rights was applicable. State/DRL officials told us that they are considering the bureau’s approach going forward on due diligence requirements for implementing partners. According to USAID and State, the restrictions in the annual appropriations acts can change from year to year, and their applicability can vary depending on, for example, the funding account or purpose of the program. USAID and State/DRL said that they review the legal restrictions each year and inform implementing partners of any changes in the restrictions. USAID officials also said that they update their training materials with current information as part of their annual review or in response to partner questions. We found that all 13 of USAID/DG’s, USAID/OTI’s, and State/DRL’s current implementing partners were generally aware of the current restrictions and the need for them to conduct due diligence on potential assistance recipients, where restrictions were applicable. USAID and State have provided some guidance to their implementing partners on how to conduct due diligence to address appropriations acts restrictions. However, USAID/DG and State/DRL do not review partners’ due diligence processes, according to agency officials. USAID/OTI communicates frequently with its partner about conducting due diligence. According to USAID/OTI officials, they are in daily contact and hold three weekly meetings with the partner about activity design, which involves partner selection for the activities and conducting due diligence. In addition, USAID/OTI officials stated that they conferred with the partner regarding due diligence procedures as appropriate during weekly “Senior Management Team” meetings. USAID/DG officials stated that they used trainings and individual meetings with partners to provide guidance and answer questions on conducting due diligence. In the USAID Mission’s March 2016 training session, the USAID legal advisor provided information on, for example, using U.S. government websites to check for prohibited entities and individuals. However, training on how to conduct due diligence to identify parties alleged to have committed gross violations of human rights was limited to instructing partners to “search public sources” and providing several search terms. 
Since the March 2016 training, USAID/DG has not provided additional written information to its implementing partners on how to conduct due diligence. Similarly, State/DRL has provided some guidance to only a few implementing partners on conducting due diligence. We found that three of the four State/DRL awards we identified earlier as including information on the restriction against providing assistance to any individual or organization credibly alleged to have committed gross violations of human rights stated that a review of open-source documents must be conducted, with nothing further on due diligence. The fourth award included more detailed language on the due diligence process, including requirements to check recipients against existing sanctions lists and conduct searches in both Burmese and English. State/DRL did not provide any other written guidance to its implementing partners on how to conduct due diligence, according to agency officials. Moreover, for those projects active in 2016, USAID/DG and State/DRL did not review their implementing partners’ due diligence procedures, leaving it to the partners to design and conduct their own procedures, according to USAID/DG and State/DRL officials. These officials told us that implementing partners report the results of due diligence only when they have found derogatory information about an individual or organization, as is required of the partners by the agreements for those projects that were active in 2016; the AWG then makes a final determination of that individual’s or organization’s eligibility for assistance. Standards for Internal Control in the Federal Government states that management should periodically review policies, procedures, and related control activities for continued relevance and effectiveness in addressing related risks. By providing only some guidance on how to conduct due diligence and by not reviewing partners’ procedures, USAID/DG and State/DRL miss opportunities to help partners strengthen their due diligence efforts. Further, the agencies limit their ability to address any risks that assistance may be provided to prohibited entities and individuals. Representatives of the USAID/DG, USAID/OTI, and State/DRL implementing partners active in 2016 told us they use a variety of procedures to conduct due diligence, and some expressed concerns about the due diligence process. These partners ranged from large, international nongovernmental organizations to smaller, more locally based groups. One USAID implementing partner stated that their organization did not conduct any due diligence, despite a requirement to do so. Representatives of the other partners reported using a variety of due diligence procedures. Examples included conducting open-source searches, using personal networks to check on individuals’ credentials, or using their organization’s established due diligence procedures that are used in all relevant countries. Implementing partner representatives also expressed concerns about the adequacy of their organizations’ due diligence procedures and about how to conduct due diligence in certain situations, such as the following: One representative told us they had requested, but had not received, additional information beyond the training session held by the USAID Mission’s legal advisor. The representative noted that the term “substantive checking” was not explained in USAID due diligence procedures and that USAID had not responded to a request for an explanation. 
Another representative said that they were unsure whether their organization's due diligence procedures were sufficient to satisfy their obligations under their award from USAID and expressed a desire for more guidance. Another representative said that their organization lacked the capacity to conduct Internet searches in Burmese because it does not employ local staff. Another representative observed that many people in Burma have the same name or have multiple names, which can complicate the due diligence process, and expressed uncertainty about the correct procedure in those situations. Several representatives told us about instances in which their organizations received the attendee list for a project activity, such as a training class, just prior to the event. The representatives said that their organizations lacked the ability to quickly conduct due diligence on these attendees and instead had to complete due diligence after the event, if at all. Similarly, another representative said that the need to assist a local project can arise quickly, allowing little time for the organization to conduct due diligence on potential recipients. In addition, USAID Mission officials stated that conducting due diligence to identify those credibly alleged to have committed gross violations of human rights is burdensome and creates a large compliance issue for implementing partners while siphoning off resources from the projects.

Since 2012, the U.S. government has committed increasing diplomatic and financial resources to helping Burma transition to a democratically elected civilian government, obligating over $113 million for 34 democracy projects. USAID and State have taken some actions to ensure that their democracy projects support the specified purposes of annual appropriations act provisions regarding assistance to Burma. However, some partner representatives are concerned about the sufficiency of guidance they have received from State and USAID regarding how they should conduct due diligence to ensure that U.S. assistance is not provided to prohibited entities and individuals. Further, the lack of USAID/DG and State/DRL review of partners' due diligence procedures limits the agencies' ability to help strengthen these efforts. Additional agency involvement in the due diligence process could better ensure that implementing partners do not provide U.S. democracy assistance to prohibited entities or individuals. The importance of this is highlighted by the history of human rights abuses in Burma and the reported clashes between Burma's military and various ethnic groups in the country.

To help ensure that implementing partners of U.S. democracy assistance in Burma undertake sufficient due diligence, where appropriate, so that assistance is not made available to prohibited entities or individuals, we are making the following two recommendations:

We recommend that the Administrator of USAID direct the Mission in Burma to review its procedures and practices regarding due diligence for democracy projects to determine whether additional guidance or reviews of implementing partners' due diligence procedures would be appropriate.

We recommend that the Secretary of State direct the Bureau of Democracy, Human Rights, and Labor to review its procedures and practices regarding due diligence for Burma democracy projects to determine whether additional guidance or reviews of implementing partners' due diligence procedures would be appropriate.
We provided a draft of this report to State and USAID for comment. In their written comments, reproduced in appendixes V and VI, both State and USAID concurred with our recommendations. USAID disagreed with our characterization of its due diligence guidance, stating that it has provided extensive training to implementing partners on legal restrictions and due diligence requirements. We found that all USAID implementing partners we spoke with were aware of the current restrictions and the need for them to conduct due diligence. However, we note that several USAID partners were uncertain as to whether their actions to conduct due diligence were adequate. Further, USAID noted that some of the awards included in our assessment do not require implementing partners to conduct due diligence. We reported that USAID and State officials told us that due diligence is not a legal obligation under the annual appropriations acts but rather is a tool that is used to further implementation of legal restrictions or for risk mitigation. Moreover, we found that all current USAID projects included in this review contained language in the award agreements establishing a restriction against providing assistance to any individual or organization credibly alleged to have committed gross violations of human rights. State and USAID also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and the Administrator of the U.S. Agency for International Development. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

The U.S. Agency for International Development (USAID) and the Department of State (State) use, among other things, written reports and site visits to monitor their democracy projects in Burma. We found that USAID's implementing partners met all monitoring report requirements specified in the agreements, while State's Bureau of Democracy, Human Rights, and Labor's (State/DRL) implementing partners generally provided the requested reporting elements. USAID and State require evaluative components in partners' final reports, which partners provided. In addition, USAID and State have conducted some external evaluations. USAID's Mission in Burma Office of Democracy and Governance (USAID/DG), USAID's Bureau for Democracy, Conflict, and Humanitarian Assistance's Office of Transition Initiatives (USAID/OTI), and State/DRL stated that they use written reports, site visits, and frequent informal communications (such as e-mail, phone calls, meetings, and, in some cases, weekly reports) as part of their monitoring efforts. USAID/DG, USAID/OTI, and State/DRL rely on quarterly reporting by their implementing partners, as stipulated in their award agreements, as a component of monitoring projects in Burma. We reviewed the award agreements for all 13 active USAID/DG, USAID/OTI, and State/DRL projects and found that they all called for quarterly reporting. The reporting elements varied, as follows:

USAID/DG: All six award agreements included reporting elements focused on progress or results and problems or challenges encountered in the implementing partners' quarterly reports. The other elements varied based on the individual award agreement.
USAID/OTI: The agreement calls for a summary of the country situation; political updates; program highlights, achievements, and major activities; budget information; a summary of grant implementation and appraisal; and problems encountered and proposed remedial actions.

State/DRL: Four of the six award agreements we reviewed requested that the same elements be included in the quarterly reporting. The other two agreements differed only slightly in what was to be included in the reports. All agreements called for a discussion of sustainability efforts.

Agency officials from all three entities told us that they work with implementing partners to develop monitoring and evaluation plans that include the use of indicators agreed upon by the respective entity and the implementing partner. In addition, USAID/DG officials in Burma stated that the Mission conducts biannual portfolio reviews, and USAID/OTI officials stated that they conduct internal assessment processes on average every 3 months. USAID implementing partners reported on all requested elements in their monitoring reports, and State/DRL partners provided most of the requested elements. We reviewed all relevant fiscal year 2016 monitoring reports for the six active USAID/DG projects and found that all implementing partners had submitted their required reports, including a narrative report as well as progress toward their agreed-upon indicators. We also found that all reporting met the requirements as set out in the agreements. In addition, we reviewed all relevant fiscal year 2016 monitoring reports for the six active State/DRL projects and found that all partners had submitted the required reports, including both the narrative report and documentation showing progress against the required indicators. We also found that, while implementing partners included most of the information requested, but not required, in the award agreements, they did not provide information on how the recipient was pursuing sustainability. However, a State/DRL official told us that sustainability efforts are discussed during site visits or in other channels of communication, and if the program officer is satisfied with the answer, he or she will not necessarily ask the implementing partner to go back and update its written reports to record that sustainability was in fact being considered. Further, State/DRL officials told us that sustainability is an important aspect of all State/DRL projects in Burma; not only does State/DRL request updates on sustainability efforts in the implementing partners' reporting, but sustainability is also one of seven criteria against which all DRL project proposals are reviewed. USAID/DG, USAID/OTI, and State/DRL officials stated that conducting numerous site visits and communicating frequently with implementing partners were other components of monitoring their projects in Burma.

USAID/DG: Officials reported conducting 28 site visits in 2016, with each project receiving at least one visit. A trip report from one of these site visits included information such as a discussion of observations from the USAID staff, as well as recommendations for project adaptations. Three of the six active projects received multiple site visits in 2016. The projects that received only one visit in 2016 were primarily located in difficult-to-reach or conflict-prone areas, such as Rakhine State.
In addition to site visits, all active implementing partners reported frequent informal communication with USAID staff, and two reported providing informal weekly reporting to USAID. USAID/OTI: Officials reported that USAID/OTI staff conducted seven official site visits in 2016. In addition, USAID/OTI officials reported that they hold three weekly in-person meetings with the implementing partner: an ideas meeting where new project ideas are raised and discussed, a management meeting where any administrative issues that need to be handled are discussed, and a portfolio review meeting where the ongoing activities for the project are discussed. In addition, USAID/OTI staff reported discussing activities with the implementing partner on a daily basis via email, telephone calls, and in-person meetings. State/DRL: Officials reported that they typically make three trips to Burma each year and attempt to visit each project at least once a year. In 2016, State/DRL made three site visits—one in February, one in July, and one in October. The trip reports from the October visit included high-level problems or concerns that needed to be brought to the attention of the front office, summaries and observations from program and site visits conducted during the trip, and any action items to follow up on after the trip. In addition to site visits, all active implementing partners reported frequent, informal communication with State/DRL staff. USAID/DG, USAID/OTI, and State/DRL award agreements require that the final reports submitted by the implementing partners include several evaluative elements. Our review of final reports submitted shows that partners had provided these elements in their reports. USAID/DG: The agreements we reviewed called for the final performance report to include, among other things, an overall description of the activities under the project and the significance of these activities; and results toward achieving the project objectives and the performance indicators, as well as an analysis of how the indicators illustrate the project’s impact. USAID/DG conducted a total of 8 democracy projects in Burma since 2012, of which 3 had been completed, as of December 2016. We reviewed the final reports for two of the completed projects. Both of the final reports met all requirements specified in the agreements regarding evaluative components. Tools used by the various projects to evaluate the assistance included analysis of progress using both quantitative and qualitative data, surveys of participants, interviews, and focus group discussions. USAID/OTI: The award agreement we reviewed required a final report that includes, among other things, program highlights, achievements, and major activities; a summary of grant implementation and an appraisal; and problems encountered and how they were solved. USAID/OTI’s initial project in Burma was completed in mid-2016. Upon completion, a final report was submitted. We found that the report met all the required elements as specified in the initial award agreement. Tools used to evaluate the assistance included analysis of progress using both quantitative and qualitative data. State/DRL: The agreements we reviewed called for the final performance report to include, among other things, an in-depth impact assessment and/or project evaluation. State/DRL conducted a total of 24 democracy projects in Burma since 2012, of which 7 had been completed and had submitted final reports, as of August 2016. 
In reviewing the seven final reports, we found that all seven met the requirements specified in the award agreements. Tools used by the various projects to evaluate the assistance included analysis of progress using indicators, formal surveys of participants, semistructured interviews, and focus group discussions. According to USAID policy, not all USAID/DG award agreements are required to have a final external evaluation. Plans for evaluations, if they are to be conducted, are developed in the project design phase as part of the monitoring and evaluation plan. We found that one final external evaluation had been completed as of December 2016. In addition, although not required in the agreements, USAID/DG completed a midpoint evaluation of one project in July 2015, and two USAID projects were undergoing midterm evaluations that were scheduled to be completed by April 2017. The USAID/OTI award agreement we reviewed stated that within 3 months prior to close-out, USAID/OTI may organize, and the implementing partner will cooperate with and contribute to, a final external evaluation of the program. This final evaluation will include an assessment of the impact or results of activities managed by the implementing partner. An external evaluation was completed at the end of the first USAID/OTI award agreement. According to a State/DRL official, external final evaluations were not required for any of the Burma projects because of the short-term nature and fairly small budgets of projects (for example, budgets less than $350,000). According to officials, under State/DRL guidance, an external evaluation is not a requirement for State/DRL projects, but proposals that include one tend to be rated more competitively by State/DRL. We found that two external evaluations had been completed as of December 2016. State/DRL does not typically conduct midterm evaluations, again because of the relatively short average length of the projects. Both USAID and State officials told us that the nature of democracy projects makes measuring effectiveness difficult. Issues cited by officials included the long-term nature of the programs where results are often seen years later, and the difficulty in measuring the impact of democracy projects, particularly relative to other influencing factors. We have previously reported on the difficulties in evaluating democracy assistance. Our objectives were to examine (1) U.S. Agency for International Development (USAID) and Department of State (State) democracy projects, including coordination of those projects; (2) steps USAID and State have taken to ensure that U.S. democracy projects and the U.S. Strategy for the Promotion of Democracy and Human Rights in Burma (Burma Democracy Strategy) address and support the specified purposes and groups, respectively, for Burma assistance funding; and (3) USAID and State efforts to ensure that U.S. democracy assistance is not provided to prohibited entities and individuals. We also provide information about USAID’s and State’s monitoring and evaluation of their democracy projects in appendix 1, and appendix IV includes details on USAID and State democracy projects, including implementation and results. We conducted fieldwork in Hpa’ An, Mandalay, Mawlymyaing, Naypyidaw, and Rangoon, Burma, in October 2016 and November 2016. 
To describe USAID and State democracy projects in Burma, we reviewed project award documents, including relevant modifications, for 13 projects funded by USAID and State's Bureau of Democracy, Human Rights, and Labor (State/DRL) that were active as of September 30, 2016. We also obtained agency funding data on obligations for all USAID and State democracy projects that were active between fiscal years 2012 and 2016. To assess the reliability of those data, we interviewed knowledgeable agency officials and sent them a set of questions that asked about data collection, validation, and related issues. We determined that these data are sufficiently reliable for the purpose of reporting USAID's and State's obligations to Burma between 2012 and 2016. In addition, we reviewed the award documents for the 13 projects that were active as of September 30, 2016, and created a table containing the total project values from them. These values were the initial award amounts and do not reflect any modifications that were made subsequently. We also interviewed officials from USAID and State in Washington, D.C., and Burma about their active projects. Additionally, we compared the purposes and objectives of USAID's and State's democracy projects with the goals of the Burma Democracy Strategy. To examine how the projects are coordinated, we reviewed agency documents and interviewed agency officials about coordination and, in particular, the embassy in Burma's Assistance Working Group (AWG). We reviewed the AWG as a coordination mechanism against GAO's Key Considerations for Implementing Interagency Collaborative Mechanisms. We analyzed information collected from our document review, including the terms of reference for the AWG, and from interviews with agency officials, and compared this information against the seven key considerations listed in a prior GAO report. To assess the extent of interagency coordination, we compared the evidence we collected against the key considerations, which include outcomes and accountability, bridging organizational cultures, leadership, clarity of roles and responsibilities, participants, resources, and written guidance and agreements. Two analysts independently completed this analysis, identifying characteristics, practices, or other evidence that generally aligned with the criteria for each key consideration. They then compared their responses and resolved any initial differences. They judged a consideration to be lacking only if neither analyst identified evidence generally aligning with that characteristic. To examine steps USAID and State have taken to help ensure that U.S. democracy projects and the Burma Democracy Strategy address and support the specified purposes and groups, respectively, for Burma assistance funding, we reviewed the specified purposes for Burma assistance funding in the Consolidated Appropriations Act, 2016, as well as the specified groups for the Burma Democracy Strategy in the Consolidated Appropriations Act, 2014. We also interviewed USAID and State officials in Washington, D.C., and at the embassy in Rangoon, Burma. We then analyzed the Burma Democracy Strategy to identify whether it supported each of the five specified groups: civil society, former prisoners, monks, students, and democratic parliamentarians. We also identified the purposes for which funds shall be made available in the Consolidated Appropriations Act, 2016.
We analyzed project award documents for projects active as of September 30, 2016, that received fiscal year 2016 funding to which the purposes applied, and compared the stated purpose and objectives of the projects to the purposes for which funding shall be made available in the act. From this process we found examples, which we included in our report, where the project’s purpose and objectives were broadly consistent with the purposes specified in the act. To examine USAID and State efforts to help ensure that U.S. democracy assistance is not provided to prohibited entities and individuals, we identified restrictions applicable to assistance funding for Burma in the Consolidated Appropriations Act, 2016, for illustrative purposes. We reviewed USAID documents, including presentations and emails, used to train implementing partners on the restrictions. We reviewed USAID’s and State’s active project awards for information related to the restrictions or the need to conduct due diligence to identify prohibited entities and individuals. We also interviewed officials from USAID and State to discuss, among other things, whether and how they provide information to implementing partners about the restrictions and the due diligence process. Further, we interviewed all 13 USAID and State implementing partners about their due diligence processes and their interactions with agency officials related to conducting due diligence for prohibited entities and individuals. To examine USAID’s and State’s monitoring and evaluation of their democracy projects in Burma, we reviewed documents and interviewed agency officials in Washington, D.C., and at the U.S. embassy in Burma. We also interviewed all current implementing partners in Burma. We reviewed award agreements for 13 ongoing democracy projects as of September 30, 2016—6 USAID/DG projects, 1 USAID/OTI project, and 6 State/DRL projects—to identify any monitoring and evaluation requirements contained in the agreements. We then reviewed all fiscal year 2016 monitoring reports for those projects and analyzed them against the monitoring requirements stipulated in the agreements, to assess compliance. We also reviewed final reports from all USAID and State democracy projects that had been completed since 2012. We analyzed these reports against the requirements that we identified from the individual award agreements for each project, to assess compliance. These requirements include elements such as an overall description, results summary, and description of problems encountered. We had two analysts independently assess whether these elements were present or absent in the final reports and then meet to reconcile any initial differences. We did not make any attempt to evaluate the quality of the information included in the monitoring or final reports. Additionally, we interviewed agency officials about their use of other tools to monitor and evaluate their democracy projects, and reviewed trip reports from site visits, where relevant. We also discussed agency monitoring and evaluation efforts with the implementing partners from the 13 ongoing projects in-country to verify that their experiences matched up with what agencies reported doing. To examine how the Burma Democracy Strategy aligns with other U.S. strategies, we analyzed and compared the goals of the Burma Democracy Strategy to the objectives of the Burma Integrated Country Strategy and the objectives of the USAID Strategy on Democracy, Human Rights, and Governance. 
We conducted this performance audit from May 2016 through July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Consolidated Appropriations Act, 2014, required the Secretary of State, in consultation with the Administrator of the U.S. Agency for International Development (USAID), to submit a comprehensive strategy for the promotion of democracy and human rights in Burma, which became the Strategy for the Promotion of Democracy and Human Rights in Burma (Burma Democracy Strategy). To examine how the Burma Democracy Strategy aligns with other relevant strategies, we reviewed its objectives and compared them to the objectives contained in the Burma Integrated Country Strategy and USAID's Democracy, Human Rights, and Governance Strategy. We also interviewed USAID and Department of State (State) officials in Washington, D.C., and at the U.S. embassy and USAID Mission in Rangoon, Burma. To continue moving Burma toward a well-governed democratic state that is inclusive, accountable, and responsive to its people, the Burma Democracy Strategy includes five strategic goals:
1. Develop the capacity of influential entities to employ principles of a well-governed democratic state that is inclusive, accountable, and responsive to its people.
2. Support and strengthen civil society, and strengthen societal foundations and institutions at all levels to reflect the will, concerns, and participation of the Burmese people.
3. Encourage responsible investment and greater respect for human rights by the private sector.
4. Support Burma's peace process, while engaging the military on human rights issues.
5. Promote tolerance and support legitimate and sustainable processes, which enable domestic stakeholders to pursue national reconciliation and the establishment of a stable, inclusive democratic union.
We found that the Burma Democracy Strategy aligns with broader U.S. strategies on Burma and democracy promotion. For example, the strategic goals of the Burma Democracy Strategy align with objectives included in the U.S. embassy's Burma Integrated Country Strategy. The Integrated Country Strategy is a multiyear, overarching strategy that encapsulates U.S. policy priorities and objectives and the means by which foreign assistance, among other things, will achieve these priorities in a country. U.S. embassy officials said that the requirements for the Burma Democracy Strategy in the Consolidated Appropriations Act, 2014, were already included in the Integrated Country Strategy. The current Burma Integrated Country Strategy runs through 2017, and embassy staff are revising the Integrated Country Strategy concurrent with the revision of the Burma Democracy Strategy. Our analysis of the Burma Democracy Strategy and the Integrated Country Strategy found that each goal of the Burma Democracy Strategy aligns with at least two objectives from the Integrated Country Strategy. For example, the first goal of the Burma Democracy Strategy aligns with the fourth and fifth objectives of the Integrated Country Strategy, as shown in table 5.
In addition, we found that the five goals of the Burma Democracy Strategy align with the development objectives in USAID’s Strategy on Democracy, Human Rights, and Governance (USAID’s DHRG Strategy). USAID’s DHRG Strategy provides a framework to support the establishment and consolidation of inclusive and accountable democracies and lays out USAID’s vision to support democracy, human rights, and governance as essential to achieving the agency’s broader social and economic development goals. USAID officials said that the Burma Democracy Strategy is derived from USAID’s broader, agency-wide DHRG Strategy. Our analysis found that each goal of the Burma Democracy Strategy aligns with at least two development objectives in USAID’s DHRG Strategy. For example, the second goal of the Burma Democracy Strategy aligns with three objectives from USAID’s DHRG Strategy, as shown in table 6 . The following is a summary of information on 13 democracy projects in Burma that were active as of July 1, 2016 — 6 U.S. Agency for International Development (USAID) Mission in Burma Office of Democracy and Governance (USAID/DG) projects; 1 USAID Office of Transition Initiatives project; and 6 Department of State (State) Bureau of Democracy, Human Rights, and Labor (State/DRL) projects. We reviewed the award agreements, modifications and obligation data as well as the fiscal year 2016 fourth quarter progress reports submitted by the implementing partners. We present an illustrative sample of activities reported in those progress reports for each project. This award agreement totals $20,000,000 and runs from September 25, 2014, through September 24, 2018 (see table 7). This award agreement totals $1,171,781 and ran from September 16, 2014, through February 16, 2017 (see table 8). This award agreement totals $1,999,999 and runs from September 30, 2013, through October 15, 2017 (see table 9). This award agreement totals $5,000,000 and runs from January 4, 2016, through July 3, 2018 (see table 10). This award agreement totals $15,956,101 and runs from October 1, 2013, through September 30, 2018 (see table 11). This award agreement totals $23,200,000 and runs from July 1, 2016, through June 30, 2021 (see table 12). This award agreement has a ceiling total of $31,204,695, of which $5,750,000 has been obligated, and runs from March 2016 through March 2018 (see table 13). This award agreement totals $396,039 and runs from August 11, 2015, through August 31, 2017 (see table 14). This project’s obligated funding totals $792,078 and runs from September 3, 2013, through December 31, 2017 (see table 15). This project’s obligated funding totals $817,500 and ran from September 19, 2013, through November 30, 2016 (see table 16). This award agreement totals $371,287 and runs from May 13, 2016, through November 30, 2017 (see table 17). This project’s obligated funding totals $469,874 and ran from August 14, 2015, through April 30, 2017 (see table 18). This award agreement totals $517,028 and runs from August 14, 2015, through August 31, 2018 (see table 19). In addition to the contact named above, Leslie Holen (Assistant Director), Michael Maslowski (Analyst-in-Charge), Christopher Hayes, Julio Jebo Grant, and Debbie Chung made key contributions to this report. In addition, Martin de Alteriis, Mark Dowling, Reid Lowe, Sarah Veale, and Alex Welsh provided technical assistance. | U.S. 
policy toward Burma has been to promote the establishment of a democratically elected civilian government that respects the human rights of the Burmese people, according to State. Since 2011, Burma has been in transition from military, authoritarian rule toward parliamentary democracy. Congress included a provision in statute for GAO to review U.S. democracy programs in Burma. This report examines (1) USAID and State democracy projects, including coordination of these projects; (2) steps USAID and State have taken to help ensure that U.S. democracy projects and the U.S. Strategy for the Promotion of Democracy and Human Rights in Burma (Burma Democracy Strategy) address and support the specified purposes and groups, respectively, for Burma assistance funding; and (3) USAID and State efforts to ensure that U.S. democracy assistance is not provided to prohibited entities and individuals. GAO reviewed relevant agency documents; conducted fieldwork in Burma; and interviewed officials in Washington, D.C., and Burma. The U.S. Agency for International Development (USAID) and the Department of State (State) have funded 34 democracy projects in Burma since 2012, including efforts to strengthen the country's civil society and democratic institutions. These projects are primarily coordinated by the interagency Assistance Working Group (AWG) at the U.S. embassy in Burma, which approves all U.S. agencies' activities in Burma. However, State's Bureau of Democracy, Human Rights, and Labor (State/DRL) is not directly included in AWG proceedings because it does not have an embassy presence, and embassy policy limits participation in the AWG to those located at the embassy. As a result, the AWG has made decisions about State/DRL's projects without direct input from State/DRL and without State/DRL always receiving feedback. State officials said that they had recently begun an effort to identify more inclusive methods for coordinating with State/DRL and obtaining its input, which, if implemented properly, could improve coordination. USAID and State take several steps to help ensure that their projects and the Burma Democracy Strategy address the specified purposes for Burma assistance funding. When designing projects, USAID and State both consider purposes for which assistance shall be made available. For example, GAO found that several current projects include objectives addressing purposes in the Consolidated Appropriations Act, 2016. Also, the Burma Democracy Strategy—an interagency strategy for promoting democracy in Burma—includes language supporting civil society, former prisoners, monks, students, and democratic parliamentarians, as required by the Consolidated Appropriations Act, 2014. USAID and State make some efforts to ensure that U.S. democracy assistance is not provided to prohibited entities and individuals specified in law. USAID and State/DRL provide information to implementing partners on prohibited entities and individuals and the need for partners to conduct due diligence. However, USAID's Mission in Burma Office of Democracy and Governance (USAID/DG) and State/DRL only provide some guidance to partners on how to conduct due diligence and do not review partners' procedures. Partners GAO interviewed either did not conduct due diligence or expressed concerns about their due diligence procedures. Standards for internal control in the federal government call for management to review procedures and controls for relevance in addressing risks.
Without providing more guidance and reviewing partner due diligence procedures, USAID/DG and State may miss opportunities to better ensure that U.S. assistance is not provided to prohibited entities and individuals. GAO recommends that USAID and State review their procedures and practices to determine whether additional guidance or reviews of implementing partners' due diligence procedures are needed. USAID and State both concurred with GAO's recommendations. |
VBA provides benefits to about 2.7 million veterans and about 579,000 surviving spouses, children, and parents. Some of these benefits and services include disability compensation and pension, education, loan guaranty, and insurance. VBA employs about 5,000 examiners, and they represent about 40 percent of the agency's entire workforce. Most examiners are located at 57 regional offices and are responsible for reviewing and processing veterans' disability claims. Typically, they begin service at GS-5 or GS-7, grades that have starting salaries for 2003 of about $23,400 to $29,000. Examiners can be promoted to GS-10. Between 1998 and 2001, VBA hired about 2,000 new examiners (see figure 1). According to VBA officials, this was the first time VBA had the authority to hire significant numbers of examiners. These examiners were hired in anticipation of a large number of future retirements. For example, in 2000, VBA was expecting the retirement of 1,100 experienced examiners in the next 5 years. In addition, the hiring of these new examiners coincided with a growth in the backlog of claims awaiting decisions. Between 1998 and 2001, the backlog increased by 74 percent from about 241,000 to about 420,000. VBA has since implemented an initiative to reduce this backlog. According to VBA, it takes 2 to 3 years for a newly hired examiner to become fully productive. After being hired, new examiners receive a combination of formal training in a central location and on-the-job training in one of VBA's regional offices. Once on the job, these workers perform a variety of critical tasks, including compiling medical evidence, assessing the extent of the disability, determining the level of benefit, handling payment, and considering appeals. Workforce planning is a key component of maintaining a workforce that can carry out the tasks critical to an agency's mission. Strategic workforce planning focuses on developing and implementing long-term strategies—clearly linked to an agency's mission and programmatic goals—for acquiring, developing, and retaining employees. Collecting data on attrition rates and the reasons for attrition is one part of conducting workforce planning. Other types of data that can be used in workforce planning include size and composition of the workforce, skills inventory, projected retirement rates and eligibility, and feedback from exit interviews. These data can be analyzed to identify gaps between an agency's current and future workforce needs, which can in turn become the basis for developing strategies to build a workforce that accommodates future needs. In fiscal year 2000, the attrition rate for new examiners at VBA was about 15 percent, more than twice the 6 percent attrition rate for VBA employees overall that year. About 15 percent of the new examiners hired in fiscal year 2000 left the agency within 1 year of being hired. VBA calculates attrition by counting employees who leave the agency and comparing that number to either total employees or a sub-group of total employees. The methods VBA uses to calculate attrition are consistent with those used by OPM and other federal agencies. Attrition rates for new VBA examiners were generally higher than those for all VBA examiners and other employees. As shown in table 1, in fiscal years 2000 and 2001, overall attrition rates for VBA examiners and other VBA employees ranged from about 4 percent to about 6 percent.
However, among all new examiners hired in fiscal year 2000, about 15 percent left the agency within 12 months, as shown in figure 2. These attrition rates reflect all types of attrition—including resignation, retirement, and termination. However, for new hires, attrition consists predominantly of resignations. According to human capital experts, in general, new employees tend to leave at higher rates than all other employees. This has been the experience for federal agencies historically and, according to our analysis of OPM's data, is generally the case governmentwide. Attrition rates for all federal employees, both new hires and senior staff, were about 7 percent in fiscal year 2000. However, for all new federal employees—those hired in fiscal years 1998, 1999, and 2000—as many as 17 percent left within 12 months of being hired. (These attrition rates represent employees at all federal agencies except VA.) VBA officials acknowledge that, in certain regional offices, attrition has been high for newly hired examiners. For example, VBA found attrition rates of 38 percent to 49 percent for new examiners hired over a 3-year period at four regional offices—Baltimore (38 percent), Chicago (39 percent), Newark (41 percent), and New York (49 percent). By contrast, some offices—such as Phoenix, Arizona; Louisville, Kentucky; Huntington, West Virginia; and Wichita, Kansas—experienced no attrition among new examiners hired during this period. The two basic methods VBA uses to calculate attrition are consistent with methods used by OPM and other federal agencies. Both methods, the "annual calculation" and the "cohort calculation," compare employees who leave the agency to either total employees or a sub-group of total employees. They provide different ways of looking at attrition trends. The annual calculation indicates broad attrition patterns from year to year. In contrast, the cohort calculation tracks attrition over a period of time for a specific group, and the timeframe and group can vary to suit the needs of the analysis. Using this method, VBA reported attrition rates similar to those found by GAO. The following are the two methods VBA uses: Annual calculation. This method calculates attrition by dividing all employees who left in a given year by the average number of employees working at the agency at the beginning and the end of the year. Cohort calculation. This method calculates attrition by tracking a specified group or "cohort" of employees. The cohort can be defined as all those hired (new hires only) during a specific timeframe. These new hires are tracked for selected intervals (3 months, 6 months, etc.). This method can be adapted by defining the cohort differently (for example, to track attrition among a subgroup of new hires) and by using different timeframes for the tracking (e.g., 12 months, 18 months, etc.). This calculation differs from the annual calculation in that it does not take an average of the total workforce. VBA used this method to determine the attrition rate of certain newly hired examiners for a presentation in 2001 and for additional, more comprehensive calculations in 2002. VBA plans to use this method to calculate the attrition rate for new examiners at least annually starting in 2003. According to OPM officials, the annual method is a generally accepted method used to calculate attrition by federal agencies.
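To make the arithmetic behind these two methods concrete, the following is a minimal sketch in Python; the function names, headcounts, and dates are illustrative assumptions, not actual VBA or OPM data or systems. The annual method divides a year's separations by the average of the beginning- and end-of-year headcounts, while the cohort method tracks the share of a specific group of new hires that left within a chosen interval.

from datetime import date
from typing import Optional, Sequence


def annual_attrition_rate(separations: int, headcount_start: int, headcount_end: int) -> float:
    """Annual method: separations during the year divided by the average of the
    beginning-of-year and end-of-year headcounts."""
    return separations / ((headcount_start + headcount_end) / 2)


def cohort_attrition_rate(hire_dates: Sequence[date],
                          separation_dates: Sequence[Optional[date]],
                          months: int) -> float:
    """Cohort method: share of a hire cohort that separated within a given number
    of months of being hired. A separation date of None means still employed.
    Elapsed time is counted in whole calendar months for simplicity."""
    left_within_window = 0
    for hired, separated in zip(hire_dates, separation_dates):
        if separated is None:
            continue
        elapsed = (separated.year - hired.year) * 12 + (separated.month - hired.month)
        if elapsed <= months:
            left_within_window += 1
    return left_within_window / len(hire_dates)


# Illustrative figures only, not actual VBA data.
print(f"{annual_attrition_rate(300, 5100, 4900):.1%}")  # annual method: 6.0%

hires = [date(1999, 10, 1)] * 20  # a notional cohort hired at the start of fiscal year 2000
separations = [date(2000, 4, 1), date(2000, 9, 1), date(2001, 6, 1)] + [None] * 17
print(f"{cohort_attrition_rate(hires, separations, 12):.1%}")  # 12-month cohort rate: 10.0%

Rerunning the cohort function with 3-, 6-, or 18-month windows mirrors the interval tracking described above.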
OPM officials also recognized the value of the cohort method for calculations that require specific time frames or groups of employees, and added that tracking the attrition of new employees is an important practice. OPM does not mandate the use of a particular method for the calculation of attrition, but officials stated that any method used should be clearly explained. While VBA has descriptive data on how employees separate from the agency (whether through resignation, termination, retirement, or transfer), it does not have adequate analytic data on the reasons why employees, particularly new employees, leave the agency. VBA collects some data on the reasons for attrition in exit interviews. However, these data are not systematically collected in a consistent manner and not compiled or analyzed. Furthermore, VBA has not performed the types of analysis on its data that would help the agency determine whether it can reduce its attrition rate. VBA is taking steps to ensure that attrition data will be available to guide its workforce planning. While VBA systematically collects descriptive data on how employees leave the agency, the data on the reasons employees leave is not systematically collected or analyzed. As at other federal agencies, when employees leave VBA, a standard federal “Form 52” is filled out. This form records whether the employee is leaving due to a resignation, termination, retirement, or transfer. Because this information appears on the form in discrete fields, VBA human resources staff can easily enter it into the agency’s computer system to aggregate information on the types of separations. The Form 52 also includes a blank space for narrative comments on the reasons for leaving. This space is primarily intended to be used in the case of resignation and its use is optional on the part of the employee. However, according to VBA officials, this area is frequently left blank. When this area is filled out, it is up to a human resources employee to decide how to label an employee’s reason for leaving in the computer system. Several “quit codes” exist to help in this labeling process. For example, reasons for leaving can be coded as relating to pay and benefits, supervisory relationship, opportunity for advancement, or personal reasons, including family responsibilities, illness, or household relocation. All forms are sent to one of four human resource centers to be entered into the agency’s computer system. Human resources employees in these centers are instructed to code the reasons for leaving to the best of their ability. However, these staff members cannot clarify reasons when the information is blank or ambiguous because they do not have access to either the separated employee or the regional human resources staff who actually processed the employee’s separation. Therefore, VBA officials do not consider the Form 52 to be a complete or reliable source of information on the reasons employees resign from VBA. While VBA conducts exit interviews to collect information on the reasons employees resign, it does not have a standard process for these interviews, nor are they conducted consistently for all separating employees, according to VBA officials. Exit interviews with separating employees are conducted at regional offices. However, no standard process exists for such interviews, according to the results of an internal VA assessment. 
VBA officials state that the downsizing of human resources staff in regional offices is at least partly responsible for the inconsistency with which exit interviews are conducted. In addition, the data from the interviews that are conducted are not forwarded to national headquarters to be aggregated and analyzed. Despite VBA’s inconsistent use of exit interviews, VA policy recognizes the importance of exit interviews for determining the reasons an employee leaves. Some offices and staff members within VBA have made special efforts to compile or collect information on the reasons examiners leave the agency by producing special studies or reports. These include the following: High-Performing Young Promotable Employees (HYPE). In September 2002, a group of employees, representing six regional offices, prepared a report based on 72 exit interviews conducted at seven regional offices. The exit interviews had been conducted over 3 fiscal years: 1999, 2000, and 2001. Loss of New Hires in Veterans Service Centers. At the request of the head of VBA, the newly organized Office of Performance Analysis and Integrity (OPAI) issued a report in September 2002 that examined new hire attrition rates for regional offices individually. The report also looked at reasons for leaving, based on interviews with the directors of two regional offices. Review of attrition data at certain regional offices. At least two regional offices have investigated the reasons for attrition on their own initiative. For example, in October 2002, senior management at the Newark regional office compiled information on the attrition of examiners over a 3-year period and the reasons given for why these examiners left. This study was prompted by concern about high attrition rates at the Newark office. Portland did a similar review in September 2001. These special efforts had several common findings. For example, three reported that inadequate opportunity for training was one of the reasons examiners left VBA. Two reported workload as a reason for leaving. Two also identified instances in which examiners resigned as a result of pending termination for poor performance or conduct. Reports associated with these efforts touched on other reasons for resignation, including inadequate opportunity for full utilization of skills, insufficient pay, and various personal reasons. The other source of information on reasons examiners left VBA was anecdotal information provided by regional and other senior human resources officials. For example, senior human resources officials stated that reasons for leaving included factors such as inadequate work space and computer equipment as well as insufficient pay. In addition, these officials reported that some newly hired examiners left when they discovered that the job tasks were not what they had expected. According to a VBA official, certain regional offices are aware of the types of employers with whom they are competing. For example, some regional offices report losing employees to a range of employers in both the public sector, including other federal agencies (such as SSA and DOL), and the private sector, including firms in the information technology sector. VBA has begun to address some of the findings from these special studies or reports. For example, the HYPE report included several recommendations. 
The report recommended that the agency develop a comprehensive strategic plan that addresses attrition and retention; the report also recommended that the agency improve and centralize its exit interview process. Both of these recommendations are in the process of being implemented at VBA. In addition, according to a VBA official, certain regional offices have taken steps to offer job candidates opportunities to observe the work place before being hired. This effort was undertaken partly in response to information about employees’ expectations of their duties and work environment. VBA has not performed the types of analysis on its data that would help the agency determine whether it could reduce attrition or identify the extent to which an attrition problem may exist. To better understand its own attrition, an agency can take advantage of a range of analyses. These include the following: Comparisons. To understand the degree to which its attrition is a problem, an agency can compare its own attrition to the attrition of other federal agencies, especially to the attrition of agencies with employees who do similar work. While one of VBA’s special reports did some broad comparisons of VBA’s attrition to the attrition at other federal agencies, VBA has not compared, as we have done, the attrition of newly hired examiners to the attrition of employees in other parts of the federal government with comparable job series. Attrition modeling. To understand the degree to which attrition is a problem, an agency can estimate the attrition rates it expects in the future, providing a baseline against which to measure the actual attrition it experiences. This allows officials to determine if attrition rates are higher or lower than expected. While VBA has projected retirement rates for planning purposes, according to VBA officials, there was no formal or informal process to estimate the expected attrition rates of the examiners who joined the agency since 1998. In 2002, VA projected future attrition trends for examiners in a restructuring plan submitted to the Office of Management and Budget, and officials expect to compare these projections to actual attrition rates for examiners in the future. Cost analysis. To understand the degree to which attrition is a problem, an agency can estimate the cost of recruiting and training new employees who leave and their replacements. While VBA’s human resources office conducted a partial estimate of attrition costs in 2001, this estimate did not include all associated costs (including one of the most important and potentially expensive, the investment lost when a trained employee leaves). Labor market analysis. To understand the degree to which its attrition is a problem, an agency can evaluate labor market conditions in locations where it operates. Such an evaluation can provide context for understanding if an attrition rate is higher than might be expected in those locations. Using general labor market data, VBA has identified several locations where it faces significant competition from other employers, both public and private. This information could be used to better understand its attrition rate in those locations in the future. However, this information is not based on the actual employment plans of separating employees, and VBA does not routinely collect or document this information. 
According to a VBA official, collecting data on where VBA’s separating employees find employment after VBA would be useful for developing a more accurate understanding of the employers with whom VBA is competing. VBA is taking steps to ensure that attrition data will be available to guide workforce planning. First, VBA intends to develop a workforce plan, following a workforce policy approved by VA in January 2003. In a related document, VA stated its expectation that, in the current economy, attrition among examiners may stabilize. Continued monitoring of attrition rates and improved data on reasons for attrition would allow VBA to test that assumption. Second, VBA has recently designated an official to head strategic planning efforts. While these efforts will include human capital issues, and according to VBA officials, will address attrition, VBA’s human resources office is expected to assume primary responsibility for human capital issues and to coordinate with the strategic planning office. Obtaining better attrition data and conducting adequate analysis of attrition and the reasons for attrition could help VBA target future recruitment efforts and minimize attrition. For example, VA’s new automated exit survey, which VA officials expect to be available in spring 2003, has the potential to aid VBA in its attrition data gathering and analysis. Separating employees will be able to answer a series of questions about the reasons they decided to leave the agency. The survey will provide confidentiality for the employee, potentially allowing for more accurate responses. It will also facilitate electronic analysis that could be broken down by type of job and region. VBA’s ability to effectively serve veterans hinges on maintaining a sufficient workforce through effective workforce planning. While attrition data are just one part of workforce planning, the data are important because they can be used to anticipate the number of employees and the types of skills that need to be replaced. The agency currently lacks useful information on the reasons new employees leave and adequate analysis of its staff attrition. In addition, some offices experience much higher or lower rates. Continuing monitoring of attrition data by region may point to regions that need special attention. Sustained attention to both the reasons for attrition and attrition rates, particularly for new employees, is needed so VBA can conduct effective workforce planning. Understanding the reasons for attrition could help the agency minimize the investment in training lost when a new employee leaves. Furthermore, the new workforce planning efforts under way at VBA offer an opportunity to improve data collection on the reasons for attrition and attrition rates. For future contacts regarding this statement, please call Cynthia A. Bascetta at (202) 512-7101. Others who made key contributions to this statement are Irene Chu, Ronald Ito, Grant Mallie, Christopher Morehouse, Corinna Nicolaou, and Gregory Wilmoth. | By the year 2006, the Veterans Benefits Administration (VBA) projects it will lose a significant portion of its mission-critical workforce to retirement. Since fiscal year 1998, VBA has hired over 2,000 new employees to begin to fill this expected gap. GAO was asked to review, with particular attention for new employees, (1) the attrition rate at VBA and the soundness of its methods for calculating attrition and (2) whether VBA has adequate data to effectively analyze the reasons for attrition. 
To answer these questions, we obtained and analyzed attrition data from VBA's Office of Human Resources, calculated attrition rates for VBA and other federal agencies using a government-wide database on federal employment, and interviewed VBA officials about their efforts to measure attrition and determine why new employees leave. About 15 percent of new examiners hired in fiscal year 2000 left VBA within 12 months of their hiring date, more than double the 6 percent rate of all VBA employees who left in fiscal year 2000. In general, new hire attrition tends to exceed the rate for all other employees, and VBA's 15 percent rate is similar to the attrition rate for all new federal employees hired between fiscal years 1998 and 2000, when as many as 17 percent left within 12 months of being hired. VBA does not have adequate data on the reasons why employees, particularly new employees, choose to leave the agency. VBA has descriptive data on how employees leave the agency (whether through resignation, retirement, or transfer), but VBA does not have comprehensive data on the reasons employees resign. While VBA collects some data on the reasons for attrition in exit interviews, these data are limited because exit interviews are not conducted consistently, and the data from these interviews are not compiled and analyzed. Without such data, VBA cannot determine ways to address the reasons employees are leaving. Furthermore, VBA has not performed analysis to determine whether it can reduce its staff attrition. Improved collection and analysis of attrition data, including data on the reasons for attrition, could help the agency minimize the lost investment in training, particularly when new employees resign. A forthcoming report will explore options for improving VBA's collection and analysis of attrition data. |
About a decade ago we implemented a campus recruitment program to increase GAO's visibility on campuses and help us attract highly qualified and diverse candidates. The key elements of this program are (1) ongoing relationships with many colleges and universities and (2) the use of senior executives and other staff to develop and maintain those relationships. We supplement this program through additional activities designed to help others learn about GAO. We have established ongoing relationships with many colleges and universities across the country. While we advertise all of our new positions publicly, currently we have relationships with about 70 colleges and universities, including private and public colleges and universities, Historically Black Colleges and Universities, Hispanic-serving institutions, and other minority-serving institutions. These targeted schools have academic programs that are relevant to our skill needs (e.g., public policy, accounting, business, or computer science) and that prepare students well for success at GAO. Our relationship-building over the years has been based primarily on visiting many of these schools to participate in on-campus events. We use senior executives and other staff from across the agency to develop and maintain these relationships. Serving as "recruiters," these executives and staff help faculty, career placement officials, and students at the colleges and universities we visit understand the work we do and the skills required for that work. Senior executives, who serve as Campus Executives, have specific schools for which they are responsible. Other staff—often alumni of those schools—support the executives by setting up and participating in campus events, such as information sessions, class presentations, or career fairs. Our staff often seek opportunities to communicate about our mission and their experiences at GAO to interested parties, as they view recruitment opportunities as part of their institutional stewardship responsibility. In addition to our targeted campus outreach, we conduct a variety of activities to help potential candidates and officials from colleges and universities learn about GAO or the type of work we do. For example, our analyst staff often works with students enrolled in master's programs in public policy or administration on projects in which GAO acts as a "client." Groups of students are assigned an issue or evaluation topic, then advised by our staff as they proceed with their research, which culminates in a report to us as the client. These projects provide students "real world" experience in conducting public policy analysis. We also address classes or groups of students and host visits from groups to hear about our work and GAO's impact. Since 2001, we have held a yearly Educators' Advisory Panel, which includes deans and professors from schools we visit as well as selected others. Through this panel, we have obtained advice and provided feedback about ways schools can refine and strengthen their curricula to make their graduates more successful. Finally, we conduct outreach to professional organizations and groups. We attend and/or make presentations at various conferences or invite representatives of these groups to address GAO staff.
The groups we have networked with in the past include those whose members have relevant backgrounds (e.g., the American Economic Association), as well as other groups with members that traditionally have been underrepresented in the federal workforce, including the American Association of Hispanic CPAs, the National Association of Black Accountants, and the Federal Asian Pacific American Council. Our approach has been extremely effective in developing strong partnerships with many colleges and universities and professional organizations. Our brand recognition has grown tremendously on campuses and in the public policy arena. This, among other reasons, has contributed to our receiving thousands of high-quality applicants each year for our advertised positions. As part of overall efforts to focus more attention on our strategic human capital management, we have taken proactive steps to improve our recruitment program. Specifically, we (1) established stronger linkages between our recruitment efforts and organizational workforce needs, (2) increased the diversity of, and enhanced support for, our staff serving as recruiters, and (3) instituted stronger program management and accountability processes. We have seen positive outcomes from these efforts. Consistent with our recommendations to other agencies, we have established stronger linkages between our recruitment efforts and our workforce needs identified through our annual workforce and strategic planning processes and in our annual Workforce Diversity Plans. To accomplish this, we adopted a recruiting framework that has allowed us to better address our skill gaps and enhance the diversity of our workforce, such as hiring more Hispanics, individuals with disabilities, and veterans. This framework was particularly critical this year, as we needed to identify how to address our needs despite significant fiscal constraints. Using this framework, we made decisions to discontinue certain efforts or initiate new ones to meet our needs and better allocate our resources. For example, we customized our interactions with campuses so that we could devote the appropriate level and type of resources needed to meet our needs. While we continue to believe that developing and maintaining strong relationships with college and university campuses is critical, on-site visits are less necessary given workforce and technological changes. As a result, based on an analysis of our workforce needs, school characteristics (e.g., student demographics, academic programs, and proximity of the campus to GAO offices), and our history with the campuses (e.g., number of applicants, applicant experience, and hires), our efforts now include a range of both on-site and virtual activities. The benefit of this approach is that we can adjust it at any time based on our needs. We also made critical decisions about how to best supplement our campus outreach efforts to support our workforce needs in the most cost-effective manner. We considered our costs to participate in various events, results from past participation, and the anticipated future benefits in order to set our future priorities about what organizations and events we would centrally support. For example, we supported participation in the Careers and the disABLED Expo and the Association of Latinos in Public Finance and Accounting Conference to help enhance the diversity of our workforce as well as to attract candidates with needed skills.
We also partnered with the Hispanic Association of Colleges and Universities, as well as the Public Policy and International Affairs Fellowship Program, to hire 10 qualified student interns. We also determined how we could cost-efficiently use other approaches to meet our needs. As a result, we have advertised in journals targeting individuals with disabilities, African-Americans, or critical skill areas (e.g., economists) to expand our outreach. We also utilized low-cost mechanisms such as electronically notifying hundreds of colleges and universities and relevant organizations about vacancies, revamping our external careers web site, and updating our recruitment materials to provide better information about GAO's worklife, programs, and values. These efforts are important ways to inform any interested candidate about GAO and available opportunities. Given the important role our recruiters serve in our campus recruitment program, we have taken steps to have a recruitment cadre that is diverse and well-trained. We solicited recruiters from throughout GAO and asked representatives from our numerous employee groups to serve as recruiters. Our recruitment cadre is diverse—representing staff from various GAO offices, teams, locations, job levels, and positions. We also required that each employee interested in becoming a recruiter obtain senior management approval and attend our training workshop to learn about GAO-wide workforce needs and improve his or her ability to provide accurate, consistent, and timely information about our operations, programs, worklife, and values. This training has helped to ensure that all recruiters understand their responsibilities. We also have developed additional support for our recruiters to ensure consistent and timely dissemination of information. This support has included a slide presentation that describes GAO's core values, business operations, and impact; a tip sheet that helps recruiters understand how to work with prospective applicants who may need to be accommodated; and a listing of specific types of activities recruiters can undertake at targeted campuses. While we have not identified a single "best practice," some of our efforts that have proven successful include: sending recruitment brochures/supplies to campus contacts; researching and contacting appropriate campus-based groups that have a diverse membership, as well as professional associations and relevant academic programs; establishing strong relationships with career placement staff; conducting information sessions with appropriate audiences; participating in career fairs, when appropriate; making class presentations that illustrate the nature of our work; getting our work incorporated into program curricula; and serving on advisory boards or as adjunct faculty with colleges and universities. To further assist our recruiters, we have provided real-time information and suggestions to help them better leverage their time. Specifically, we have kept recruiters apprised of the status of hiring announcements and shared data on the number of individuals hired for different positions—including the names of individuals hired from their specific schools—so recruiters could see the results of their efforts.
Given additional budget constraints, we have suggested ways for our recruiters to more cost-effectively maintain strong campus relationships, such as asking various academic programs within the same college or university to schedule joint presentations or visits by GAO; asking recent interns/hires to serve as informal ambassadors; and using local GAO staff to attend events at various campuses. In order to be able to better manage our campus recruitment program and assess program outcomes, we instituted a number of structural and administrative changes over the last several years. We placed overall program responsibility in our Human Capital Office and created three senior-level advisory boards to provide insight on our program operations and results. To obtain more robust information on recruiter activity, as well as create more program accountability, we have developed a standardized template to document recruiters’ strategies for working with individual schools and organizations. Through this template, recruiters must provide information on the members of the recruitment team, planned activities at the school or organization, and estimated costs. This strategy document must be completed by the recruiters, submitted to, and approved by Human Capital Office staff before funding is authorized and activities can begin. When an event is completed, recruiters report what occurred and assess the outcome. This information is summarized and subsequently shared with our recruiters in the form of best practices or lessons learned. The template also serves as the basis for data collection on agency-wide recruitment activity, including number of campuses and organizations visited, number and type of events conducted, costs associated with each event, and recruiters’ views on the effectiveness of various events. We also have instituted more rigorous data collection and analysis of applicant and hire information. For example, for fiscal year 2010 vacancies, we analyzed information on the background and diversity of our applicants and hires (e.g., degree level obtained, school attended, years of work experience, ethnicity, race, gender) and the information applicants provided on why they applied for the job. Through analysis of these data, we were able to gain insight on how our program activities related to our hiring outcomes. For example, we identified the percentage of applicants and hires that came from the colleges and universities we targeted, or that applied because of an interaction with GAO. While not perfect, this analysis has helped us to learn what is working, or what changes we need to make to enhance our recruitment approach. While we have made great strides in using data to inform and assess our campus recruitment program operations, we continue to explore how to judge the effectiveness of our recruitment efforts. For example, it is challenging to define a specific benchmark when assessing whether the number of applicants and hires from the schools or organizations we target is sufficient given our expenditures. Additionally, it is difficult to specifically identify those factors beyond our outreach—such as our mission, recognition as one of the best places to work, or informal communications—that affect an individual’s decision to apply to GAO. To that end, we plan to gather more detailed information from our applicants about the role of factors beyond our outreach efforts that have influenced their decisions to apply to GAO. This information can inform our future recruitment efforts. 
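The kind of applicant and hire analysis described above can be illustrated with a short script. This is a hypothetical sketch only: the file name, column names, and targeted-school list are illustrative assumptions rather than GAO's actual data structures, and the real analysis covered additional characteristics such as degree level, work experience, and demographic data.

```python
# Hypothetical sketch of the applicant/hire analysis described above.
# The file name, column names, and targeted-school list are illustrative
# assumptions, not GAO's actual data structures.
import pandas as pd

applicants = pd.read_csv("fy2010_applicants.csv")  # one row per applicant (hypothetical extract)

targeted_schools = {"School A", "School B"}  # placeholder for the roughly 70 targeted campuses

applicants["from_targeted_school"] = applicants["school"].isin(targeted_schools)
applicants["cited_gao_outreach"] = applicants["application_source"].eq("GAO outreach")

def share(frame, flag):
    """Percentage of rows in frame where the boolean column flag is true."""
    return round(100 * frame[flag].mean(), 1)

hires = applicants[applicants["hired"]]  # assumes a boolean "hired" column
for label, frame in [("applicants", applicants), ("hires", hires)]:
    print(f"{label} from targeted schools: {share(frame, 'from_targeted_school')}%")
    print(f"{label} citing GAO outreach:   {share(frame, 'cited_gao_outreach')}%")
```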
Our efforts have led to positive outcomes. We have achieved the institutional focus we were seeking by ensuring that our recruitment efforts are both driven by and support organizational needs. We also have gained efficiencies by adopting approaches that allow us to be more agile in responding to changing workforce needs and budget constraints. We continue to be an employer of choice, and we received thousands of applications for our open positions in fiscal year 2010. As an example, we received about 20 applications for each of our GAO Graduate Analyst Intern positions filled in fiscal year 2010. Moreover, representation of African-Americans, Hispanics, and Asian-Americans in the pool of qualified applicants and hires for the intern and entry-level positions filled in fiscal year 2010 exceeded the established benchmarks. Along with attracting and hiring high-quality, diverse staff, we have implemented programs and policies to support new staff once they arrive at GAO. The support for our entry-level staff comes predominantly through their participation in our highly regarded, 2-year, Professional Development Program (PDP). This program provides new employees with the foundations to be successful because it teaches them about our core values, how we do our work, and the standards by which we assess our performance. All entry-level analyst or analyst-related new hires are assigned advisers to assist in their development and provide support, although staff are also strongly encouraged to take an active role in their own career development by crafting Individual Development Plans and assessing their own strengths and growth areas. Staff receive multiple assignments while in the program so they can gain firsthand experience with the wide range of our work. They also receive a rigorous regimen of classroom and on-the-job training to learn about our work processes and requirements. Staff in the PDP program also receive formal feedback every 3 months and twice-yearly performance appraisals that can result in salary increases. In addition, actions of our senior leaders as well as several policies and other programs help our new hires make a successful adjustment to GAO. For example, various agency leaders, including the Comptroller General; Chief Human Capital Officer; Managing Director, Office of Opportunity and Inclusiveness; General Counsel; and our Chief Learning Officer participate in new-hire orientation. In addition, the Comptroller General and others meet with new employees during their first few months to answer any questions about GAO or our relationship with Congress. Other senior managers, including Managing Directors and directors in each GAO team, are encouraged not only to meet with new staff but also to take an active role in their development and day-to-day work environment. We also have policies in place to foster an inclusive and supportive work environment and help all staff balance work and life. For example, we support flexible scheduling, including telework and part-time arrangements, as allowed, given work responsibilities. We also have a student loan repayment program to help eligible staff defray educational costs. PDP staff, like all staff at GAO, can take advantage of a mentoring program to assist staff in becoming effective leaders, managing their work environments, and developing their careers. These programs and policies have helped make GAO a great place to work, as evidenced by our employees’ decisions to stay with GAO and results from our employee feedback surveys. 
GAO’s overall attrition rate has generally been below 10 percent for the last 5 years, and it was 6 percent in fiscal year 2010. About 90 percent of analyst and analyst-related staff hired in fiscal year 2008 are still with us. Feedback from newly hired staff shows high levels of overall job satisfaction, as well as high levels of satisfaction regarding the on-the-job training they receive and staff development opportunities they are provided. Overall employee satisfaction levels contributed to GAO being named as the second best place to work in the federal government in both 2009 and 2010. Chairman Akaka, Ranking Member Johnson, and Members of the Subcommittee, this concludes my prepared remarks. I will be happy to answer any questions you or other members of the Subcommittee may have. For more information about this testimony, please contact Carolyn M. Taylor, Chief Human Capital Officer, at (202) 512-5811 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony included Lori Rectanus, Assistant Director; Harriet Ganson, Assistant Director; Cady Panetta, Senior Analyst; and Susan Aschoff, Senior Communications Analyst. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses GAO’s campus recruitment program. As an organization committed to having a high-performing, diverse workforce, GAO places great importance on attracting, hiring, training, and retaining employees with the skills needed to support GAO’s mission to serve Congress and the American public. GAO has a multi-disciplinary workforce, with most staff having backgrounds in public policy, public administration, law, business, computer science, accounting, social sciences, or economics. While our current and future hiring will be shaped by today’s constrained budget environment, over the past 5 years, on average, GAO has hired about 300 employees each year. The majority of these hires were for analyst and analyst-related positions at the entry level. GAO also has a robust paid student intern program each year. Many of these interns return as entry-level analysts. Having a strong campus recruitment program has played a key role in attracting highly qualified candidates for our permanent and intern positions and building our workforce. In response to a congressional request, the remarks will focus on (1) the strong partnerships developed through our campus recruitment program, (2) recent actions GAO has taken to enhance the program and the positive outcomes GAO has experienced, and (3) the programs and policies we have in place to support new staff. Through our campus recruitment program, we have established ongoing relationships with many colleges and universities across the country. While we advertise all of our new positions publicly, currently we have relationships with about 70 colleges and universities, including private and public colleges and universities, Historically Black Colleges and Universities, Hispanic-serving institutions, and other minority-serving institutions. 
These targeted schools have academic programs that are relevant to our skill needs (e.g., public policy, accounting, business, or computer science) and that prepare students well for success at GAO. Our relationship-building over the years has been based primarily on visiting many of these schools to participate in on-campus events. As part of overall efforts to focus more attention on our strategic human capital management, we have taken proactive steps to improve our recruitment program. Specifically, we (1) established stronger linkages between our recruitment efforts and organizational workforce needs, (2) increased diversity of and enhanced supports for our staff serving as recruiters, and (3) instituted stronger program management and accountability processes. We have seen positive outcomes from these efforts. Along with attracting and hiring high-quality, diverse staff, we have implemented programs and policies to support new staff once they arrive at GAO. The support for our entry-level staff comes predominantly through their participation in our highly regarded, 2-year, Professional Development Program (PDP). This program provides new employees with the foundations to be successful because it teaches them about our core values, how we do our work, and the standards by which we assess our performance. All entry-level analyst or analyst-related new hires are assigned advisers to assist in their development and provide support, although staff are also strongly encouraged to take an active role in their own career development by crafting Individual Development Plans and assessing their own strengths and growth areas. Staff receive multiple assignments while in the program so they can gain firsthand experience with the wide range of our work. They also receive a rigorous regimen of classroom and on-the-job training to learn about our work processes and requirements. Staff in the PDP program also receive formal feedback every 3 months and twice-yearly performance appraisals that can result in salary increases. In addition, actions of our senior leaders as well as several policies and other programs help our new hires make a successful adjustment to GAO. The Comptroller General and others meet with new employees during their first few months to answer any questions about GAO or our relationship with Congress. Other senior managers, including Managing Directors and directors in each GAO team, are encouraged not only to meet with new staff but also to take an active role in their development and day-to-day work environment. We also have policies in place to foster an inclusive and supportive work environment and help all staff balance work and life. We also have a student loan repayment program to help eligible staff defray educational costs. PDP staff, like all staff at GAO, can take advantage of a mentoring program to assist staff in becoming effective leaders, managing their work environments, and developing their careers. These programs and policies have helped make GAO a great place to work. |
The C-17 military transport, which is being produced for the Air Force by the McDonnell Douglas Corporation, is designed to airlift substantial payloads over long ranges without refueling. The Air Force intends the C-17 to be its core airlifter and the cornerstone of its future airlift force. The Congress had appropriated about $20.7 billion and authorized the acquisition of 40 aircraft, through fiscal year 1996, for the C-17 program. The $20.7 billion includes $5.9 billion for research and development, $14.6 billion for procurement, and $170 million for military construction. The Congress has also authorized the Department of Defense (DOD) to enter into a multiyear contract for the acquisition of the remaining 80 aircraft of the 120 aircraft C-17 program. As of July 3, 1996, 27 aircraft have been delivered. The C-17 development contract required the Air Force to conduct a 30-day evaluation of the aircraft’s compliance with RM&A specifications. The evaluation was also used to determine how much of a $12-million incentive fee the contractor was entitled to for meeting those specifications. In October 1992, the Air Force developed a draft RM&A evaluation plan that was closely tailored to the contract specifications. The plan was revised during 1994 and issued in July 1994. The 30-day RM&A evaluation was conducted between July 7 and August 5, 1995. It consisted of a 23-day peacetime segment and a 7-day wartime segment. Aircraft operations, using 12 aircraft, were conducted at 6 U.S. airfields and 1 overseas base. Table 1 shows the number of missions, sorties, and flight hours flown during the evaluation. Missions included logistics (transporting equipment, personnel, and supplies); joint operations (training with equipment and personnel from the Army); and peacetime aircrew training. The wartime logistics missions were designed to simulate long-range movement of equipment, personnel, and supplies to forward operating bases or small austere airfields. Peacetime and wartime missions included aerial refueling; equipment and personnel airdrops; formation flying; low-level operations; and operations into small, austere airfields. The wartime missions ranged from 12.5 to 26 hours, while the peacetime missions ranged from 2 to 20.5 hours. By the end of the evaluation, the C-17 fleet had logged about 13,000 total operational flying hours since initial squadron operations began in 1993. The RM&A evaluation represents about 2 percent of the 100,000 flying hours needed to meet aircraft fleet maturity. The C-17 met or exceeded 10 of the 11 RM&A evaluation contract specification requirements. (See app. I.) However, the RM&A evaluation was less demanding than originally called for in the contract specifications and the 1992 draft RM&A plan. The RM&A evaluation, based on the 1994 revised plan, decreased the ratio of sorties to total flying hours. The decrease weakened the link between the evaluation as executed and the RM&A measurement criteria. In addition, the evaluation was less demanding because the number of airdrops and landings on small austere airfields was decreased and lighter average cargo loads than called for in the contract specifications were carried. The 1992 draft RM&A evaluation plan was based on 25 C-17 mission profiles representing the aircraft’s projected peacetime and wartime usage over a 30,000-hour airframe life included in the development contract. 
In developing its 1992 draft plan, the Air Force conducted extensive analyses and reviews to ensure that the plan adhered to the contractual requirements. In January 1994, as part of the settlement related to the C-17 development program between DOD and the contractor, DOD directed the Air Force and the contractor to revise the C-17 RM&A evaluation plan to make it more operationally realistic. That is, to more realistically mirror the planned use of the aircraft. In addition, because of reliability problems with the C-17, the scheduled November 1995 Defense Acquisition Board was to consider the evaluation results when it decided whether to continue the C-17 program beyond 40 aircraft. As part of the 1994 revisions, the Air Mobility Command changed the mission profiles used in the October 1992 draft plan because they did not represent complete and comprehensive missions. Command officials were also concerned that the 1992 draft plan would not demonstrate the aircraft’s wartime surge utilization rates included in the C-17 Operational Requirements Document. In July 1994, the Air Force issued the revised RM&A evaluation plan. The plan included a wartime scenario representative of a major regional contingency, additional sorties to simulate complete missions, and additional flying hours to increase the aircraft’s utilization rate. The revised mission profiles in the final RM&A plan increased the total number of flying hours, number of aircraft sorties, and average wartime sortie duration, but did not maintain the proportional mix of sorties to flying hours that was based on contract specifications. The impact of these changes was longer duration wartime sorties and a reduced ratio of sorties to flying hours, resulting in less stress on the RM&A aircraft than originally planned. Longer missions with fewer cycles, such as strategic intertheater missions, place less stress on an aircraft and will result in longer aircraft life. The 1992 draft evaluation plan provided for 1,725 total flying hours. The RM&A evaluation increased the level to 2,259 flying hours, an increase of 31 percent over the draft plan. The total number of sorties flown increased 12 percent, but the average sortie time increased 17 percent. Peacetime sorties increased 34 percent, from 248 to 334, but the number of wartime sorties decreased 15 percent, from 211 to 179. Although the change in the duration of the average peacetime sortie was negligible, the average wartime sortie increased by 50 percent, from 3.99 to 5.97 hours. (See app. II.) Because the average wartime sortie significantly increased, the number of sorties in relation to the number of flying hours was less than planned in the 1992 draft RM&A evaluation plan. We estimate that if the average duration of peacetime and wartime aircraft sorties had not changed, the Air Force would have needed to fly 90 additional sorties. This represents a 15-percent increase in the number of sorties necessary to maintain the proportional mix of flying hours to aircraft sorties identified in the 1992 draft evaluation plan. (See app. III.) The ratio of flying hours to sorties specified in the contract and 1992 draft plan mission profiles was based on the profiles used in the development of selected RM&A measurement criteria. In addition to reducing the stress on the RM&A aircraft, changes to the original mission profiles weakened the link between the RM&A evaluation scenarios and the assessment criteria developed using the original profiles. 
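The roughly 90 additional sorties estimated above can be reproduced, at least approximately, from the figures quoted in this section. The sketch below is only an illustrative reconstruction, not the calculation documented in appendix III: it assumes the estimate holds the average wartime sortie at the 1992 draft plan's 3.99 hours while treating the peacetime average, which changed only negligibly, as unchanged.

```python
# Illustrative reconstruction of the ~90-additional-sortie estimate; the
# report's appendix III documents the actual method, which may differ.

peacetime_sorties_flown = 334
wartime_sorties_flown = 179
wartime_avg_flown = 5.97        # hours per wartime sortie, as executed
wartime_avg_plan_1992 = 3.99    # hours per wartime sortie, 1992 draft plan

wartime_hours_flown = wartime_sorties_flown * wartime_avg_flown        # about 1,069 hours
wartime_sorties_needed = wartime_hours_flown / wartime_avg_plan_1992   # about 268 sorties

additional_sorties = wartime_sorties_needed - wartime_sorties_flown    # about 89
total_needed = peacetime_sorties_flown + wartime_sorties_needed        # about 602

print(f"additional sorties needed: {additional_sorties:.0f}")                 # close to the 90 cited
print(f"share of sorties needed:   {additional_sorties / total_needed:.0%}")  # roughly 15 percent
```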
The Air Force used the C-17 lifetime mission profiles in the contract specifications to develop the test profiles in the 1992 draft of the RM&A evaluation plan. These lifetime mission profiles were also used to develop a number of the C-17 RM&A growth curve parameters, such as mission completion success probability, full mission capable rate, and partial mission capable rate. The RM&A growth curves, based on total C-17 fleet flying hours, are the criteria used to measure the C-17 RM&A results. A 1981 report by the contractor noted that the operational profiles flown during the RM&A evaluation must be the same as the profiles used to develop the growth curves. Since the original mission profiles were used as a basis for developing RM&A growth curve criteria, a revision in the profiles required a corresponding adjustment in the respective growth curves. The failure to make such an adjustment affected the use of the growth curves as RM&A measurement criteria. The total number of airdrops and austere airfield landings accomplished in the RM&A evaluation was less than called for in the 1992 draft plan, thus causing less stress and wear on the C-17 aircraft and its subsystems. The total number of airdrops was reduced from 189 to 158, a 16-percent reduction. Wartime airdrops were decreased by 92 percent, from 50 to 4. Air Force officials stated that they significantly decreased the number of wartime airdrops because the 1992 Mobility Requirements Study and the 1995 Mobility Requirements Study Bottom-Up Review Update did not include airdrop as a requirement for a major regional contingency warfighting scenario. The number of C-17 small, austere airfield landings was 16 percent less than called for in the 1992 draft plan—138 instead of 164. According to Air Mobility Command officials, they reduced the planned number of landings from 164 to 148 because they did not believe the additional landings were needed to determine the RM&A evaluation impact; an additional 10 planned landings were not accomplished due to mechanical or environmental problems. Although the type of cargo carried during the RM&A evaluation was realistic, the average weight of the loads was less than half that projected in the mission profiles in the contract specifications. As a result, the aircraft and its subsystems experienced less stress and wear during the evaluation. Based on the mission profiles in the contract specifications, the average cargo weight per mission over the lifetime of the C-17 aircraft is 48,649 pounds. However, the aircraft only carried an average cargo weight of approximately 23,000 pounds during the RM&A evaluation. In addition, the actual average cargo weight carried during landings on small austere airfields was only about 40 percent of the average cargo load projected in the contract specifications (about 18,600 rather than 45,000 pounds). We are currently reviewing the C-17’s performance in Bosnia. This work should provide greater insights into aircraft performance when carrying heavier loads. One reason for revising the 1992 draft RM&A evaluation plan was to demonstrate the wartime surge utilization rate included in the C-17 Operational Requirements Document—that is, operate 15.2 flying hours a day per aircraft for 45 days. Aircraft utilization rate goals were met and slightly exceeded during the RM&A evaluation. However, the evaluation was not intended to provide a statistically valid basis for predicting the C-17’s ability to meet its wartime surge rate. 
It did not demonstrate what a mature C-17 fleet would do during 45 days of wartime surge operations. The evaluation simply demonstrated that high utilization rates could be achieved over a 48-hour period. The actual peacetime utilization rate was 4.3 hours per aircraft. The wartime sustained rate was 12.7 hours, with wartime surge rates of 16.6 and 17.1 hours demonstrated during two 24-hour periods. According to DOD and Air Force officials, it would not be economically feasible to conduct more realistic tests because of the large amount of flying hours and resources required. Moreover, while utilization rates are used as one basis for budgeting for logistics resources and mission planning, a higher utilization rate does not necessarily mean that one aircraft is a better airlifter than another. Simply stated, utilization rate is the number of hours, per aircraft, that a fleet of airplanes is in the air on a given day. More time in the air yields higher utilization rates, more time spent on the ground yields lower utilization rates. The rate is a function of the total airlift system that includes, among other things, aircraft, personnel, airfields, logistics resources, and concepts of operation. All these factors influence the attainment of a utilization rate objective, and most have little or nothing to do with an aircraft’s inherent capability. For example, utilization rates can be increased by longer mission flying times, slower airspeeds, aircrew augmentation, and ramp space availability. Conversely, a faster aircraft flying the same distance will have a lower utilization rate. The Air Force awarded the C-17 contractor $5.91 million of the maximum $12-million incentive fee. However, our review showed that amount was $750,000 more than justified under the contract. The amount should have been reduced because the C-17 aircraft were not full mission capable during the evaluation. (See app. IV for our calculation of the appropriate incentive fee.) According to the C-17 development contract, the RM&A incentive fee was to be based on the degree that the contractor met each of 11 individual RM&A parameter goals. That is, to receive the total $12-million payment, the contractor had to achieve the goals for each of the 11 parameters. If any parameter was not met, the payment was reduced by the amount for that parameter and half of the amounts for the remaining parameters. The contractor was awarded only $5.91 million because the C-17 did not meet the requirement for the built-in-test false indication parameter. In awarding the $5.91-million fee, the Air Force gave the contractor credit for meeting the full mission capable goal. In our opinion, none of the aircraft should have been considered full mission capable during the evaluation. First, the Air Force, based on the results of developmental testing, had restricted the aircraft from executing the formation personnel airdrop mission under operational conditions for safety reasons. This mission was a requirement identified in C-17 operational documents. The restriction on formation personnel airdrop existed because turbulence caused by the aircraft can cause injuries to paratroopers. As a result, the aircraft are not permitted to fly in sufficiently close formation to airdrop the required number of personnel under operationally representative conditions as required by the contract specification. Second, the aircraft were not considered effective for the aeromedical evacuation mission, which was not completely tested during the RM&A evaluation. 
The aircraft were reconfigured to demonstrate this capability, but not all the systems that would be needed to accomplish the mission were used. Initial operational test and evaluation testing, which included the information developed during the RM&A evaluation, identified a number of problems that prevented the aircraft from being considered able to perform the aeromedical evacuation mission. For example, the emergency oxygen supply to patient litters was defective. As a result, the Army has classified the aircraft as not functionally effective for aeromedical evacuation. We recommend that the Secretary of Defense direct the Secretary of the Air Force to initiate action to recover the $750,000 in incentive fee overpayment from the contractor. In commenting on a draft of this report, DOD partially concurred with our findings but did not concur with our recommendation. DOD stated that the 1994 plan was actually more extensive and more operationally representative than the draft 1992 plan because it increased the total flying hours, the number of sorties, wartime sortie duration, aerial refueling, and formation flying missions. However, DOD acknowledged that the 1994 plan reflected (1) a 30-percent reduction in the number of airdrop sorties, (2) a 10-percent reduction in the number of small austere airfield landings, and (3) more than a 50-percent reduction in the average cargo loads carried during the evaluation compared to the 1992 draft plan. DOD indicated that the 1992 draft plan should not be used as a benchmark, rather the contract specification, including the 1994 Settlement Agreement between DOD and the contractor, should have been used. Further, DOD stated that not adjusting the growth curves to account for the changes made in the plan would have had only minimal impact on the results of the evaluation. We did not use the 1992 draft plan as a benchmark. Rather, we pointed out that scenarios in the 1992 draft plan and the growth curves, which are the criteria used to measure the success of the evaluation, were both based on the same factors from the contract specification. The 1994 plan changed the scenarios being flown in the evaluation to make it more operationally realistic and to more closely resemble a major regional contingency. However, the growth curves were not adjusted. DOD provided no documentation to support its assertion that adjusting the growth curves would have had only minimal impact. Moreover, the C-17 contractor has stressed that the profiles flown during the evaluation must be the same as those used to develop the growth curves. DOD acknowledged that the limited wartime surge activities during the RM&A evaluation did not provide a statistical basis for predicting the C-17’s ability to meet its wartime surge rate. DOD said we questioned the value of utilization rates in this report, even though in a prior report we had indicated that utilization rates were a useful statistic when comparing aircraft. Our point in the prior report was that the value of comparing utilization rates was undermined when DOD artificially constrained the utilization rate of one aircraft while using the planned wartime surge utilization rate for another. However, to assure that our position in this report is clear, we have modified the text dealing with utilization rates. 
DOD disagreed with our recommendation to seek reimbursement of $750,000 from the contractor, asserting that the aircraft was properly considered full mission capable as long as all the equipment required for the mission was available and operative. The contract specification defines full mission capable as the aircraft being capable of performing all of its design missions. Since it could not perform the formation personnel airdrop and aeromedical evacuation missions, we believe that the aircraft was incorrectly listed as full mission capable. The aircraft is restricted from performing the formation personnel airdrop mission for safety reasons. While the aircraft were reconfigured to perform the aeromedical evacuation mission, some of the equipment was not tested to ensure it was operating as needed to enable the aircraft to perform the mission. Further, the aircraft was classified as not functionally effective for aeromedical evacuation as a result of initial operational test and evaluation testing because of a number of problems, including equipment problems. We, therefore, continue to believe that the aircraft should not have been considered as full mission capable and the contractor should not have been paid the incentive award fee of $750,000 for meeting the full mission capable objective. To determine the overall performance of the C-17 during the evaluation, we monitored the conduct and coordination of the RM&A evaluation from the 437th Airlift Wing, Charleston Air Force Base, South Carolina. This included the daily RM&A evaluation activities of the exercise as well as related data collection and documentation activities. We also flew on selected C-17 missions and observed ground operations at C-17 operating bases, including North Auxiliary Airfield, South Carolina; Pope Air Force Base, North Carolina; and forward operating bases at Barstow-Daggett Municipal Airport, California, and Bicycle Lake Army Airfield, California. To determine the validity of the test design, mission mix, and operational realism of the exercise, we analyzed the RM&A evaluation plan. Specifically, we reviewed its purpose, structure, preparation, and execution as well as the results of the evaluation. We also interviewed officials from the 14th Airlift Squadron; the 17th Airlift Squadron; the 437th Airlift Wing; the 315th Reserve Airlift Wing; Air Mobility Command Headquarters; the C-17 System Program Office; C-17 Site Activation Task Force; San Antonio Air Logistics Center; Air Force Operational Test and Evaluation Center; Air Force Office of Operational Test and Evaluation; Headquarters U.S. Air Force; U.S. Army Test and Evaluation Command; U.S. Army Test and Experimentation Command; Institute for Defense Analysis; and McDonnell Douglas, the C-17 contractor. We conducted our review from June 1995 to March 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Subcommittee on Defense, Senate Committee on Appropriations; the House Committee on National Security; the Subcommittee on National Security, House Committee on Appropriations; the Secretaries of Defense and the Air Force; and the Director of the Office of Management and Budget. We will also provide copies to other interested parties as requested. If you or your staff have any questions concerning this report, please contact me on (202) 512-4841. 
The major contributors to this report are listed in appendix VI. 
[Appendix I: RM&A evaluation results, July 7 to August 5, 1995. The table's quantitative values could not be recovered from the source text; the 11 contract parameters evaluated were: mission capable (capable to perform at least one mission); full mission capable (capable to perform all missions); mission completion success probability (complete mission objectives without experiencing failure or performance degradation due to equipment problems); mean time between maintenance-inherent, MTBM(i) (mean flight hours between unscheduled, on-equipment, inherent maintenance actions); mean time between maintenance-corrective, MTBM(c) (mean flight hours between unscheduled corrective actions); mean time between removal (mean flying hours between removal of any repairable equipment); maintenance man hours per flying hour (total maintenance hours expended for each flight hour); mean man hours to repair (the mean maintenance man hours required to complete a corrective maintenance action); built-in-test fault detection (percentage of occurrences in which BIT correctly detects a malfunction); built-in-test fault isolation (percentage of occurrences in which BIT correctly isolates a detected malfunction to the failed equipment item); and built-in-test false fault indication (percentage of occurrences in which BIT indicated a malfunction when none existed).]
[Appendixes II and III: comparison of the revised plan (July 1994) with the original plan (October 1992) and the difference between them; the tables' values could not be recovered from the source text.]
Major contributors to this report: Noel J. Lance, Dorian R. Dunbar, and Larry J. Bridges. | Pursuant to a congressional request, GAO reviewed the Air Force's reliability, maintainability, and availability (RM&A) evaluation of the C-17 aircraft, focusing on: (1) RM&A planning, preparation, execution, and results; and (2) whether the evaluation demonstrated the aircraft's wartime surge rate. 
GAO found that: (1) the Air Force reported that the C-17 aircraft met or exceeded 10 of the 11 contract requirements during its RM&A evaluation, but the evaluation was less demanding than originally planned; (2) although the revised RM&A evaluation plan increased total flying hours, the number of sorties, and average wartime sortie duration, it decreased the ratio of sorties to flying hours, which weakened the application of RM&A measurement criteria and lessened the stress on the aircraft; (3) the RM&A evaluation also had fewer airdrops and austere airfield landings than originally planned, and the aircraft flew cargo loads that averaged less than one-half the weight projected in contract specifications; (4) three years of operational testing show that the aircraft generally met RM&A requirements with the exception of those related to built-in-test parameters; (5) the RM&A evaluation was not a statistically valid test for determining C-17 fleet wartime utilization rates because the test's duration was too short; and (6) the incentive fee should have been reduced, since the aircraft could not perform the formation personnel airdrop mission under operational conditions, or the aeromedical evacuation mission. |
USPS’s financial condition has continued to deteriorate in the first 5 months of fiscal year 2009 and USPS expects its financial condition to continue deteriorating for the rest of the fiscal year, including: accelerating declines in mail volume after the first quarter, with a total decline of about 11 billion pieces; and accelerating losses after the first quarter, with a total loss of about $2 billion. USPS has updated its projections for fiscal year 2009, projecting a mail volume decline by a record 22.7 billion pieces (11.2 percent) from fiscal year 2008; a record $6.4 billion net loss, and an unprecedented $1.5 billion cash shortfall (i.e., insufficient cash to cover expenses and obligations), assuming cost-cutting targets of $5.9 billion are achieved; and plans to increase outstanding debt by $3 billion (the annual statutory limit) to $10.2 billion, or two-thirds of the total $15 billion statutory limit. USPS attributes much of its net loss this fiscal year to the economic recession that has resulted in unprecedented declines in mail volume and decreased revenues. Thus far in fiscal year 2009, First-Class Mail volume (e.g., correspondence, bills, payments, and statements) dropped about 9 percent, while Standard Mail volume (primarily advertising) dropped about 15 percent. According to USPS, the housing market downturn, the credit crisis, and lower retail sales have contributed to these volume declines. The financial and housing sectors are major mail users, mailing bills, statements, and advertising such as credit card, mortgage, and home equity solicitations. Volume declines have accelerated for both First-Class Mail and Standard Mail, as shown by quarterly data (see fig. 1) and results for January 2009 (see app. I). In addition, USPS projects its financial difficulties will continue in fiscal year 2010 and result in an even greater cash shortfall at the end of that fiscal year, despite plans for additional cost-cutting and additional borrowing of $3 billion, which would bring USPS’s total debt to $13.2 billion. Thus, USPS’s immediate problem is to generate sufficient cash to remain financially viable in fiscal years 2009 and 2010. USPS reports reducing expenses by $773 million in the first 5 months of fiscal year 2009 (compared to the first 5 months of fiscal year 2008), primarily through reductions of 50 million work hours that USPS made as it adjusted to declining mail volumes and workload. USPS reduced overtime and captured additional work hour savings as it reduced the size of its workforce through attrition and implemented other cost-saving initiatives. However, these savings and added revenue from rate increases were insufficient to fully offset the impact of declines in mail volume and rising costs from cost-of-living allowances (COLA) provided to postal employees covered by union contracts, as well as rising workers’ compensation and retirement costs. Also, although almost 8,500 employees accepted USPS’s early retirement offer during the first quarter of fiscal year 2009, the resulting savings to date have been limited because the effective dates for the majority of these retirements were December 31, 2008 or later. USPS has high overhead (institutional) costs that are hard to change in the short term, including providing 6-day delivery and retail services at close to 37,000 post offices and retail facilities. 
Compensation and benefits for USPS’s workforce, which included about 646,000 career employees and about 98,000 noncareer employees in February 2009, generate close to 80 percent of its costs. Collective bargaining agreements with USPS’s four largest unions include layoff protections and work rules that constrain USPS’s flexibility, as well as semiannual COLAs linked to the Consumer Price Index (CPI) and employee benefits including health and life insurance premium payments. Under these agreements, which expire in 2010 or 2011: USPS paid 85 percent of employee health benefit premiums in fiscal year 2007, about 13 percent more than the share for other federal agencies. USPS’s share is decreasing annually to 81 percent in 2011 or 80 percent in 2012, depending on the agreement. USPS pays 100 percent of employee life insurance premiums, about 67 percent more than most other federal agencies. USPS pays 100 percent of both employee health benefit premiums and life insurance premiums for its Postal Career Executive Service, which included 724 executives in fiscal year 2008. Executives at comparable grades in most other federal agencies do not receive such benefits. USPS’s financial outlook has continued to deteriorate during fiscal year 2009. USPS has increased its estimate of losses in total mail volume in fiscal year 2009 to 22.7 billion pieces (11.2 percent). As a result, USPS now projects a net loss of $6.4 billion for fiscal year 2009, despite increasing its cost-cutting target to $5.9 billion for the fiscal year. Based on these projections, USPS expects cash from operations and borrowing will be insufficient to cover expenses at the end of the fiscal year, with the shortfall projected to be $1.5 billion. This projected net loss and cash shortfall assumes USPS will meet its cost-cutting target and factors in USPS’s plans to borrow $3 billion. USPS’s Chief Financial Officer told us on March 16 that achieving USPS’s target to eliminate 100 million work hours this fiscal year will be critical to achieving its goal of reducing costs by $5.9 billion. He expressed guarded optimism that USPS can reach this ambitious cost-cutting target, explaining that the target is difficult, but achievable. He noted that USPS plans to continue efforts to reduce work hours as it responds to mail volume declines, including reductions in overtime and additional work hour savings achieved through attrition and other initiatives. Additional USPS cost-saving efforts include: Implementing a service-wide hiring freeze and reducing staffing levels for managers and other employees not covered by union agreements by 15 percent at headquarters and 19 percent at the nine Area offices. Evaluating more than 93,000 city delivery carrier routes (more than half of all city routes), eliminating about 2,500 city routes, and adjusting many other city routes, which USPS expects will result in saving about 3.2 million work hours in fiscal year 2009. An agreement between USPS and the National Association of Letter Carriers to expedite evaluation and adjustment of city delivery routes enabled this progress. Consolidating excess capacity in mail processing and transportation networks, including consolidating operations at some mail processing facilities, moving some mail processing employees from the day shift to evening hours, and streamlining transportation. Halting construction starts of new postal facilities. 
To increase its revenues, USPS has increased rates, including a January 2009 increase for competitive products (e.g., Priority Mail and Express Mail), and a planned May 2009 increase for market-dominant products (e.g., First-Class Mail, Standard Mail, Periodicals, and some types of Package Services). USPS has also introduced volume discounts, negotiated service agreements, and added some enhancements to competitive products since the Postal Accountability and Enhancement Act (PAEA) was enacted in 2006. However, these products generated only about 11 percent of USPS’s revenues and covered about 6 percent of its overhead costs in fiscal year 2008. USPS is considering alternatives to try to increase First-Class Mail and Standard Mail revenues. USPS will be challenged to achieve and maintain high-quality service as it works to implement unprecedented cost-cutting measures. USPS recently reported for the first time on the service quality of many market-dominant postal products, thereby making important progress in improving transparency and meeting the requirements of PAEA. USPS has cautioned that limitations have affected the quality of new measurement data and said that it will work to improve data quality. As table 1 shows, on-time delivery of all major types of market-dominant products in the first quarter of fiscal year 2009 fell short of USPS’s targets for the full fiscal year. To put these results into context, the timeliness of mail delivery is an important part of USPS’s mission of providing affordable, high-quality universal postal services on a self-financing basis. USPS has stated that service is at the heart of its brand and the key to increasing its competitiveness and profitability. Action is needed on various options, as no single action will be sufficient for USPS to remain financially viable in the short and long term. The short-term challenge for USPS is to cut costs quickly enough to offset the unprecedented volume and revenue declines so that it does not run out of cash this fiscal year. The long-term challenge is to restructure USPS’s entire operations and networks to reflect the changes in mail volume, mailer preferences, and USPS’s capacity to cover its costs. Based on USPS’s poor financial condition and outlook, the time to take action is relatively short, and USPS’s business model and its ability to remain self-financing may be in jeopardy. A key factor in determining USPS’s financial viability is whether mail volume will rebound sufficiently once the economy improves, as volume has done in the past, so that USPS revenues will cover costs (see fig. 2). As the Postal Regulatory Commission (PRC) noted in December 2008, current pressures from declining volume and revenue do not appear to be abating, but rather, seem to be increasing. During the economic downturn, there has been accelerated diversion of business and individual mail to electronic alternatives, and some mailers have left the mail entirely. An economic recovery may not stimulate the same rebound in mail volume as in the past, because of changes in how people communicate and use the mail. Specifically: First-Class Mail volume has declined in recent years and is expected to decline for the foreseeable future as businesses, nonprofit organizations, governments, and households continue to move to electronic alternatives, such as Internet bill payment, automatic deduction, and direct deposit. 
USPS’s analysis has found that electronic diversion is associated with the growing adoption of broadband technology. As PRC reported, the availability of alternatives to mail eventually impacts mail volume. It is unclear whether Standard Mail will grow with an economic recovery. Standard Mail now faces growing competition from electronic alternatives, such as Internet-based search engine marketing, e-mail offers, and advertisements on Web sites. The average rate increase for Standard Mail is limited by the price cap to the increase in the Consumer Price Index, but future rate increases will likely have some impact on volume. Options to assist USPS through its short-term difficulties—some of which would require congressional action—include: Reduce USPS payments for retiree health benefits for 8 years: USPS has proposed that Congress change the statutory obligation to pay retiree health benefits premiums for current retirees from USPS to the Postal Service Retiree Health Benefits Fund (Fund) for the next 8 years. This proposal would also reduce USPS’s expenses through 2016 by an estimated $25 billion—with $2 billion in fiscal year 2009, $2.3 billion in fiscal year 2010, and the remaining annual expenses increasing from $2.6 billion to $4.2 billion over the remaining 6 years. This proposal is poorly suited to alleviating USPS’s immediate projected cash shortfalls. In addition, this proposal would reduce the Fund balance by an estimated $32 billion (including interest charges) by 2016, so that in 2017, the remaining current unfunded obligation would be an estimated $75 billion (rather than $43 billion) to be amortized for future payments. This large obligation would create the risk that USPS would have difficulty making future payments, particularly considering mail volume trends and the impact of payments on postal rates if volume declines continue. USPS’s proposal also would shift responsibility for these benefits from current to future rate payers. Reduce USPS payments for retiree health benefits for 2 years: Another option would be for Congress to revise USPS’s statutory obligation so that the Fund, not USPS, would pay for current retiree health benefits for only 2 years (fiscal years 2009 and 2010), which would provide USPS with $4.3 billion in relief. We support this option because it would have much less impact on the Fund and it would allow Congress to revisit USPS’s financial condition to determine if further relief is needed and review actions USPS has taken in 2009 and 2010 to improve its viability. Relief from retiree health premium costs is no substitute for aggressive USPS action—beyond current efforts—to dramatically reduce costs and improve efficiency. It is not clear that either of these options would be sufficient, because USPS projects it will operate on a thin margin. This means that even if such relief is provided, a cash shortfall could develop in fiscal year 2009 and/or 2010 if USPS does not meet its ambitious cost-cutting goals, mail volume declines more than projected, or unexpected costs materialize, such as unexpected increases in fuel costs. One option that would not require congressional action would be for USPS and its unions to continue their dialogue and agree on ways to achieve additional short-term savings, such as by modifying rules to facilitate reducing work hours. Such labor-management cooperation is critical to USPS’s ability to make immediate changes in order to achieve cost reductions. 
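The approximately $25 billion attributed above to the 8-year proposal can be roughly cross-checked by summing the annual amounts quoted in the testimony. The sketch below assumes the six annual amounts for fiscal years 2011 through 2016 ramp evenly from $2.6 billion to $4.2 billion; the testimony gives only those endpoints, so the intermediate values are illustrative.

```python
# Rough cross-check of the ~$25 billion, 8-year retiree health relief figure.
# Assumes the fiscal year 2011-2016 amounts ramp evenly from $2.6B to $4.2B;
# only the endpoints are given in the testimony.

relief = [2.0, 2.3]                            # fiscal years 2009 and 2010, in $ billions
step = (4.2 - 2.6) / 5                         # equal increments across the six later years
relief += [2.6 + step * i for i in range(6)]   # fiscal years 2011 through 2016

print("assumed annual relief ($B):", [round(x, 2) for x in relief])
print(f"approximate 8-year total: ${sum(relief):.1f} billion")   # about $24.7 billion
```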
Other available options, based on statutory provisions, could include (1) seeking PRC approval for an exigent rate increase and (2) increasing USPS’s annual borrowing limit. First, USPS could request PRC approval for an exigent rate increase that would increase rates for market-dominant classes of mail above the statutory price cap. Mailers have voiced strong concern about the potential impact of such a rate increase on their businesses. In our view, this option should be a last resort. It could be self-defeating for USPS in both the short and long term because it could increase incentives for mailers to further reduce their use of the mail. Second, Congress could temporarily raise the statutory $3 billion annual limit on increases in USPS’s debt, which would provide USPS with funding if needed. This option would be preferable to an exigent rate increase. However, it is unclear when USPS would repay any added debt, which would quicken USPS’s movement toward its $15 billion statutory debt limit. In our view, this option should be regarded only as an emergency stop-gap measure. Although USPS is taking unprecedented actions to cut costs, comprehensive action beyond USPS’s current efforts is urgently needed to maintain financial viability. Given the growing gap between revenues and expenses, USPS’s business model and its ability to remain self-financing may be in jeopardy. Progress in many areas will be needed so that USPS can cover operating expenses and maintain and modernize its infrastructure. I want to emphasize that action is urgently needed to streamline USPS’s costs in two areas where it has been particularly difficult—compensation and benefits, and the mail processing and retail networks. We have reported for many years that USPS needs to right size its workforce and realign its network of mail processing and retail facilities. USPS has made some progress, particularly by reducing its workforce by more than 100,000 employees since 2000 with no layoffs and by closing some smaller mail processing facilities. Yet, as USPS recognizes, more needs to be done. USPS no longer has sufficient revenue to cover the cost of maintaining its large network of processing and retail facilities. Closing postal facilities would be controversial, but is necessary to streamline costs. In PAEA, Congress encouraged USPS to expeditiously move forward in its streamlining efforts, and its continued support would be helpful to facilitate progress in this area. We recommended that USPS enhance the transparency and strengthen the accountability of its realignment efforts to assure stakeholders that realignment would be implemented fairly, preserve access to postal services, and achieve the desired results. USPS has taken steps to address our recommendations and, thus, should be positioned to take action. In addition, it is imperative for USPS and Congress to take informed action to review mail use, what future postal services will be needed, and what operational and statutory options are available to provide those services. Key areas with options include: Universal Postal Service: A recently completed PRC study identified options for universal service and trade-offs involving quality and costs. When USPS asked Congress in January 2009 to eliminate the long-standing statutory provision mandating 6-day delivery, it provided little information on where it would reduce delivery frequency, and on the potential impact on cost, mail volume, revenue, and mail users. 
Because the number of delivery days is fundamental to universal service, Congress should have more complete information before it considers any statutory changes in this area. A mechanism to obtain such information would be for USPS to request an advisory opinion from PRC, which would lead to a public proceeding that could generate information on USPS’s request and stakeholder input. USPS workforce costs: USPS’s ability to control wage and benefit costs will be critical to cost-saving efforts. One option would be for USPS and its unions to negotiate changes to wages and benefits that apply to employees covered by collective bargaining agreements. USPS will begin negotiating next year with two of its major unions, whose agreements will expire in November 2010, and the following year with its other two major unions, whose agreements expire in November 2011. Retail postal service: USPS has alternatives to provide lower-cost retail services than in traditional post offices, such as contract postal facilities, carrier pick-up of packages, and selling stamps at supermarkets, drug stores, and by telephone, mail, and the Internet. USPS’s retail network has been largely static, despite the expansion of alternatives, population shifts, and changes in mailing behavior. We have reported that USPS could close unnecessary retail facilities and lower its network costs. It is important to note that large retail facilities—generally located in large urban areas where more postal retail alternatives are available—generate much higher costs than the smallest rural facilities, and closing them may, therefore, potentially generate more cost savings. Mail processing: USPS has several options for realigning its mail processing operations to eliminate growing excess capacity and associated costs, but has taken only limited action. In 2005, we reported that, according to USPS officials, declining mail volume, worksharing, and the evolution of mail processing operations from manual to automated equipment have led to excess capacity that has impeded efficiency gains. USPS has terminated operations at 58 Airport Mail Centers in recent years, but has closed only 1 of over 400 major mail processing facilities. As USPS consolidates its operations, it needs to consider how it can best use its facilities and whether it is cost effective to retain ones that are underutilized, and take the actions necessary to right size its network. Transportation: Various options exist for reducing USPS’s transportation costs beyond its current streamlining efforts. For example, a joint USPS-mailer workgroup has identified a destination entry discount for First-Class Mail as an option that could reduce the need for USPS to provide long-distance transportation and some mail processing. USPS could publicly provide its analysis of the potential savings and the impact of such a discount. Delivery: USPS has various options for reducing delivery costs by continuing to realign delivery routes, implementing efficiency initiatives, and making more fundamental changes to delivery operations, such as delivering mail to more cost-effective receptacles, including cluster boxes. USPS’s business model: We will discuss options to change USPS’s business model in a report that PAEA requires us to issue by December 2011. 
Such information is essential to help congressional policymakers understand USPS actions and plans to maintain its financial viability in both the short and long term, particularly in view of proposals to give USPS financial relief from some retiree health benefit costs. Recently, USPS took steps in this direction by providing monthly financial information to the PRC, which then made this information publicly available. We asked USPS to comment on a draft of our testimony. USPS generally agreed with the accuracy of our statement and provided technical comments, which we incorporated where appropriate. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or the Members of the Subcommittee may have. For further information regarding this statement, please contact Phillip Herr at (202) 512-2834 or [email protected]. Individuals who made key contributions to this statement include Shirley Abel, Teresa Anderson, David Hooper, Kenneth John, Emily Larson, Joshua Ormond, Susan Ragland, and Crystal Wesco. (Appendix table of volume and revenue data, in thousands, not reproduced here.) Market-dominant products primarily include First-Class Mail—domestic and international single-piece mail (e.g., bill payments and letters) and domestic bulk mail (e.g., bills and advertising); Standard Mail (mainly bulk advertising and direct mail solicitations); periodicals (mainly magazines and local newspapers); and some types of package services (primarily single-piece Parcel Post, Media Mail, library mail, and bound printed matter). Market-dominant revenues also include revenues from services such as post office boxes and Delivery Confirmation. Competitive products primarily include Express Mail; Priority Mail; bulk Parcel Post, which the Postal Service calls Parcel Select; and bulk international mail. The Postal Service did not report separate data for each competitive product, which the Postal Service considers to be proprietary. | When Congress passed the Postal Accountability and Enhancement Act in December 2006, the U.S. Postal Service (USPS) had just completed fiscal year 2006 with its largest mail volume ever--213 billion pieces of mail and a net income of $900 million. Two years later, USPS's financial condition has deteriorated. Mail volume declined by a record 9.5 billion pieces (4.5 percent) in fiscal year 2008, leading to a loss of $2.8 billion--the second largest since 1971. According to USPS, this was largely due to declines in the economy, especially in the financial and housing sectors, as well as shifts in transactions, messages, and advertising from mail to electronic alternatives. Declining mail volume flattened revenues despite rate increases, while USPS's cost-cutting efforts were insufficient to offset the impact of the volume decline and rising costs for fuel and for postal employees' cost-of-living allowances. USPS's initial fiscal year 2009 budget anticipated that the turmoil in the economy would result in further mail volume decline and a loss of $3.0 billion.
This testimony focuses on (1) USPS's financial condition and outlook and (2) options and actions for USPS to remain financially viable in the short and long term. It is based on GAO's past work and updated postal financial information. We asked USPS for comments on our statement. USPS generally agreed with the accuracy of our statement and provided technical comments, which we incorporated where appropriate. USPS's financial condition has continued to deteriorate in the first 5 months of fiscal year 2009, and USPS expects its financial condition to continue deteriorating for the rest of the fiscal year. Key results include (1) accelerating declines in mail volume after the first quarter, with a total decline of about 11 billion pieces, and (2) accelerating losses after the first quarter, with a total loss of about $2 billion. USPS's updated fiscal year 2009 projections suggest the magnitude of the challenges it faces: (1) a record decline in mail volume of 22.7 billion pieces (11.2 percent), (2) a record $6.4 billion net loss and an unprecedented cash shortfall of $1.5 billion, assuming that cost-cutting targets of $5.9 billion are achieved, and (3) plans to increase outstanding debt by $3 billion (the annual statutory limit) to $10.2 billion, or two-thirds of the $15 billion statutory limit. In addition, USPS projects its financial difficulties will continue in fiscal year 2010 and result in an even greater cash shortfall. USPS's most immediate challenge is to dramatically reduce costs fast enough to meet its financial obligations. USPS has proposed that Congress give it financial relief of $25 billion over 8 years by changing the statutory mandate for funding its retiree health benefits. GAO recognizes the need for immediate financial relief, but prefers 2-year relief so that Congress can determine what further actions are needed. It is not clear that either option would be sufficient because USPS projects it will operate on a thin margin, risking a larger cash shortfall if it does not meet its ambitious cost-cutting goals, if mail volume declines more than projected, or if unexpected costs, such as fuel cost increases, materialize. Although USPS is taking unprecedented actions to cut costs, comprehensive action beyond USPS's current efforts is urgently needed to maintain financial viability. Given the growing gap between revenues and expenses, USPS's business model and its ability to remain self-financing may be in jeopardy. Action is needed to streamline costs in two difficult areas: (1) compensation and benefits, which generate close to 80 percent of costs, and (2) mail processing and retail networks, which have growing excess capacity. Closing postal facilities is controversial but necessary because the declining mail volume and growing deficits indicate that USPS cannot afford to maintain such an extensive network. Information will be critical to determine what other actions are needed, including options to cut costs as well as their impact on mail volume and mail users. It is also imperative to review mail use, what future postal services will be needed, and what options are available in many areas, including universal service, workforce costs, retail services, mail processing, delivery, transportation, and USPS's business model.
The Capitol Visitor Center (CVC) was the largest construction project on the Capitol Grounds in over 140 years. It was built to provide greater security for all persons working in or visiting the U.S. Capitol and an enhanced educational experience for visitors to learn about the Congress and the Capitol Building. The construction contract for structural and excavation work was awarded in June 2002. Subsequently, we reported on delays in the construction of the CVC and uncertainties in the estimated cost of the project until it opened in December 2008, at a total cost of $600 million, well above the original project budget of $265 million. The House report accompanying the fiscal year 2008 Legislative Branch Appropriations bill noted the long-standing and continuing lapses in AOC management practices, including the delays and escalating costs of the CVC; cost overruns and time delays on other projects; and with regard to the Capitol Power Plant utility tunnels, complete management breakdown, failure of appropriate oversight responsibilities, and total disregard for the human element. That Legislative Branch Appropriations bill included language that would have created an OIG for AOC to provide an independent office within AOC, and the AOC IG Act establishing the OIG was enacted on December 26, 2007, as part of the Consolidated Appropriations Act for fiscal year 2008. The Senate report that accompanied this legislation called for a statutory inspector general for AOC to promote integrity and efficiency in AOC programs and to detect and prevent fraud, waste, and abuse. The related House report directed that the new IG ensure that AOC is utilizing appropriate management practices and budgetary and accounting standards within the limitations of all laws applicable to AOC operations and auditing and reporting semiannually on management, operational issues, and other issues as outlined in an annual audit plan. The first statutory AOC IG was the former Assistant IG for Audit at the Corporation for National and Community Service and was appointed by the Architect in August 2008. After the first IG’s retirement in August 2013, the Architect appointed AOC’s Deputy General Counsel to become the current IG in September 2013. In addition to the IG, at the end of fiscal year 2015, the AOC OIG consisted of a Deputy IG (who also serves as the OIG’s legal counsel), Assistant IG for Audit, Assistant IG for Investigations, two auditors, two investigators, a management analyst, and an administrative officer. (See fig. 1.) AOC is responsible for the maintenance, renovation, and construction of the Capitol Hill buildings and grounds covering 17.4 million square feet of facilities and more than 587 acres, and the AOC OIG is responsible for the audit and investigative oversight of AOC. AOC carries out its mission through 10 jurisdictions with specific program responsibilities and Capitol Construction and Operations, which is made up of nine central offices, including the independent OIG. (See fig. 2.) AOC’s 10 jurisdictions manage AOC’s programs for the maintenance, operations, and preservation of the grounds and structures across Capitol Hill. (See app. II.) AOC’s Capitol Construction and Operations offices have responsibilities that range from overall planning and project management to financial and human capital in support of AOC’s mission and programs. (See app. III.) AOC’s responsibilities include the construction and restoration of key facilities. 
AOC classifies its largest projects for construction and restoration—those with an expected cost of over $50 million—as mega projects. Mega projects are designed by external architecture and engineering firms and constructed by external construction firms under major contracts managed by AOC. AOC’s performance and accountability reports highlight mega projects, including the Capitol Dome restoration and the Cannon House Office Building renewal. According to AOC officials, at the end of fiscal year 2015 AOC had four ongoing mega projects estimated to cost in total approximately $1.1 billion. (See app. IV.) According to AOC officials, the Cannon House Office Building renewal project is estimated to cost approximately $752.7 million, and the Capitol Dome restoration project is estimated to cost approximately $96.7 million. In addition, AOC estimated that the Refrigeration Plant Revitalization and the Capitol Power Plant Cogeneration projects will cost approximately $183.2 million and $116.6 million, respectively. As established by statute in 2007, the stated purposes of the AOC OIG are to (1) conduct and supervise audits and investigations relating to AOC; (2) provide leadership and coordination and recommend policies to promote economy, efficiency, and effectiveness; and (3) provide a means of keeping the Architect and the Congress fully and currently informed about problems and deficiencies related to the administration of programs and operations of AOC. The AOC IG is appointed by the Architect, in consultation with the Inspectors General of the Library of Congress, the Government Publishing Office, the U.S. Capitol Police, and GAO, and is to be selected without regard to political affiliation and solely based on integrity and demonstrated ability in accounting, auditing, financial analysis, law, management analysis, public administration, or investigations. The IG reports to and is under the general supervision of the Architect, who has no authority to prevent or prohibit the IG from initiating or completing any audit or investigation, issuing any subpoena during the course of an audit or investigation, issuing any report, or carrying out any other statutory duty or responsibility of the IG. In addition, the IG may be removed from office by the Architect, who must promptly communicate in writing the reasons for such removal to each house of the Congress. Subject to the laws governing selection, appointment, and employment by AOC, generally, the IG is authorized to select, appoint, and employ such officers and employees, including consultants, necessary to carry out the functions, powers, and duties of the OIG. The AOC IG Act also incorporates numerous provisions of the Inspector General Act of 1978 (IG Act), as amended, imposing responsibilities and providing authorities common among federal OIGs. Among those responsibilities, the IG must comply with Government Auditing Standards, which requires, among other things, that in all matters relating to audit work, the audit organization and the individual auditor must be free from personal, external, and organizational impairments to independence and must avoid the appearance of such impairments. The IG must also take appropriate steps to ensure that any work performed by nonfederal auditors complies with these standards. In addition, whenever the IG has reasonable grounds to believe there has been a violation of federal criminal law, it is to be reported expeditiously to the Attorney General. 
The IG is also required to prepare semiannual reports to inform the Architect and the Congress of any significant problems found and recommendations for corrective action made by the OIG during the reporting period. The IG's semiannual reports are to contain updates on significant recommendations from previous reports that have not been completed; a summary of matters referred to prosecutors and any prosecutions that occurred; and monetary accomplishments for the reporting period, including the dollar value of questioned costs and the dollar value of recommendations that funds be put to better use. AOC Order 40-1, Order on the Office of Inspector General Authority and Responsibilities and Architect of the Capitol Employee Responsibilities, effective October 12, 2010, sets out the authority and responsibilities of the AOC OIG in carrying out independent audits and investigations and promoting the economy, efficiency, and effectiveness of AOC. The requirements are mostly derived from the statutory requirements and authorities of the AOC IG. These include the IG's direct access to the Architect, as well as access to all records, reports, audits, reviews, documents, papers, or other material available to AOC. The order also specifies the IG's authority to undertake such investigations and reports as are, in the judgment of the IG, necessary or desirable; request information and assistance from any federal, state, or local governmental agency; and administer or take from any person an oath, affirmation, or affidavit, when necessary in performing OIG functions. In addition, the Council of the Inspectors General on Integrity and Efficiency's (CIGIE) Quality Standards for Federal Offices of Inspector General provides requirements for OIGs when developing an appropriate planning process and for managing, operating, and conducting oversight, including audits and investigations. AOC Order 40-1 includes the requirement for OIG audits to comply with Government Auditing Standards and the responsibility to follow an audit process, starting with an audit plan at the beginning of each fiscal year. According to the order, the plans are to include audits based on risk and materiality (significance or importance), legislatively mandated audits, requests from the Congress and AOC, or other work selected for audit or evaluation. The audit process is to include written notification to AOC followed by an entrance conference with pertinent AOC representatives before the audit begins. After the audit work is complete, an exit conference is to be held with the AOC point of contact and AOC management to discuss the results of the audit. The audit team is to issue a draft audit report to AOC management for comment and then issue a final audit report to present the results of the audit with the comments of AOC management. The AOC OIG has also developed written policies in its Audit Policies Manual and its Audit Standard Operating Procedures Manual. Together, these two manuals provide general auditing policies as well as specific audit procedures for planning and conducting audits and reporting the results of audits to AOC and the Congress. They also provide guidelines for using nonfederal auditors to audit AOC's annual financial statements. AOC Order 40-1 includes the requirement for OIG investigations to be conducted in accordance with CIGIE Quality Standards for Investigations. The order states generally that the OIG is responsible for conducting and supervising investigations to find, remedy, or prevent fraud, waste, and abuse.
It also lists examples of subjects that the OIG investigates, including fraud, waste, or abuse; bribes, kickbacks, and bid rigging; conflicts of interest; credit or purchase card fraud; forgery or thefts; improper use of AOC resources or property; violations of laws, rules, or regulations; and reprisal for reporting allegations of fraud, waste, or abuse to the OIG. The order also specifies that individual discrimination or retaliation complaints, individual employee benefits and compensation issues, individual grievances, individual workplace conflicts or matters covered in the collective bargaining agreement, or complaints regarding workplace safety and health or environmental issues are to be referred to AOC offices rather than to the OIG. Policies and procedures for OIG investigations are contained in the Investigative Program Manual, which outlines the investigative conduct policy, coordination procedures, and other policies on the administration of investigations. These OIG policies also require that investigators adhere to all applicable CIGIE Quality Standards for Investigations. The CIGIE investigative standards include requirements for the qualifications of investigative personnel and the independence of investigative organizations and investigators from personal, external, and organizational impairments. In addition, the standards require the use of due professional care in the thoroughness of investigations, the application of legal requirements, and the use of appropriate investigative techniques. Due professional care also requires investigators to be impartial and objective and to provide accurate and complete documentation to support investigative reports. The CIGIE investigative standards also provide guidance on conducting investigations; using investigative plans with organizational and case-specific priorities; accurately, completely, and objectively reporting all relevant aspects of investigations; and managing investigative information. Funding for the AOC OIG is included in the appropriation available for AOC's general administration. The OIG submits its budget request for review to the AOC budget office, which then includes it as part of AOC's overall budget request submitted to the Congress. The OIG's budget grew from approximately $2.0 million in fiscal year 2012 to approximately $2.7 million in fiscal year 2015, while the OIG's staffing remained relatively constant, as shown in table 1. The OIG's budget increase occurred primarily in fiscal year 2015, when the OIG was provided additional funding for support services. According to the OIG, the additional funds were intended to be used to hire individuals with engineering expertise to assist in the audit oversight of the Capitol Dome restoration and Cannon House Office Building renewal projects, but instead the OIG returned $343,501 of the funds to AOC when the engineering expertise was not obtained. The OIG ultimately hired a civil engineer in fiscal year 2016. The OIG's strategic and annual audit plans for the 4-year period we reviewed did not include an assessment of AOC's risks and did not establish priorities for providing audit reports. In addition, the current IG eliminated all criminal investigator positions, leaving the OIG investigators with no responsibility to complete investigations of allegations of criminal wrongdoing, which the OIG now refers to the U.S. Capitol Police (USCP) for investigation.
Also, although the OIG is responsible for addressing fraud, waste, and abuse under AOC policy, we found instances where other AOC offices investigated such allegations of wrongdoing within their own offices, despite the potential for conflicts of interest. The OIG’s lack of adequate audit planning, lack of criminal investigators, and reliance on AOC program offices to conduct investigations of alleged wrongdoing have contributed to a significant decline in its audit and investigative reports and reported monetary accomplishments. As a result, AOC management and the Congress may not be fully and currently informed about potential problems and deficiencies relating to the administration of programs and operations of AOC. The AOC IG Act states that the OIG’s primary purposes are to conduct and supervise audits and investigations; promote economy, efficiency, and effectiveness; and keep AOC and the Congress fully and currently informed about problems and deficiencies through semiannual reports and other means. In addition, the act requires that OIG audits comply with Government Auditing Standards, which requires audit reports to communicate the results of audits. CIGIE’s Quality Standards for Federal Offices of Inspector General provides requirements for OIGs when developing an appropriate planning process and for managing, operating, and conducting oversight, including audits. The CIGIE standards direct OIGs to develop a methodology and process for identifying and prioritizing agency programs and operations as potential subjects for audits. In addition, the standards state that because resources are rarely sufficient to meet all requirements, audit planning should include an assessment of risk and an assignment of priorities to help ensure the optimum use of OIG resources. The CIGIE standards also provide guidance for OIGs on maintaining a planning system that assesses the nature, scope, and inherent risks of agency programs and operations. According to these standards, the annual performance planning process is to identify the activities to audit and investigate, inspect, or evaluate and translate these priorities into outcome-related goals, objectives, and performance measures. The OIG’s Audit Standard Operating Procedures Manual requires a risk analysis—using input received from AOC management and the Congress, as well as audit leads developed during the past year—to identify the most viable audits based on risk and potential payback. The OIG’s annual audit plans for fiscal years 2012 through 2015 included the annual financial statement audits performed by an outside accounting firm; audits from prior years that were not completed, such as those of the Capitol Dome restoration and Cannon House Office Building reconstruction mega projects; and new planned audits, such as the audit of the Capitol Power Plant Cogeneration. However, the OIG’s strategic and annual audit plans for the 4-year period we reviewed did not include an assessment of risk. In addition, neither the OIG’s plans nor its policies included the assignment of priorities to help ensure the effective use of OIG resources in providing audit reports. While the OIG’s policies included CIGIE’s standards for investigations, the OIG has not adopted CIGIE’s Quality Standards for Federal Offices of Inspector General or developed comparable policies and procedures on planning that include both risk assessment and assigned priorities. 
In interviews, the IG explained that instead of formal plans with an assessment of risk and an assignment of priorities, the OIG relied on a process of "continuous review," which he defined as an effort to alert AOC and the Congress to cost overruns, delays, and other contract management issues as they occurred. In large part because of the OIG's insufficient audit planning, the OIG provided no audit reports of AOC's mega projects, which have an estimated combined cost of over $1.1 billion, and provided limited audit oversight of AOC's jurisdictions and offices during the 4-year period we reviewed. According to OIG officials, the OIG staff performed continuous review by attending AOC's weekly progress meetings for both the Capitol Dome restoration and Cannon House Office Building renewal mega projects. However, without audit reports developed from plans based on an independent assessment of AOC's risks and with assigned priorities, the OIG provided little assurance that AOC's most critical programs and contracts received adequate oversight, that audit resources were being applied to the most critical areas, and that the OIG's efforts would fully inform AOC management and the Congress of any problems or deficiencies. In addition, the lack of sufficient planning contributed to the minimal audit coverage of the jurisdictions and offices responsible for providing AOC's programs and support services during fiscal years 2012 through 2015. To illustrate, the fiscal year 2015 audit report of CVC's restaurant operations was the only OIG audit of a program provided by an AOC jurisdiction during the 4-year period we reviewed. However, this audit report was not a result of the OIG's annual audit plan for fiscal year 2015, but rather was provided in response to a request from CVC management. Additional audit reports issued by the OIG focused on procurement, human capital, and other support services provided by three of AOC's central offices in Capitol Construction and Operations during the 4-year period. (See table 2.) Because of the current IG's emphasis on performing continuous reviews rather than issuing planned audit reports, the audit accomplishments reported by the AOC OIG have declined significantly in recent years. To illustrate, under the prior IG, the OIG completed a total of nine audit reports and two evaluations in fiscal years 2012 and 2013, with reported monetary accomplishments of approximately $324,000. In fiscal years 2014 and 2015, the OIG, under the current IG, completed five audit reports and three other reviews, with reported monetary accomplishments of approximately $54,000, or approximately 14 percent of the 4-year total. (See table 3.) As a result of changes by the current IG to eliminate criminal investigator positions, the OIG no longer has staff with the explicit responsibility to complete investigations of potential criminal wrongdoing and refers such cases to USCP for investigation. Also, we found instances where the OIG referred certain allegations of wrongdoing involving potential fraud, waste, and abuse to the AOC program offices for investigation. Neither USCP nor AOC program offices are subject to CIGIE's Quality Standards for Investigations when performing investigations or to AOC IG Act requirements for protecting complainants' identities. These changes to investigative operations by the IG have contributed in part to a significant decline in the investigative reports and monetary accomplishments reported by the OIG.
In addition, these changes have increased the risk that (1) criminal and other improper activities may not be detected and (2) potential cases of fraud, waste, and abuse may not be fully and independently investigated and may not be reported to AOC management and the Congress. The OIG's policies and procedures provide that the OIG receives allegations of fraud, waste, or abuse and determines whether to initiate investigations, which are to be conducted in accordance with CIGIE's Quality Standards for Investigations. In addition, the AOC IG Act states that in carrying out the duties and responsibilities established under the act, the IG shall report expeditiously to the Attorney General whenever the IG has reasonable grounds to believe there has been a violation of federal criminal law. The act also provides that the Attorney General may grant the AOC IG the necessary authority for law enforcement. The prior AOC IG obtained law enforcement authority for the OIG investigators through special deputation as authorized by the Attorney General, which allowed investigators to seek and execute warrants, make arrests, and carry firearms when performing their investigative duties. After the current IG rescinded the OIG investigators' law enforcement authority in January 2014, the IG stated, in the OIG's semiannual report for the first half of fiscal year 2014, that the carrying of firearms created AOC employee anxiety and was unnecessary to fulfill statutory OIG obligations. Nevertheless, the OIG investigators had passed their most recent peer review, which included a review by another OIG of the proper use of firearms. In addition, the IG was unable to cite any specific incident in which the OIG investigators had used their law enforcement authorities inappropriately. The IG informed us that in his prior position as AOC's Deputy General Counsel, he had become aware of AOC management's concerns with the OIG investigators' authority to carry firearms, and for this reason, he rescinded the OIG's law enforcement authority. Under the prior IG, the OIG completed a job hazard analysis that was provided to the AOC Director of Safety, Fire, and Environmental Programs in June 2011. The OIG analysis concluded that OIG criminal investigators experienced safety hazards when conducting investigations of AOC personnel, programs, contracts, or funds off the Capitol complex. The hazards included conducting surveillance of suspects, serving subpoenas, collecting physical evidence, and working with other law enforcement officials. The carrying of firearms was included among the personal protective equipment necessary to respond to these hazards. Having removed the investigators' authority to carry firearms, the current IG, rather than place them in potentially unsafe conditions without that protection, removed their responsibility to complete criminal investigations altogether by revising their position descriptions. The IG stated in the second fiscal year 2014 semiannual report that carrying out law enforcement duties is unnecessary to the OIG's missions to serve AOC and the Congress. As a result, the OIG's criminal investigators were reclassified from Office of Personnel Management (OPM) Criminal Investigating Series 1811, with responsibilities for criminal investigations, to OPM General Investigating Series 1810, which has no specified responsibilities for criminal investigations.
Because it has no criminal investigators to complete investigations, the OIG refers allegations of potential criminal wrongdoing to USCP for investigation. According to USCP officials, USCP is the only law enforcement agency with primary law enforcement authority for the U.S. Capitol buildings and grounds. In addition, USCP's statutory authority extends to the protection of congressional members, officers, visitors, and facilities, which includes performing criminal investigations relating to AOC and other Capitol Hill entities. USCP officials confirmed that USCP does not have access to AOC's internal systems and therefore cannot develop leads for proactive criminal investigations of fraud in AOC's program management and contracting areas unless it is granted such access. Instead, USCP investigations are focused on criminal allegations referred to it by the OIG. USCP officials stated that USCP personnel have extensive training in performing criminal investigations. When they conduct investigations, OIGs are required to follow CIGIE's Quality Standards for Investigations, which contain, among other things, explicit requirements for investigator independence, objectivity, and due professional care. As mentioned previously, the AOC OIG has incorporated these investigative standards into its policies and procedures. Although USCP is not subject to CIGIE's investigative standards, USCP officials stated that the requirements in CIGIE's investigative standards apply to all of its criminal investigations and are integral to USCP directives, processes, policies, and procedures. In addition to the IG's elimination of the OIG investigators' responsibility to complete criminal investigations, the AOC OIG also changed its investigative operations with respect to noncriminal investigations. According to OIG investigators, shortly after the current IG took office, they were told in a meeting with the IG that senior AOC leadership would need to build its own investigative capabilities because the OIG would no longer handle many issues it previously investigated. Also, in the semiannual report for the period ending fiscal year 2014, the IG emphasized that the OIG would defer to AOC supporting offices in the absence of reasonable cause to believe that complaints involved fraud, waste, or abuse of government resources. Consequently, the OIG may refer allegations of noncriminal wrongdoing to AOC's program offices for investigation and rely on the investigative capabilities those offices have developed. The review of allegations by other AOC offices is often appropriate when administrative actions can address the issues without OIG assistance. For example, AOC's policies specify that individual complaints of discrimination or retaliation, individual employee benefits and compensation issues, individual grievances, individual workplace conflicts, matters covered in the collective bargaining agreement, or complaints regarding workplace safety and health or environmental issues are addressed by AOC offices in association with the Office of Compliance. The AOC IG Act allows the IG to exercise judgment when determining whether to conduct an OIG investigation. However, according to OIG investigators, the IG has also encouraged AOC program offices to conduct their own investigations, which can result in these offices addressing wrongdoing in areas outlined in the OIG's policies and procedures as OIG responsibilities regarding fraud, waste, and abuse.
The program offices are not subject to explicit policies requiring independence, objectivity, and due professional care, which are requirements under CIGIE's investigative standards for OIGs. To illustrate, under the prior IG, the OIG conducted investigations of alleged abuses of worker's compensation benefits by employees who filed potentially fraudulent claims. These investigations found that the employees were not always eligible for worker's compensation benefits. However, the AOC Human Capital Management Division is responsible for investigations of worker's compensation issues. The Human Capital Management Division awarded a 1-year contract for $150,000 in August 2015 to a private investigative firm to perform the surveillance work once done by OIG investigators and to determine whether AOC employees have filed false claims in order to collect worker's compensation or disability benefits. Because these investigations are not being performed by the OIG, they are not subject to CIGIE standards requiring independence, objectivity, or due professional care, which apply to investigations performed by the OIG. Human Capital Management Division officials stated that any suspected criminal violations would be referred to the OIG; however, under the IG's current investigative operations, even if such criminal referrals were made, they would not be investigated by the OIG but rather referred to USCP. In another example, the OIG referred allegations of potential ethics violations to the AOC Office of General Counsel for investigation, even though such cases are consistent with the OIG's responsibilities to investigate fraud, waste, and abuse. Because these allegations were referred to the Office of General Counsel for investigation, the resulting investigations are subject to neither CIGIE investigative standards for independence, objectivity, and due professional care nor the AOC IG Act, which requires the OIG to safeguard the identity of complainants. The lack of this statutory protection may hinder complainants from coming forward with information about potential wrongdoing within AOC. To illustrate, we reviewed an investigative case file provided by OIG investigators indicating that the OIG had received an allegation that the complainant's supervisor had potentially violated AOC orders that prohibit using public office for private gain. Such potential abuses are specified by OIG policies and procedures as matters the OIG investigates. However, the IG determined that this allegation was an ethical matter to be investigated by the AOC Office of General Counsel. According to the case file, upon learning that the AOC Office of General Counsel rather than the OIG would be performing the investigation, the complainant withdrew the allegation out of fear of possible repercussions for moving forward with the case. Although the OIG continues to perform investigations, the IG's changes in investigative operations have contributed in part to a significant decline in the number of investigative reports and monetary accomplishments reported by the OIG. To illustrate, under the prior IG, the OIG issued 53 investigative reports in fiscal years 2012 and 2013, compared to 23 reports in fiscal years 2014 and 2015 under the current IG, an almost 60 percent reduction.
Also, as illustrated in table 4, the reported monetary accomplishments from investigations declined from approximately $444,930 in fiscal year 2013 to approximately $7,260 in fiscal year 2015, as the current IG's changes to investigative operations gradually took effect. In fiscal year 2015, the OIG reported monetary accomplishments that accounted for less than 1 percent of the 4-year total reported by both IGs. In addition, the OIG had previously provided management advisories, which reported internal control weaknesses identified by investigations; however, it did not issue any management advisory reports in fiscal year 2015. The AOC OIG has voluntarily agreed to be a part of the CIGIE peer review process, which includes a review of its investigative operations by another OIG. The primary purpose of CIGIE peer reviews is to determine whether OIGs have consistently applied CIGIE's Quality Standards for Investigations. However, the OIG's reliance on investigations by other entities will not necessarily be included in CIGIE's peer review process. For example, the investigative operations of USCP when addressing criminal allegations and the investigations performed by AOC's program offices, such as the Human Capital Management Division, would not be included in a peer review of the AOC OIG's investigations. We discussed the objectives and scope of CIGIE's investigative peer reviews with the outside OIG scheduled to perform the AOC OIG's next review. We concluded that the issues raised in our report could be included within the scope of a peer review if the scope were expanded to include consideration of the OIG's reliance on investigations provided by USCP, AOC's Human Capital Management Division, the Office of General Counsel, and any additional AOC program offices that perform investigations of potential fraud, waste, and abuse. The Congress passed legislation establishing the AOC OIG, in part, to address the cost overruns and time delays of AOC projects, such as those found during the construction of the CVC, and the failure of appropriate oversight responsibilities. In addition, the stated purposes of the AOC OIG as established by the AOC IG Act include conducting and supervising audits and investigations and keeping the Architect and the Congress fully and currently informed, through semiannual reports and otherwise, concerning fraud and other serious problems, abuses, and deficiencies. The Congress also intended the AOC OIG to promote economy and efficiency in AOC programs and to detect and prevent fraud, waste, and abuse. During fiscal years 2012 through 2015, the AOC OIG issued no audit reports of AOC's mega projects, which have an estimated combined cost of over $1 billion. The current IG's emphasis on the continuous review of specific mega projects since fiscal year 2014 contributed to this outcome. During the same 4-year period, the OIG provided only one audit report addressing a single program among all the programs provided by AOC's 10 jurisdictions, and the number of its other audit reports declined, accompanied by a corresponding decline in reported monetary accomplishments. As a result of the OIG's minimal audit report coverage, the Architect and the Congress may not be fully and currently informed about the operations of AOC's jurisdictions, offices, and major contracts. The OIG's efforts during the 4-year period, including its continuous review of mega projects, were not based on audit plans that included an assessment of risk or an assignment of priorities.
This is in part because the OIG has not adopted CIGIE's Quality Standards for Federal Offices of Inspector General, which includes these requirements for planning audits, investigations, and evaluations based on an assessment of agency risk and the priority of efforts. Without policies and procedures for complete OIG plans that assess AOC's risks and establish priorities to help direct its resources, the OIG can provide little assurance that its future audit reports will address AOC's most critical areas. In addition, the current IG rescinded the OIG's law enforcement authority and removed its investigators' responsibility to complete investigations of allegations of potential criminal wrongdoing, resulting in these allegations being referred to USCP for investigation. Furthermore, under the IG's changes, AOC program offices can perform their own investigations of alleged wrongdoing in areas that can include the OIG's responsibilities under its current policies regarding fraud, waste, and abuse. USCP and AOC program offices are not subject to CIGIE's Quality Standards for Investigations. In contrast, the OIG is subject to CIGIE's standards, which require investigations to be objective, independent, and consistent with due professional care, and to AOC IG Act requirements that complainants' identities be protected. The current IG's changes have contributed, at least in part, to fewer investigative reports and monetary accomplishments and may hinder potential complainants from coming forward with allegations of wrongdoing. As a result, the OIG's practices raise questions about whether the OIG is carrying out its work in a way that fulfills the Congress's original intent regarding the oversight of AOC. We are making two recommendations to the AOC OIG regarding its (1) audit planning and (2) investigative operations. To provide increased oversight of AOC and to keep the Architect and the Congress fully and currently informed, we recommend that the AOC OIG revise and implement policies and procedures to provide audit reports that are based on planning that includes an assessment of risk and the assignment of priorities, consistent with requirements in CIGIE's Quality Standards for Federal Offices of Inspector General. To reduce the risk that fraud, waste, and abuse and criminal activities are not detected or fully addressed, we recommend that the AOC OIG (1) work with CIGIE to obtain a peer review from another federal OIG of the AOC OIG's overall investigative operations, including consideration of the OIG's reliance on investigations performed by other entities, and (2) make any needed changes in its operating procedures based on the results of the review to help ensure that investigations of AOC are conducted in accordance with CIGIE standards for investigations and AOC IG Act requirements. We provided a draft of this report to the AOC OIG for review and comment. In written comments reprinted in appendix V, the AOC OIG agreed with the two recommendations and stated that it would implement them. The AOC OIG also provided information on changes made in late 2015 and 2016, after our review period. Although these changes were outside the scope of our work, our report acknowledges the hiring of a civil engineer in fiscal year 2016 and the issuance of two reports on AOC mega projects, also in fiscal year 2016.
The OIG acknowledged our findings on audit planning and the number of audit reports on AOC’s major construction projects and jurisdictions and stated that the OIG has now moved away from an approach that appeared to simply monitor projects and will incorporate a more formal risk assessment and prioritization process into its audit planning. We also received technical comments from the AOC OIG, which were addressed as appropriate. However, the OIG disagreed with a number of the statements and findings in the report related to its investigative operations, as discussed below. The USCP General Counsel also provided technical comments on behalf of USCP, which were addressed as appropriate. In its written response, the AOC OIG stated that a CIGIE investigative program review had been scheduled to be completed in fiscal year 2016, and that without advance notice to the AOC OIG, the GAO review team requested that the designated CIGIE OIG peer review team suspend its activities pending the completion of this GAO audit. As we state in our report, our purpose in contacting the OIG peer reviewer was to determine the extent to which the AOC OIG’s investigative operations would be included in a peer review. The peer review had not yet begun when we met with the OIG peer reviewer, who provided information on CIGIE’s peer review process. We did not request that the peer review be suspended or postponed. We also discussed the ability of the peer review to include the OIG’s overall investigative operations, including consideration of the OIG’s reliance on investigations performed by other entities. In its written comments, the AOC OIG also stated that the GAO draft confuses program responsibilities with investigative responsibilities and that the OIG does not rely on agency program offices to perform OIG investigations. The OIG commented that AOC program offices do not investigate any allegations of fraud, waste, or abuse for the OIG and that the AOC OIG is the only office responsible for performing such investigations pursuant to the IG Act. The OIG added that it is the only AOC office that can guarantee confidentiality to employees and other witnesses. The OIG concluded that GAO’s factual findings in this area are mistaken and not supported. Our report discusses AOC’s policies applicable to the OIG that specify the issues that program offices can and should address, and the issues of fraud, waste, and abuse that are specifically OIG responsibilities. Our report provides examples of AOC program offices that have addressed wrongdoing in areas of OIG responsibilities regarding fraud, waste, or abuse. These examples are based on our review of documentation in the OIG’s case files. Specifically, as noted in our report, we reviewed an investigative case file provided by OIG investigators indicating that the OIG had received an allegation about a supervisor’s potential violation of AOC orders that prohibit using public office for private gain. Such violations of abuse are specified by OIG policies and procedures as matters the OIG investigates. However, the OIG determined that this allegation would be investigated by the AOC Office of General Counsel. In another example, we obtained documentation from the Human Capital Management Division of its contract for $150,000 in August 2015 for a private investigative firm to perform the surveillance work once done by OIG investigators, to determine whether AOC employees have filed false claims in order to collect worker’s compensation or disability benefits. 
Based on these examples, AOC program offices may have investigated matters of fraud, waste, and abuse, which are specifically OIG responsibilities. The OIG agreed with our recommendation to work with CIGIE to obtain a peer review of the AOC OIG's investigative operations, including consideration of the OIG's reliance on investigations performed by other entities. The AOC OIG stated that our draft report rejects the OIG's reliance on the law enforcement authority of USCP and that this is contrary to law. Our report does not assess, and therefore provides no opinion on, the law enforcement authority of USCP or how these investigations were handled or resolved. Contrary to the OIG's assertion, our report specifically addresses the OIG's investigative process, including the referral of matters for investigation, relative to CIGIE's quality standards. In addition, our report does not address whether the policies and procedures of USCP align with CIGIE standards on how referrals are to be handled. Also, as stated in our report, the AOC OIG already had law enforcement authority in place, including the authority to carry firearms, when the current IG entered office in September 2013; he rescinded that authority for the reasons outlined in our report. In addition, the Attorney General had previously determined under the criteria laid out in Section 6 of the IG Act that law enforcement authority was appropriate for the AOC OIG. We did not assess this determination. The AOC OIG also provided several comments about the qualifications of USCP, citing its accreditation by the Commission on Accreditation for Law Enforcement Agencies, Inc. The OIG cited the attributes for accreditation, which include, among others, professionalism, independence, and objectivity. As stated in our report, our focus was on OIG investigations and specifically on how OIG investigations follow quality standards provided by CIGIE, which were established, in part, to help ensure the independence of OIG investigative efforts. Based on technical comments provided by the USCP General Counsel on behalf of USCP officials, we added certain information regarding requirements for USCP's criminal investigations. However, we did not audit USCP or the policies and procedures that apply to its investigations. The AOC OIG has responsibility for ensuring that its investigations comply with applicable standards. It is for these reasons that we have recommended that the AOC OIG work with CIGIE to obtain a peer review from another federal OIG of the AOC OIG's overall investigative operations, including consideration of the OIG's reliance on investigations performed by other entities, and make any needed changes in its operating procedures based on the results of the review to help ensure that investigations of AOC are conducted in accordance with CIGIE standards for investigations and AOC IG Act requirements. The AOC OIG agreed to implement this recommendation. We are sending copies of this report to the Architect, the Architect of the Capitol Inspector General, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Our audit objectives were to (1) identify the Architect of the Capitol's (AOC) jurisdictions, offices, and major contracts subject to AOC Office of Inspector General (OIG) oversight during fiscal years 2012 through 2015; (2) determine the statutory requirements, policies, and budgetary and staffing resources of the AOC OIG during fiscal years 2012 through 2015; and (3) examine the extent to which the AOC OIG developed plans and policies for oversight of AOC's jurisdictions, offices, and major contracts during fiscal years 2012 through 2015, and the extent to which oversight was provided. To identify the AOC jurisdictions, offices, and major contracts subject to AOC OIG oversight during fiscal years 2012 through 2015, we obtained the AOC's performance and accountability reports for each fiscal year. From these reports, we determined AOC's priorities and the focus of AOC's efforts for matters that could be considered for OIG oversight. We also obtained descriptions of AOC's organizational units, including the administrative offices that provide operational support and the 10 jurisdictions that provide AOC's programs for the maintenance, renovation, and construction of the Capitol Hill buildings and grounds. We identified the largest construction and renovation contracts reported by AOC that are classified as mega projects, each with an estimated cost of over $50 million. To determine the statutory requirements of the AOC OIG, we reviewed the AOC Inspector General Act of 2007 (AOC IG Act) and obtained the associated legislative history of that law, including committee reports and accompanying statements regarding the proposed bills leading to enactment. We summarized the views of the congressional committees as stated in reports and bills that led to the creation of the OIG to obtain an understanding of where the OIG could focus its efforts and resources. We also analyzed the statutory requirements from the Inspector General Act of 1978, as amended, that apply to the AOC OIG to determine the full range of requirements applicable to the office. We obtained the OIG's written policies and procedures that applied to the 4-year period we reviewed. We summarized the guidance provided by the OIG's policies and procedures to help determine whether they included requirements for audits and investigations and addressed the requirements of the AOC IG Act. We obtained AOC OIG budget information for fiscal years 2012 through 2015 from the OIG; this information had been verified by AOC budget officials for data reliability. We obtained AOC OIG staffing information for fiscal years 2012 through 2015 from the OIG; AOC budget staff verified the reliability of these data. We determined that the data were reliable for the purposes of this report. To examine the OIG's oversight plans and policies, we obtained the strategic plans and annual audit plans for the 4-year period we reviewed from the OIG and discussed the focus and definition of continuous review efforts with the Inspector General (IG). We also obtained the OIG policies and procedures specific to OIG planning and compared these requirements with the OIG's plans. We also reviewed the OIG's plans for consistency with CIGIE's Quality Standards for Federal Offices of Inspector General regarding planning. In addition, we obtained and summarized the requirements in the OIG's policies and procedures for audits and investigations.
We obtained all OIG reports issued during the 4-year period we reviewed, which included audits, investigations, evaluations, management advisories, and memorandums, and identified the AOC programs addressed by the reports. We compared the subjects addressed by OIG’s audit reports with AOC’s jurisdictions, offices, and major contracts and noted any lack of audit coverage during the 4-year period. We also obtained the monetary accomplishments from OIG audits reported in the OIG’s semiannual reports to determine any trends and changes over the period. We also summarized the content and results of all investigative reports for the 4-year period to determine any changes in the reported results and any trends and changes in reported monetary accomplishments. We interviewed all OIG investigative staff and the IG to obtain an understanding of changes made to the OIG’s investigative operations. We also interviewed an officer in the U.S. Capitol Police (USCP) Investigations Division identified by the AOC OIG to obtain an understanding of USCP’s mission and investigative procedures related to allegations of criminal violations referred to it by the AOC OIG. In addition, we obtained the assistance of AOC OIG investigators who provided examples of OIG investigations for our review. We determined the reliability of information in the OIG’s semiannual reports and other reported information by comparing it with the source information in individual reports issued by the OIG. The data we obtained were appropriate and reliable for meeting the report’s objectives. We conducted this performance audit from December 2014 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Architect of the Capitol (AOC) Office of Inspector General (OIG) was established, in part, to conduct and supervise audits and investigations relating to AOC. This includes AOC’s jurisdictions, which are responsible for AOC’s programs and are subject to OIG oversight (see table 5). The Architect of the Capitol (AOC) Office of Inspector General (OIG) was established, in part, to conduct and supervise audits and investigations relating to AOC. This includes AOC’s Capitol Construction and Operations offices, which provide program support and are subject to OIG oversight (see table 6). The Architect of the Capitol (AOC) Office of Inspector General (OIG) was established, in part, to conduct and supervise audits and investigations relating to AOC. This includes AOC’s mega projects, which are each estimated to cost over $50 million and are subject to OIG oversight (see table 7). In addition to the contact named above, Jackson Hufnagle (Assistant Director), Lisa Boren, Jason Kirwan, Lisa Motley, Taya Tasse, and Kenneth Thiry made key contributions to this report. | The AOC OIG was established by statute in 2007, in part because of congressional concerns about time delays and cost overruns during construction of the Capitol Visitor Center. GAO was asked to assess the AOC OIG's oversight of AOC. 
This report describes AOC areas subject to OIG oversight and examines the extent to which the OIG developed plans and policies for AOC oversight for fiscal years 2012 through 2015 and the extent to which oversight was provided. GAO reviewed AOC's annual performance and accountability reports, the OIG's statutory requirements, the OIG's policies and procedures, and applicable CIGIE standards. GAO also interviewed AOC OIG officials, analyzed the OIG's plans and reports for the 4-year period, and compared these efforts with the AOC areas subject to oversight. During fiscal years 2012 through 2015—the 4-year period GAO reviewed—the Architect of the Capitol (AOC) Office of Inspector General (OIG) had responsibilities for independent audits and investigations of AOC's 10 jurisdictions with specific program responsibilities for the maintenance, operations, and preservation of the buildings and grounds across Capitol Hill; Capitol Construction and Operations with central support offices; and construction and restoration projects, including its four largest ongoing “mega projects,” with an estimated combined cost of over $1 billion. The AOC OIG's audit planning during this period did not include either risk assessments or assigned priorities for conducting audits consistent with standards of the Council of the Inspectors General on Integrity and Efficiency (CIGIE). In addition, the OIG did not adopt these CIGIE standards in its policies and procedures. Instead, the current IG emphasized “continuous review” of mega projects, which he defined as an effort to alert AOC and the Congress of contract management issues as they occurred. This approach and the prior IG's efforts did not result in any audit reports of AOC's mega projects during fiscal years 2012 through 2015. The OIG also reported a decline in total audit reports and monetary accomplishments of potential dollar savings during fiscal years 2014 and 2015 (see table). Further, the OIG provided only one audit report of an AOC jurisdiction program during the 4-year period. Because of incomplete plans, a limited number of audit reports, and the lack of audit reports of AOC's mega projects, AOC and the Congress did not have the full benefit of OIG findings and recommendations and were not kept fully and currently informed of possible AOC problems and deficiencies during the 4-year period. In fiscal year 2014, the IG rescinded the OIG's law enforcement authority and removed the OIG investigators' responsibility to complete criminal investigations. Instead, the OIG's investigators have responsibility for administrative investigations and rely primarily on the U.S. Capitol Police (USCP) to perform criminal investigations, and on occasion other AOC program offices perform their own investigations. USCP and AOC program offices are not subject to CIGIE standards. The OIG is required to follow CIGIE standards for investigations. These OIG changes contributed in part to a decline in investigative reports and monetary accomplishments. The OIG has volunteered to receive a peer review of its investigations that could be expanded to include consideration of investigations by these other entities. 
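To illustrate the kind of risk-based planning that CIGIE's standards contemplate, the sketch below ranks a hypothetical audit universe by a weighted risk score. The entities, risk factors, and weights are illustrative assumptions, not AOC OIG data.

```python
# Hypothetical risk scoring for an audit universe; names and weights are illustrative only.
audit_universe = {
    "Jurisdiction A": {"dollars": 9, "complexity": 7, "time_since_last_audit": 8},
    "Jurisdiction B": {"dollars": 4, "complexity": 5, "time_since_last_audit": 3},
    "Mega project X": {"dollars": 10, "complexity": 9, "time_since_last_audit": 10},
}
weights = {"dollars": 0.5, "complexity": 0.3, "time_since_last_audit": 0.2}

def risk_score(factors):
    # Weighted sum of factor ratings (each rated 1 to 10 by the planning team).
    return sum(weights[name] * rating for name, rating in factors.items())

# Rank candidates from highest to lowest assessed risk to set audit priorities.
ranked = sorted(audit_universe.items(), key=lambda item: risk_score(item[1]), reverse=True)
for entity, factors in ranked:
    print(f"{entity}: {risk_score(factors):.1f}")
```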
GAO is making two recommendations to the AOC OIG to (1) revise and implement policies and procedures to provide audit reports based on planning that includes risk assessment and assignment of priorities consistent with CIGIE standards and (2) obtain a peer review from another federal OIG of overall investigative operations, including consideration of the OIG's reliance on investigations performed by other entities, and to make any needed changes based on the results of such review. In comments on a draft of this report, the AOC OIG agreed with the two recommendations but raised concerns with some of GAO's findings. GAO continues to believe that its findings are valid, as discussed in the report. |
In our audit of the fiscal year 2004 financial statements for SEC, we found that (1) the financial statements as of and for the fiscal year ended September 30, 2004, including the accompanying notes, are presented fairly, in all material respects, in conformity with U.S. generally accepted accounting principles; (2) SEC did not have effective internal control over financial reporting (including safeguarding of assets), but had effective control over compliance with laws and regulations that could have a material effect on the financial statements as of September 30, 2004; and (3) there was no reportable noncompliance with the laws and regulations we tested. We issued an unqualified, or clean, opinion on the SEC’s financial statements. This means that the financial statements and accompanying notes present fairly, in all material respects, SEC’s financial position as of September 30, 2004, as well as certain other financial information that the statements must provide: net cost, changes in net position, budgetary resources, financing, and custodial activities for the year then ended. We also found that the statements conform to U.S. generally accepted accounting principles. In order to reach our conclusions about the financial statements, we (1) tested evidence supporting the amounts and disclosures in the financial statements, (2) assessed the accounting principles used and significant estimates made by management, and (3) evaluated the presentation of the financial statements. We found three material weaknesses in internal control and thus issued an adverse opinion on internal control—stating that SEC management did not maintain effective internal control over financial reporting and the safeguarding of assets as of September 30, 2004. Internal control over financial reporting consists of an entity’s policies and procedures that are designed and operated to provide reasonable assurance about the reliability of that entity’s financial reporting and its process for preparing and fairly presenting financial statements in accordance with generally accepted accounting principles. It includes policies and procedures for maintaining accounting records, authorizing receipts and disbursements, and safeguarding assets. Because SEC makes extensive use of computer systems for recording and processing transactions, SEC’s financial reporting controls also include controls over computer operations and access to data and computing resources. Our opinion on SEC’s internal control means that SEC’s internal control did not reduce to a relatively low level the risk that misstatements material to the financial statements may occur and go undetected by employees in the normal course of their work. This conclusion on SEC’s internal controls did not affect our opinion on SEC’s financial statements, because SEC made the adjustments identified during the audit as necessary for the fair presentation of its financial statements. However, the weaknesses we found could affect other, unaudited information used by SEC for decision making. Our evaluation of internal control covered SEC’s financial reporting controls, which also cover certain operational activities that result in SEC’s financial transactions, such as activities pertaining to stock exchange transaction fees, public filing fees, maintaining disgorgements and penalties receivable, payroll-related transactions, and others.
We also tested SEC’s compliance with selected provisions of laws and regulations that have a direct and material impact on the financial statements. For example, we tested for compliance with sections of the Securities Exchange Act of 1934, as amended, that require SEC to collect fees from the national securities exchanges and the National Association of Securities Dealers based on volume of stock transactions, and sections of the Securities Act of 1933, as amended, that require SEC to collect fees from registrants for public filings. Our tests found no reportable instances of noncompliance. We also found that SEC maintained, in all material respects, effective internal control over compliance. I would now like to discuss in detail the three material internal control weaknesses we found during our audit. We found that SEC did not have formalized processes or documentation for the procedures, systems, analysis of accounts, and personnel involved in developing key balances and preparing the financial statements and related disclosures. As I will discuss later, this issue is compounded by SEC’s limitations with its financial management system. Also, SEC did not have formalized quality control or review procedures. As a result, we identified errors in the beginning asset and liability balances and in the September 30, 2004, draft financial statements prepared by SEC management that, had they not been corrected, would have resulted in materially misleading operating results for fiscal year 2004. SEC’s lack of formalized processes, documented procedures, and quality assurance checks significantly delayed the reporting of fiscal year 2004 financial results, consumed significant staff resources, caused audit inefficiencies, and resulted in higher financial statement preparation and audit costs. I would like to highlight the following items we found: SEC did not have documentation providing an explanation or a crosswalk between the financial statements and the source systems, general ledger accounts, account queries, and account analyses. SEC did not maintain a subsidiary ledger for certain activities, such as customer deposit amounts pertaining to filing fees. Accounting staff had difficulty retrieving support for certain account balances, such as undelivered-order amounts, and for certain property and equipment leases. Reconciliations of detail and summary account balances were not prepared for certain financial statement line items, such as the customer deposit liability relating to filing fees and the associated earned filing fee revenue; the accounts receivable related to exchange fees and the related amount of earned exchange fee revenue; and the budgetary accounts related to undelivered and delivered orders, thus requiring SEC staff to create an audit trail after the fact. There also was no consistent evidence of supervisory review of journal entries, including closing and adjusting journal entries made in connection with preparing quarterly and year-end financial statements. Comprehensive accounting policies and procedures were still in draft or had not yet been developed for several major areas related to the financial statements, including disgorgements and penalties, filing fees, exchange fees, and fixed asset capitalization.
GAO’s Standards for Internal Control in the Federal Government requires that controls over the financial statement preparation process be designed to provide reasonable assurance regarding the reliability of the balances and disclosures reported in the financial statements and related notes in conformity with generally accepted accounting principles, including the maintenance of detailed support that accurately and fairly reflect the transactions making up the balances in the financial statements and disclosures. In addition, an effective financial management system includes policies and procedures related to the processing of accounting entries. SEC’s difficulties in the area of financial statement preparation are exacerbated because SEC’s financial management system is not set up to generate the user reports needed to perform analyses of accounts and activity on a real-time basis leading to SEC’s staff-intensive and time- consuming efforts to prepare financial statements. Because SEC does not maintain standard schedules for producing certain basic reports of account detail for analysis, users have to request reports generated on an ad hoc basis by a software application whose operations are known only to some SEC staff. Also, as I will discuss in more detail later, not all of SEC’s systems used for tracking and recording financial data are integrated with the accounting system. Federal agencies preparing financial statements are required to develop a financial management system to prepare a complete set of statements on a timely basis in accordance with generally accepted accounting principles. The financial statements should be the product of an accounting system that is an integral part of an overall financial management system with structure, internal control, and reliable data. Office of Management and Budget Circular No. A-127, Financial Management Systems, requires that each agency establish and maintain a single integrated financial management system—basically a unified set of financial systems electronically linked for agencywide support. Integration means that the user is able to obtain needed information efficiently and effectively from any level of use or access point. (This does not necessarily mean having only one software application covering all financial management system needs or storing all information in the same database.) Interfaces between systems are acceptable as long as the information needed to enable reconciliation between the systems is accessible to managers. Interface linkages should be electronic unless the number of transactions is so small that it is not cost beneficial to automate the interface. Reconciliations between systems, where interface linkages are appropriate, should be maintained to ensure data accuracy. To support its financial management functions, SEC relies on several different systems to process and track financial transactions that include filing and exchange fees, disgorgements and penalties, property and equipment, administrative items pertaining to payroll and travel, and others. Not all of these systems are integrated with the accounting system. For example, the case-tracking system and the spreadsheet application used to account for significant disgorgement and penalty transactions and the system used to account for property and equipment are not integrated with the accounting system. 
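For interfaced rather than integrated systems, Circular A-127 expects routine reconciliations to ensure data accuracy. The sketch below shows one simple form such a reconciliation could take; the account names and balances are hypothetical and do not represent SEC data.

```python
# Hypothetical reconciliation of feeder (subsidiary) system totals to general ledger control accounts.
subsidiary_totals = {"disgorgements_receivable": 1_250_000.00, "property_equipment": 480_000.00}
general_ledger = {"disgorgements_receivable": 1_250_000.00, "property_equipment": 475_500.00}

for account, feeder_total in subsidiary_totals.items():
    gl_balance = general_ledger.get(account, 0.0)
    difference = round(feeder_total - gl_balance, 2)
    status = "in balance" if difference == 0 else f"DIFFERENCE of {difference:,.2f} to research"
    print(f"{account}: feeder {feeder_total:,.2f} vs GL {gl_balance:,.2f} -> {status}")
```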
Without a fully integrated financial management system, SEC decision makers run the risk of delays in attaining relevant data or using inaccurate information inadvertently while at the same time dedicating scarce resources toward the basic collection of information. A properly designed and implemented financial statement preparation and reporting process (which encompasses the financial management system) should provide SEC management with reasonable assurance that the balances presented in the financial statements and related disclosures are materially correct and supported by the underlying accounting records. To address the issues related to SEC’s financial statement preparation and reporting processes, we recommended that SEC take the following 13 actions to improve controls over the process. 1. Develop written policies and procedures that provide sufficient guidance for the year-end closing of the general ledger as well as the preparation and analysis of quarterly and annual financial statements. 2. Establish clearly defined roles and responsibilities for the staff involved in financial reporting and the preparation of interim and year-end financial statements. 3. Prepare a crosswalk between the financial statements and the source systems, general ledger accounts, and the various account queries and analyses that make up key balances in the financial statements. 4. Maintain subsidiary records or ledgers for all significant accounts and disclosures so that the amounts presented in the financial statements and footnotes can be supported by the collective transactions making up the balances. 5. Perform monthly or periodic reconciliations of subsidiary records and summary account balances. 6. Perform a formal closing of all accounts at an interim date or dates to reduce the level of accounting activity and analysis required at year- end. The formal closing entails procedures to ensure that all transactions are recorded in the proper period through the closing date, and then closing the accounting records so that no new entries can be posted during that period. 7. Distinguish common closing and adjusting entries in a formal listing, which is used in the general ledger closing process and in preparing financial statements. 8. Require supervisory review for all entries posted to the general ledger and financial statements, including closing entries. A supervisor should review revisions to previously approved entries and revised financial statements and footnotes. All entries and review should be documented. 9. Establish milestones for preparing and reviewing the financial statements by setting dates for critical phases such as closing the general ledger; preparing financial statements, footnotes, and the performance and accountability report; and performing specific quality control review procedures. 10. Use established tools (i.e., checklists and implementation guides) available for assistance in compiling and reviewing financial statements. 11. Maintain documentation supporting all information included in the financial statements and footnotes. This documentation should be more self-explanatory than what has been retained in the past. The documentation should be at a level of detail to enable a third party, such as an auditor, to use the documentation for substantiating reported data without extensive explanation or re-creation by the original preparer. 12. 
Take advantage of in-house resources and expertise in establishing financial reporting policies, internal controls, and business practices, as well as in reviewing financial statement and footnote presentation. 13. Develop or acquire an integrated financial management system to provide timely and accurate recording of financial data for financial reporting and management decision making. In response to our audit findings, SEC plans to increase its financial reporting staff this fiscal year, formalize its policies and procedures, and solicit advice from corporate financial reporting experts within SEC. SEC senior management has reviewed and endorsed certain initial policies applied in the first year of financial reporting, and has modified or recommended others for further review. In addition, SEC plans to establish a formal audit committee to provide for regular review by key management officials and advise on policies and controls. SEC is undertaking a multiyear project to replace the existing case-tracking system with a system that is better designed for financial reporting purposes. Now I would like to shift to the second material internal control weakness. As part of its enforcement responsibilities, SEC issues and administers judgments that order disgorgements and civil penalties against violators of federal securities laws. The resulting transactions for fiscal year 2004 involved collections of about $945 million and the recording and reporting of fiduciary and custodial balances on the financial statements. SEC records and tracks information on over 12,000 parties in SEC enforcement cases involving disgorgements and penalties through a case-tracking system. However, the case-tracking system is not designed for financial reporting and is not integrated with SEC’s general ledger accounting system, which accumulates, tracks, and summarizes SEC’s financial transactions. To compensate for limitations in the system, SEC staff compiles quarterly subsidiary ledgers using extensive and time-consuming procedures. After downloading financial information on disgorgements and penalties from the case-tracking system to a spreadsheet containing thousands of cases and defendants and approximately 1 million data elements, SEC staff performs numerous calculations using the data in the spreadsheet to compile the disgorgement and penalty balances as of the end of each quarter. Such a process is inherently inefficient and prone to error. Further, since the data included on the spreadsheet come from the case-tracking system, whose data reliability has been reported as a problem by SEC for the past three years, it is imperative that specific control procedures be put in place to provide reasonable assurance over the completeness and reliability of the data in the case-tracking system. In addition, control procedures are needed to reduce the risk of errors in the spreadsheet and ultimately the reported financial statement information. Finally, when reviewing case files we noted instances in which the supporting documentation in the files contained notations by the case managers indicating that potential activities or transactions related to the case had occurred, but there was not adequate documentation to support an entry in the case-tracking system. These instances raised questions about whether SEC’s accounting and financial reporting information related to penalties and disgorgements was potentially incomplete or out-of-date.
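Basic completeness checks over the quarterly download, such as agreeing record counts and control totals back to the case-tracking system before balances are compiled, are one form the control procedures described above could take. The sketch below is illustrative only; the record layout and figures are assumptions, not SEC's actual extract.

```python
from decimal import Decimal

# Hypothetical extract rows from the case-tracking system (in practice, thousands of records).
extract_rows = [
    {"case_id": "C-001", "assessed_amount": "250000.00"},
    {"case_id": "C-002", "assessed_amount": "1300000.00"},
]

# Control totals reported by the source system for the same quarter (hypothetical figures).
expected_record_count = 2
expected_assessed_total = Decimal("1550000.00")

record_count = len(extract_rows)
assessed_total = sum(Decimal(row["assessed_amount"]) for row in extract_rows)

# Any difference would be researched before balances are compiled for the financial statements.
if record_count != expected_record_count or assessed_total != expected_assessed_total:
    print("Extract does not agree with source control totals; investigate before use.")
else:
    print("Record count and control total agree with the case-tracking system.")
```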
As a result of the issues I have described, we concluded that SEC did not have adequate control procedures in place to provide adequate assurance over the reliability of financial information related to this area. Thus, our auditors performed additional testing over SEC’s financial statement balances related to penalties and disgorgements. GAO’s Standards for Internal Control in the Federal Government requires that agencies establish controls to ensure that transactions are recorded in a complete, accurate, and timely manner. Although SEC has a draft policy that covers certain aspects of accounting for disgorgements and penalties, it is not comprehensive. For example, the policy does not define who is responsible for recording disgorgement and penalty data or the documentation that should be maintained to support the amounts recorded. Of even greater importance, the policy does not identify controls that are critical for determining the amounts to be recorded and for reviewing entries for completeness and accuracy, including the specific types of controls needed for the quarterly downloading of data and use of the spreadsheets for arriving at the accounting entries. Nor does the policy address supervisory review necessary to ensure consistent application of the procedures. A lack of comprehensive policies and controls over disgorgement and penalty transactions increases the risk that the transactions will not be completely, accurately, and consistently recorded and reported. In our audit of the estimated net amounts receivable from disgorgements and penalties, we did find errors in the recorded balances for the related gross accounts receivable and allowance for loss. Specifically, we noted errors where SEC had made entries to the accounting system that conflicted with information in the files. We also noted inconsistent treatment in recording judgments, interest amounts, terminated debts, and collection fees imposed by Treasury. We believe that these errors and inconsistencies occurred because of the control weaknesses we found. While, in most cases, these errors and inconsistencies were offsetting, such errors raise concern about the reliability of the $1.673 billion gross accounts receivable for disgorgements and penalties and the related allowance amounts of $1.394 billion reported in footnote 3 to SEC’s financial statements. To address internal control weaknesses over disgorgements and penalties, we recommended that SEC 1. implement a system that is integrated with the accounting system or that provides the necessary input to the accounting system to facilitate timely, accurate, and efficient recording and reporting of disgorgement and penalty activity; 2. review the disgorgement and penalty judgments and subsequent activities documented in each case file by defendant to determine whether individual amounts recorded in the case-tracking system are accurate and reliable; 3. implement controls so that the ongoing activity involving disgorgements and penalties is properly, accurately, and timely recorded in the case-tracking system and the accounting system; 4. strengthen coordination, communication, and data flow among staff of SEC’s Division of Enforcement and Office of Financial Management who share responsibility for recording and maintaining disgorgement and penalty data; and 5. develop and implement written policies covering the procedures, documentation, systems, and responsible personnel involved in recording and reporting disgorgement and penalty financial information. 
The written procedures should also address quality control and managerial review responsibilities and documentation of such a review. SEC agrees with our findings in this area and has begun efforts to strengthen internal controls. For example, SEC plans to complete a comprehensive review of files and data and to review and strengthen policies and procedures for recording and updating amounts receivable for disgorgements and penalties. SEC anticipates that consistent application of strengthened internal controls and potentially some limited redesign of the existing management information system will be adequate to resolve the material weaknesses in fiscal year 2006. However, SEC acknowledges that a replacement of the current case-tracking system and a more thorough reexamination of the relevant business process would provide more effective assurance. Accordingly, in fiscal year 2006, SEC plans to complete a requirements analysis as the first phase of the multiyear project to replace the case-tracking system. Now I would like to shift to the discussion of the material internal control weakness pertaining to information security. Information system controls are essential for any organization that depends on computer systems and networks to carry out its mission or business and maintain key records and accountability information. Without proper safeguards, organizations run the risk that intruders may obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. SEC—which relies extensively on computer systems to support its operations—needs a comprehensive program of general controls to monitor and manage information security risks. Our review of SEC’s information system general controls found that the commission did not effectively implement controls to protect the integrity, confidentiality, and availability of its financial and sensitive information. In March 2005, we reported weaknesses in electronic access controls, including controls designed to prevent, limit, and detect access to SEC’s critical financial and sensitive systems. We found these weaknesses in user accounts and passwords, access rights and permissions, network security, and the audit and monitoring of security-related events. These weaknesses were heightened because SEC had not fully established a comprehensive monitoring program. We identified the following electronic access control weaknesses: SEC operating personnel did not consistently set password parameters—such as a minimum of six characters including both numbers and letters—to ensure a level of difficulty for an intruder trying to guess a password, and users sometimes created easy-to-guess passwords. All 4,100 network users were inadvertently granted access that would allow them to circumvent the audit controls in the commission’s main financial systems. Key network devices were not configured to prevent unauthorized individuals from gaining access to detailed network system policy settings and lists of users or user groups. SEC did not have a comprehensive monitoring program for routine review, audit, or monitoring of system user-access activities. For example, audit logging, which is typically used to track certain types of activity on a system, was not consistently implemented on network services, and there was no real-time capability to target unusual or suspicious network events for review. In addition, SEC had not fully implemented a network intrusion-detection system.
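Routine review of logged security events to flag unusual activity is one element of the monitoring program discussed above. A minimal sketch of such a review follows; the log entries and thresholds are assumptions for illustration, not SEC's configuration.

```python
from collections import Counter

# Hypothetical log entries as (user, event) pairs; in practice these would be parsed from audit logs.
events = [
    ("user1", "failed_login"), ("user1", "failed_login"), ("user1", "failed_login"),
    ("user1", "failed_login"), ("user2", "failed_login"), ("admin7", "after_hours_access"),
]

FAILED_LOGIN_THRESHOLD = 3  # illustrative threshold for follow-up

# Flag accounts with repeated failed logins during the review period.
failed_logins = Counter(user for user, event in events if event == "failed_login")
for user, count in failed_logins.items():
    if count >= FAILED_LOGIN_THRESHOLD:
        print(f"Review: {user} had {count} failed logins this period.")

# After-hours access by privileged accounts is another event type a reviewer might target.
for user, event in events:
    if event == "after_hours_access":
        print(f"Review: after-hours access recorded for {user}.")
```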
The commission did, however, have several initiatives under way to monitor user access activity. We also identified weaknesses in other information system controls— including physical security, segregation of computer functions, application change controls, and service continuity. For instance: At the time of our review, 300 employees and contractors had physical access to SEC’s data center. Persons with access included an undetermined number of application programmers, budget analysts, administrative staff, and customer support staff. Typically, persons serving these functions do not need access to the data center for their work. SEC had not sufficiently separated incompatible system administration and security administration functions on its key financial applications. Although a change control board at SEC was responsible for authorizing all application changes, none of the software modifications reviewed had documentation to show that such authorizations had been obtained. SEC had not implemented a service-continuity plan to ensure that the system and its major applications could continue to function after a major disruption, such as a loss of electricity. As a result of these weaknesses, sensitive SEC data—including payroll and financial transactions, personnel data, regulatory, and other mission- critical information—were at increased risk of unauthorized disclosure, modification, or loss. A key reason for weaknesses in SEC’s information system general controls is that the commission has not fully developed and implemented a comprehensive agency information security program. The Federal Information Security Management Act (FISMA) requires each agency to develop, document, and implement an agencywide information security program to provide security for the information and systems that support the operations and assets of the agency. Agencies are required to use a risk- based approach to information security management. FISMA also requires an agency’s information security program to include these key elements: periodic assessments of risk and the magnitude of harm that could result from unauthorized access, use, or disruption of information systems; policies and procedures that are based on risk assessments and risk reductions to ensure that information security is addressed throughout the life cycle of each system and that applicable requirements are met; security awareness training to inform all users of information security risks and users’ responsibilities in complying with information security policies and procedures; and periodic tests and evaluations of the effectiveness of information security policies, procedures, and practices related to management, operational, and technical controls of every major system. Although SEC has taken some actions to improve security management— including establishing a central security management group and appointing a senior information security officer to manage the information security program—further efforts are needed. For example, we found that the commission had not clearly defined roles and responsibilities for the central security group it had established. In addition, SEC had not fully (1) assessed its risks, (2) established or implemented security policies, (3) promoted security awareness, or (4) tested and evaluated the effectiveness of its information system controls. SEC and its Office of Inspector General (OIG) have recognized weaknesses in the commission’s information security program. 
Since 2002, SEC has reported information security as a material weakness in its FMFIA reports. In its fiscal year 2004 FISMA report, SEC’s OIG reported that the commission had several weaknesses in information security and was not substantially in compliance with information security requirements contained in FISMA. Without proper safeguards for its information systems, SEC is at risk from malicious intruders entering inadequately protected systems. It is at risk that intruders will use this access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. We believe the primary cause of these weaknesses has been the lack of a fully developed and implemented entitywide information security program. In our March 2005 report, we recommended 6 actions to fully develop and implement an effective security program. In addition, we made 52 recommendations to correct specific information security weaknesses related to electronic access control and other information system controls. Due to their sensitivity, these recommendations were included in a separate report designated for “Limited Official Use Only.” A fully developed, documented, and implemented agency information security program would provide the commission with a solid foundation for resolving its information security problems and for ongoing management of its information security risks. We believe that if our recommendations and SEC’s planned actions are carried out effectively, SEC can make considerable progress toward its declared vision as “the standard against which federal agencies are measured” and will be in a stronger position to manage its daily operations and accomplish its mission. This testimony is based on our recent audit of SEC’s fiscal year 2004 financial statements, which was conducted in accordance with U.S. generally accepted government auditing standards. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or the other members of the Subcommittee may have. For further information on this testimony, please contact Jeanette Franzel at (202) 512-9471 or at [email protected]. and Greg Wilshusen at (202) 512-6244 or at [email protected]. Individuals making key contributions to this testimony include Cheryl Clark, Kim McGatlin, Charles Vrabel, Estelle Tsay, Kristi Dorsey, and Maxine Hattery. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Pursuant to the Accountability for Tax Dollars Act of 2002, the Securities and Exchange Commission (SEC) is required to prepare and submit to Congress and the Office of Management and Budget audited financial statements. GAO agreed, under its audit authority, to perform the initial audit of SEC's financial statements. GAO's audit was done to determine whether, in all material respects, (1) SEC's fiscal year 2004 financial statements were reliable, (2) SEC's management maintained effective internal control over financial reporting and compliance with laws and regulations, and (3) SEC's management complied with applicable laws and regulations. 
Established in 1934 to enforce the securities laws and protect investors, the SEC plays an important role in maintaining the integrity of the U.S. securities markets. GAO was asked by the Chairman of the Senate Subcommittee on Federal Financial Management, Government Information, and International Security, Committee on Homeland Security and Governmental Affairs, to present the results of its May 26, 2005, report, Financial Audit: Securities and Exchange Commission's Financial Statements for Fiscal Year 2004 (GAO-05-244). The SEC's first ever financial audit was performed by GAO for fiscal year 2004. In reporting on the results of the audit, GAO issued an unqualified, or clean, opinion on the financial statements of the SEC. This means that SEC's financial statements presented fairly, in all material respects, its financial position as of September 30, 2004, and the results of operations for the year then ended. However, because of material internal control weaknesses in the areas of preparing financial statements and related disclosures, recording and reporting disgorgements and penalties, and information security, GAO issued an adverse opinion on internal controls, concluding that SEC did not maintain effective internal control over financial reporting as of September 30, 2004. However, SEC did maintain, in all material respects, effective internal control over compliance with laws and regulations material in relation to the financial statements as of September 30, 2004. In addition, GAO did not find reportable instances of noncompliance with laws and regulations it tested. It is important to remember that GAO's opinions on SEC's financial statements and internal controls reflect a point in time. SEC prepared its first complete set of financial statements for fiscal year 2004 and made significant progress during the year in building a financial reporting structure for preparing financial statements for audit. However, GAO identified inadequate controls over SEC's financial statement preparation process including a lack of sufficient documented policies and procedures, support, and quality assurance reviews, increasing the risk that SEC management will not have reasonable assurance that the balances presented in the financial statements and related disclosures are supported by SEC's underlying accounting records. In addition, GAO identified inadequate controls over SEC's disgorgements and civil penalties activities, increasing the risk that such activities will not be completely, accurately, and properly recorded and reported for management's use in its decision making. GAO also found that SEC has not effectively implemented information system controls to protect the integrity, confidentiality, and availability of its financial and sensitive data, increasing the risk of unauthorized disclosure, modification, or loss of the data, possibly without detection. The risks created by these information security weaknesses are compounded because the SEC does not have a comprehensive monitoring program to identify unusual or suspicious access activities. SEC agreed with our findings and is currently working to improve controls in all these areas. |
The Gulf Opportunity Zone Act of 2005 includes tax incentives to assist recovery and economic revitalization for individuals and businesses in designated areas in Alabama, Florida, Louisiana, Mississippi, and Texas following Hurricanes Katrina, Rita, and Wilma in 2005. Some of the tax incentives in the act are extensions of existing federal tax incentives (e.g., tax-exempt private activity bonds, new markets tax credits, low-income housing tax credits, etc.). In some cases, the GO Zone Act liberalizes the rules that taxpayers must follow to be eligible to claim the incentives. JCT estimates indicate that these tax incentives will reduce federal revenue by about $9 billion over the 10-year period from 2006 to 2015. The act defined the GO Zones as those counties and parishes in the presidentially-declared disaster areas that warranted additional, long-term federal assistance. Rather than making the incentives available statewide in the five states affected by the 2005 hurricanes, the act defined a separate GO Zone for the three major 2005 hurricanes—Katrina, Rita, and Wilma. Both Hurricane Katrina and Hurricane Rita affected certain areas in Louisiana, so those GO Zones overlap in Louisiana. Figure 1 illustrates the GO Zone areas in each of the five affected states. Tax provisions to assist recovery vary by GO Zone, with the most assistance provided to areas damaged by Hurricane Katrina, particularly in Louisiana and Mississippi. The GO Zone Act was broadly similar to legislation that created the New York Liberty Zone following the September 11, 2001, terrorist attacks in New York City. The Job Creation and Worker Assistance Act of 2002 contained various tax incentives designed for response and recovery for the area of lower Manhattan designated as the New York Liberty Zone. Similar to those in the Liberty Zone, individuals and businesses may claim the GO Zone tax incentives as long as they meet specified federal requirements. In these cases, the tax incentives essentially function as open-ended entitlement programs that any eligible taxpayer may claim. For GO Zone tax incentives such as claiming accelerated depreciation on certain assets or partial expensing for certain clean-up costs, the full cost to the federal government depends on how many taxpayers claim the provision on their tax returns. For certain tax incentives in the GO Zone Act of 2005, state governments in states with GO Zones were authorized to establish processes and select which qualified projects are to receive tax incentive allocations up to each state’s allocation authority limit. As mandated by law, this report discusses those tax incentives in the GO Zone Act of 2005 that state and local governments play a role in allocating and overseeing. Because the states are responsible for selecting which entities receive the tax benefit up to the allocation authority provided by law, these GO Zone tax incentives appear similar to traditional grant programs where governments provide funds directly to grant recipients. For example, Gulf Coast states have processes for allocating funds through FEMA’s Public Assistance program and the CDBG program. 
In contrast with grant programs, where funds come directly from the government, investors in projects that have received GO Zone incentives are allowed to either claim a credit against their tax liability, as in the case of the LIHTC program, or exclude certain portions of the return on their investment, such as interest earned, in GO Zone projects from their taxable income, as is the case for tax-exempt bonds. Unlike traditional grant programs, the projects receiving these tax incentives are financed with bonds that must be repaid or equity provided by private investors, meaning developers, investors, insurers, and other participants have an additional incentive to ensure that projects are financially viable and will remain compliant with applicable laws. Since the 2005 hurricanes, we have been conducting ongoing work and have completed a number of studies evaluating disaster relief and recovery efforts along the Gulf Coast, including how states with GO Zones have made use of federal resources. Appendix IV lists related GAO products. Under the GO Zone Act of 2005, state and local governments are responsible for allocating and overseeing the use of four GO Zone tax incentives. According to our analysis of JCT estimates, these incentives accounted for about 40 percent, or $3.5 billion, of the projected cost to the federal government from the GO Zone Act over the 10-year period from 2006 to 2015. States with GO Zones were awarded authority to allocate $14.9 billion in tax-exempt private activity bonds, resulting in an estimated $1.6 billion reduction in federal revenue from 2006 to 2015; $330 million in low-income housing tax credits (LIHTC), resulting in an estimated $1.1 billion reduction in federal revenue from 2006 to 2015; $350 million in tax credit bonds, resulting in an estimated $57 million reduction in federal revenue from 2006 to 2015; and $7.9 billion in additional advance refunding bonds, resulting in an estimated $741 million reduction in federal revenue from 2006 to 2015. Alabama, Louisiana, and Mississippi received authority for all three bond provisions to use by the end of 2010 and increased LIHTC allocations each year for 2006 through 2008. Florida and Texas did not receive allocation authority for the bond provisions, but each received a onetime increase in LIHTC authority of $3.5 million for 2006. Table 1 shows the amount of allocation authority that each state received by provision. For GO Zone tax-exempt private activity bonds, the allocation amount that can be used over the 5-year period from 2006 to 2010 is significantly greater than the amount of qualified private activity bonds that may be allocated subject to annual state volume caps. For perspective, in 2008, state volume caps for tax-exempt private activity bonds amounted to about $393 million, $365 million, and $262 million for Alabama, Louisiana, and Mississippi, respectively. Similarly, GO Zone LIHTC allocation authority in Alabama, Louisiana, and Mississippi exceeds regular LIHTC authority for 2006 to 2008. Alabama, Louisiana, and Mississippi received a total of about $27 million, $26 million, and $17 million, respectively, from their combined regular state LIHTC allocations (credit ceilings) for 2006 to 2008. The GO Zone Act of 2005 grants Alabama, Louisiana, and Mississippi the authority to award tax-exempt bond allocations for qualified private activities. Tax-exempt private activity bonds allow private business owners and corporations to borrow capital at interest rates lower than would otherwise be available.
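The interest-rate advantage of tax-exempt financing follows from the investor's after-tax return. The arithmetic below is a simplified illustration using assumed rates; actual pricing depends on market conditions and each investor's tax situation.

```python
# Simplified illustration: an investor comparing a taxable bond with a tax-exempt bond.
taxable_rate = 0.06        # assumed yield on a comparable taxable bond
marginal_tax_rate = 0.35   # assumed investor marginal federal tax rate

# The tax-exempt yield at which the investor is indifferent to the taxable bond.
breakeven_tax_exempt_rate = taxable_rate * (1 - marginal_tax_rate)
print(f"Break-even tax-exempt yield: {breakeven_tax_exempt_rate:.2%}")  # 3.90% under these assumptions

# A borrower financed with tax-exempt bonds can therefore pay roughly this lower rate.
annual_savings_per_million = (taxable_rate - breakeven_tax_exempt_rate) * 1_000_000
print(f"Approximate annual interest savings per $1 million borrowed: ${annual_savings_per_million:,.0f}")
```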
Governmental entities issue the tax- exempt private activity bonds, generally serving as conduits to provide the bond proceeds to borrowers, including business owners and corporations that have been awarded allocation authority from the state. The borrowers’ payments on their loans are then used to repay principal and interest on the bonds, and investors who purchased the bonds can generally exclude interest earned on the bonds from their federal income tax. All states receive an annual tax-exempt private activity bond allocation that is subject to a volume cap and is limited to certain types of facilities. GO Zone bonds can be issued in addition to annual state volume caps in eligible states and can be used for a broader range of facilities than tax- exempt private activity bonds subject to annual state volume caps. According to the GO Zone Act of 2005, eligible states have the authority to allocate an amount of GO Zone private activity bonds equal to $2,500 per person in the GO Zone, based on prehurricane population counts. States with GO Zone bond allocation authority must issue the bonds by December 31, 2010. For the three eligible states, aggregate GO Zone allocation authority over the period is between 6 and 22 times greater than annual qualified private activity bond authority subject to state volume caps. In contrast with qualified private activity bonds subject to annual state volume caps, GO Zone tax-exempt private activity bonds are not subject to the Alternative Minimum Tax. While GO Zone bonds can be used for facilities that generally cannot be built with tax-exempt private activity bond authority subject to annual state volume caps, such as hotels and retail facilities, GO Zone private activity bonds are subject to certain tax-exempt private activity bond prohibitions against financing golf courses, massage parlors, gambling, and liquor stores, among other prohibited uses. In addition, 95 percent of the bond proceeds must be used for qualified projects in the GO Zone. The liberalized use for GO Zone tax-exempt private activity bonds in comparison to qualified private activity bonds subject to annual state volume caps is broadly similar to private activity bond authority awarded in the New York Liberty Zone. Whereas the Liberty Zone provision divided the authority between the state and local levels and set specific limits on retail and residential rental property, the GO Zone provision did not set aside any portion to be allocated for specific localities or set dollar caps on certain categories of use. The GO Zone Act of 2005 provided states with broad discretion in allocating GO Zone tax-exempt private activity bonds. The GO Zone Act of 2005 temporarily increased the amount of allocated tax credits for the five states along the Gulf Coast by a total of about $330 million. The Tax Reform Act of 1986 authorized the LIHTC program to provide an incentive for the acquisition and development or the rehabilitation of rental housing affordable to low-income households. As required under the Internal Revenue Code, state housing finance agencies (HFA) must award credits to developers of qualified projects according to their qualified allocation plans (QAP), which establish the agencies’ funding priorities and eligibility criteria. Developers either use the credits or sell them to investors to raise capital (i.e., equity) for real estate projects. 
In general, investors can claim credits on the qualified basis of the property—that is, total development cost (excluding land and other certain costs) of the low-income units. The investor receives approximately 9 percent of the qualified basis in tax credits annually for 10 years. The equity raised by the tax credits reduces the need for debt financing and as a result, these properties can offer lower, more affordable rents. GO Zone LIHTCs were provided to the five eligible states in addition to their regular annual allocations. Under the regular LIHTC program in 2006, each of the five states was allocated tax credits equal to $1.90 per capita. The allocation claimed by Louisiana, Alabama, and Mississippi for 2006, 2007, and 2008 equaled $18.00 times each state’s prehurricane population in the GO Zone. As a result of the increased per capita allocation, the 2006 through 2008 GO Zone LIHTC authority was about 75 percent, 567 percent, and 523 percent greater than the regular LIHTC authority that Alabama, Louisiana, and Mississippi received in the same period, respectively. While projects that receive LIHTCs must normally be placed in service within 2 years of the credit allocation, GO Zone LIHTC-funded units (as well as regular LIHTC units funded during 2006 through 2008 in a GO Zone) must be placed in service before January 1, 2011. Unlike regular LIHTC allocations, GO Zone credits generally cannot be carried forward. The GO Zone LIHTC provision also expanded the amount of credits available to investors. To promote investment in areas where the need is greatest for affordable housing, investors can receive additional credits if the tax credit properties are located in distressed areas known as “difficult development areas” (DDA). Investors in such “enhanced” tax credit properties can claim credits for 91 percent (instead of the normal 70 percent) of the project’s qualified basis. The GO Zones are treated as DDAs under the 2005 act, and LIHTC properties located in the GO Zone are thus eligible for those enhanced LIHTCs. The GO Zone Act of 2005 created a new category of tax credit bonds—GO Zone tax credit bonds—which Alabama, Louisiana, and Mississippi were authorized to issue to provide debt relief to state and local governments. The GO Zone tax credit bonds must be general obligations of the state and the bond proceeds must be used to make payments on existing state and local debt. Tax credit bonds differ from tax exempt bonds in that rather than receiving tax-exempt interest payments, bondholders are entitled to a federal tax credit equal to a certain percentage of their investment. Because the state does not pay interest on the GO Zone tax credit bonds, they essentially serve as an interest-free loan while they are outstanding. Under the act, states are required to match the GO Zone tax credit bonds with funds from other sources, such as issuing additional general obligation bonds or money from a state’s general fund. The three eligible states were required to issue tax credit bonds by December 31, 2006, and must retire the bonds within 2 years of issuing them, meaning that they provide only temporary debt relief to the state and local governments that they benefit. The GO Zone Act of 2005 permits one additional advance refunding for certain existing tax-exempt bonds in Alabama, Louisiana, and Mississippi. State and local governments typically refund, or refinance, bonds to reduce interest costs and ease restrictions on the original bond contract. 
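The LIHTC credit stream described above can be illustrated with a simple calculation. The project figures below are hypothetical; the 9 percent annual rate and the 91 percent and 70 percent basis shares are the approximate figures cited in this discussion.

```python
# Hypothetical LIHTC project used to illustrate the credit computation described above.
qualified_basis = 10_000_000        # total development cost of the low-income units (assumed)
applicable_percentage = 0.09        # approximately 9 percent of qualified basis per year
credit_years = 10

annual_credit = qualified_basis * applicable_percentage
total_credits = annual_credit * credit_years
print(f"Annual credit: ${annual_credit:,.0f}; total over {credit_years} years: ${total_credits:,.0f}")

# Enhanced basis for projects in difficult development areas, such as the GO Zone.
regular_basis_share = 0.70
enhanced_basis_share = 0.91
print(f"Enhanced credits are {enhanced_basis_share / regular_basis_share - 1:.0%} larger "
      f"than credits computed on the regular {regular_basis_share:.0%} basis share.")
```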
Advance refunding is limited, however: state and local governments can generally advance refund a bond only one time and, in certain cases, including most tax-exempt private activity bonds, bonds cannot be advance refunded at all. The GO Zone additional advance refunding provision provides Alabama, Louisiana, and Mississippi the ability to reduce interest costs on existing debt that has already been advance refunded one time or did not originally meet the requirements to allow for advance refunding. The provision can be used by any issuer in the state and is not limited to counties and parishes located in the GO Zone. All additional advance refunding bonds must be issued by December 31, 2010. The additional advance refunding provision can only be used for bonds where one bond is currently outstanding, whether that bond is the original bond or the first advance refunded bond. To clarify, bonds being refunded are often subject to a call date, or a date at which the bond issuer may recall the original bonds and refund or retire them. For advance refundings, a second set of bonds is often established in advance of the call date, with the proceeds from these bonds dedicated to retiring the original bonds when the call date is reached. Therefore, after the second set of bonds is issued and before the call date on the original bonds is reached, two sets of bonds are outstanding. GO Zone additional advance refundings cannot be used in these instances. As with GO Zone tax-exempt private activity bonds, GO Zone additional advance refunding authority is similar to the additional advance refunding authority awarded in the New York Liberty Zone following the September 11, 2001, terrorist attacks. Liberty Zone advance refunding was limited to certain debts of the state or New York City, including qualified hospital 501(c)(3) bonds. With some variations in their processes, Alabama, Louisiana, and Mississippi generally allocated the GO Zone bond provisions on a first-come, first-served basis, and all five eligible states used existing processes to award GO Zone LIHTCs. For the most part, all three eligible states allocated GO Zone tax-exempt private activity bond authority without consistently targeting the allocations to assist recovery in the most damaged areas, although Louisiana recently set aside some of its remaining allocation authority for the most damaged parishes. Officials in Louisiana and Mississippi acknowledged that the first-come, first-served approach made it difficult for applicants in some of the most damaged areas to make use of the bond provision at the beginning of the program. While all eligible states followed existing, prehurricane procedures to award LIHTCs and will use existing procedures to monitor for federal compliance, the states exhibited some differences in how they targeted the GO Zone LIHTC authority to the most damaged areas. Louisiana and Mississippi adopted different approaches for issuing tax credit bonds and identifying matching fund sources, and Alabama chose not to issue tax credit bonds due to time constraints associated with issuing the bonds. Similar to the case of GO Zone private activity bonds, when allocating additional advance refunding authority the three eligible states generally used a first-come, first-served approach. To expedite bond processing, the three eligible states generally allocated the GO Zone private activity bond authority and approved projects on a first-come, first-served basis without consistently targeting the allocations to assist recovery in the most damaged areas.
Alabama and Mississippi have similar processes for allocating GO Zone private activity bonds, whereas Louisiana’s bond allocation process has changed over time. Some state officials we interviewed said they were uncertain whether the GO Zone bonds should be used quickly anywhere within the qualified areas to provide an economic stimulus to the GO Zone, or whether the bonds should be more focused on recovery efforts in the damaged areas. State officials we interviewed acknowledged that the first-come, first-served approach led to awarding bond allocations to projects in less damaged areas in the GO Zone because businesses in these areas were ready to apply for and issue bonds before businesses in more damaged areas could make use of the incentive. Counties and parishes in the most damaged coastal areas of Louisiana and Mississippi faced challenges dealing with the immediate aftermath of the hurricanes, including debris removal and helping those who were displaced. While potential GO Zone private activity bond applicants in some of the most damaged areas were not positioned to apply for allocations when they first became available, applicants in those areas have been able to use the GO Zone bonds more recently. As with tax-exempt private activity bonds that are subject to state volume caps, state officials said they rely on bond counsel to ensure that bond applications meet program requirements before issuance. Issuers must also file information return Form 8038 with the IRS. According to IRS officials, post-issuance compliance costs often do not surface until as long as 5 years after bonds have been issued. State officials and others we interviewed did not identify any alleged cases of fraud, waste, or abuse with GO Zone private activity bonds. However, the processes states used to allocate GO Zone bonds introduced risks that could lead to inefficient allocation of the federal assistance. For example, officials interviewed indicated that some projects that may have been viable without tax-exempt financing received some tax-exempt bond authority. Alabama and Mississippi applied similar first-come, first-served approaches to allocate GO Zone tax-exempt private activity bonds. In accordance with the GO Zone Act of 2005, the governors of both states had the authority to award GO Zone private activity bond allocations. Both states outlined general criteria for making awards, but also acknowledged that they did not specifically target certain areas within the zone to receive set-aside amounts of GO Zone private activity bond allocations. The Governor of Alabama initially identified four criteria for prioritizing GO Zone bond authority: (1) replacing property damaged or destroyed by Hurricane Katrina, (2) rebuilding infrastructure of cities or counties damaged by Hurricane Katrina, (3) projects that substantially improve the quality of life for the area, and (4) new economic development projects. In April 2006, Alabama began awarding allocations based on the Governor’s priorities. According to Alabama officials, the apparent demand was for new economic development projects, and the state decided to adopt a first-come, first-served approach for allocating GO Zone private activity bond authority. According to the officials we interviewed, one possible explanation for why many potential GO Zone bond borrowers in the first two categories generally did not apply for tax-exempt private activity bonds could be that they received insurance settlements and federal grants and loans for rebuilding efforts.
Unlike regular private activity bonds subject to volume caps, Alabama did not charge an application fee for GO Zone bonds. GO Zone private activity bond applicants in Alabama submit applications to the Alabama Department of Finance, Debt Management Division, which reviews the applications for completeness and the applications are then submitted to the Finance Director for his approval. The Governor did not set a maximum amount of authority that may be allocated per project. In Alabama, the bonds must be issued within 1 year of the time the borrower receives final approval to issue bonds, but borrowers may request an extension. In Mississippi, the Governor’s Office grants the final approval of GO Zone bond allocations on a case-by-case basis after reviewing recommendations from the Mississippi Development Authority (MDA). The MDA reviews GO Zone bond applications for information on the number of jobs the project will create, its projected general economic effect, and the degree to which the developer is willing to invest its own funds in the project. Mississippi’s application for private activity bond authority includes a $1,000 application fee and applications are reviewed in the order in which they are received. Recommendations for allocation are generally made by MDA on a first-come, first-served basis. Mississippi officials indicated that they have tried to ensure that the six most-damaged counties along the Gulf Coast have benefited from projects with GO Zone private activity bond allocations, but no portion of the GO Zone bond allocation authority was set aside specifically for these counties. The Governor did not set a maximum amount of authority that may be allocated per project. Once a project receives final approval in Mississippi, bonds must be issued within 120 days. Louisiana’s process for allocating GO Zone private activity bonds has changed over time, and as of June 2008, most of the bond allocation authority has been awarded on a first-come, first-served basis with a portion set-aside for the 13 most-damaged parishes. The Louisiana State Bond Commission (SBC) in February 2006 planned to set aside 50 percent of its allocation authority for the 13 most-damaged parishes and 50 percent for the rest of the GO Zone. No single project was to receive more than $250 million in GO Zone bond allocation authority. In March 2006, the Louisiana Department of Economic Development (LDED) began recommending that projects receive allocations based on applications to the Louisiana SBC on a first-come, first-served basis. Under the GO Zone Act of 2005, the Louisiana SBC had authority to make GO Zone bond allocations. However, following SBC approval of GO Zone bond allocations, the final step in the Louisiana process included the Governor issuing an allocation letter to each GO Zone bond recipient. Louisiana state officials we interviewed stressed that they did not initially believe they would have sufficient demand to use all of their $7.9 billion allocation by the end of 2010, when the GO Zone bond authority expires. Beginning in May 2006, Louisiana SBC staff provided status updates to Louisiana SBC members on the amount of remaining and used GO Zone bond authority for the 13 most-damaged parishes and the remaining parishes at monthly Louisiana SBC meetings. 
By the end of June 2007, the Louisiana SBC had granted final approval for almost $4.6 billion, or nearly 60 percent, of the state’s GO Zone bond allocation authority, and preliminary approval to projects worth an additional $3.0 billion, leaving roughly $300 million in allocation authority. The allocations approved included projects exceeding the $250 million cap with two large projects for $1 billion each. According to bond commission meeting transcripts, some bond commission members expressed concerns that the allocations had not been effectively targeted to areas and businesses with the greatest need. While the 13 most-damaged parishes in aggregate had received more than 50 percent of the final allocations, some of the most severely damaged areas around New Orleans had received relatively little as of June 2007. For example, as of June 2007, Orleans parish had only received about 2 percent of the GO Zone bond allocation authority that had been granted final approval. According to Louisiana SBC officials, this was in large part due to a lack of applications for potential GO Zone bond projects in the most severely damaged areas. Around that time, the Governor’s office declined to issue final allocation letters pending review of the state’s GO Zone bond process, and the Louisiana SBC temporarily stopped awarding final approval for GO Zone bonds. In September 2007, the Louisiana Attorney General ruled that the Governor did not have the authority to issue final approval letters for GO Zone private activity bonds because the federal act specified that the state bond commission has final approval. In September 2007, the Louisiana SBC, in conjunction with the Governor’s Office and the LDED, revised the GO Zone allocation process and established two pools of allocation authority—a “designated pool” for the 13 most-damaged parishes and a “competitive pool” for the rest of the GO Zone parishes. While the designated pool resembled the SBC’s original concept, the list of the 13 parishes targeted for GO Zone bond allocations was updated based in part on FEMA data on businesses and infrastructure damage and HUD data on major and severe housing damage. Each pool was to receive half of the remaining GO Zone bond authority, or about $1.7 billion per pool. Further, each of the 13 parishes in the designated pool was to receive GO Zone bond allocation authority in proportion to the degree of damage sustained. In addition, projects that had received preliminary approval earlier were put on hold and had to qualify for final approval by satisfying certain criteria relating to the number of jobs created and overall economic effect. The Louisiana SBC announced allocation awards in October 2007, awarding allocations for all remaining authority in the competitive pool with about $900 million of allocation authority remaining in the designated pool. From October 2007 to April 2008, GO Zone bond projects faced difficulty obtaining credit due in part to changing market conditions nationwide, and some GO Zone projects requested additional time to complete their issuance. However, some bond commissioners expressed concern that repeatedly extending the period to issue GO Zone bonds allowed less viable projects to retain their allocations while other applications were waiting for consideration. 
On the basis of its March 2008 review of Louisiana’s GO Zone bond process requested by the newly elected Governor, the LDED acknowledged that the Louisiana GO Zone bond process lacked effective prioritization mechanisms and lacked mechanisms to ensure selected projects would be viable, and that the changing rules had led to disappointment and confusion among some stakeholders. Therefore, the LDED recommended that the Louisiana SBC revise its rules for allocating the remaining private activity bond authority. In April 2008, the Louisiana SBC adopted the LDED’s recommendations and, among other recommendations, took the following actions:
- Award authority in the competitive pool to applicants that applied before November 2007 on a first-come, first-served basis. After making awards to those existing applicants, future awards from the competitive pool will be prioritized based on whether the LDED determines that the proposed project is consistent with Louisiana’s longer-term economic development strategy. However, even if the LDED does not designate a project as consistent with Louisiana’s longer-term economic development strategy, the projects are considered by the SBC on a first-come, first-served basis, and, at the SBC’s discretion, may or may not be awarded an allocation.
- Limit the amount of authority that any single developer can receive to $100 million going forward.
- Allow local jurisdictions in the designated pool to recommend projects to the Louisiana SBC for approval, and local jurisdictions may recommend projects exceeding the $100 million limit, as long as the local jurisdiction is located in a designated pool parish that has allocation authority remaining and the parish approves the allocation.
- Impose a 240-day period to issue bonds after receiving allocations.
- Require, in addition to the regular application fee, that applicants must submit a refundable deposit totaling 0.5 percent of the proposed allocation with their application. This change also applied to the applicants that had received allocations but had not yet issued bonds as a condition for them to receive the full 240-day extension of their allocation time limit.
In May 2008, the Louisiana SBC awarded GO Zone bond authority to applicants in the competitive pool pending before November 2007 as well as to all other pending applications on the basis of the prioritization policy described above, effectively exhausting the competitive pool allocation authority on a first-come, first-served basis. As of June 2008, parishes in the designated pool have bond authority remaining to allocate. According to Louisiana officials, if the designated pool parishes have not issued bonds for all of their authority by the beginning of 2010, the remaining authority will be available to both the competitive and designated pools during 2010. The HFAs from the five eligible states used their existing procedures to announce the availability of credits, process applications, and award GO Zone LIHTCs. Three of the five states announced the availability of GO Zone LIHTCs in their qualified allocation plans (QAP), which HFAs use to announce the availability of credits, eligibility criteria, and information on ranking applications. The Florida HFA had already finalized its QAP in 2006 (before the state received the extra GO Zone allocation), so the availability of these funds was not announced in the QAP, but was advertised at the beginning of the 2006 funding cycle.
Similarly, the Texas HFA announced the availability of GO Zone LIHTCs through a separate policy because its QAP had already been published. Officials from each state HFA stated that they used their existing procedures to process GO Zone LIHTC applications and to award the credits. These processes generally include scoring and ranking applications, reviewing market studies and environmental assessments, underwriting, and making funding recommendations to the boards. States generally announced funding priorities for each funding cycle through their QAPs. According to the 2006 through 2008 QAPs and HFA officials, in some funding cycles the HFAs for Louisiana, Mississippi, and Texas gave priority to projects that were to be developed in the GO Zone counties with the most hurricane-related damage. For example, according to the 2007/2008 QAP for Louisiana, at least 75 percent of the 2007 and 2008 GO Zone credits were reserved for parishes with significant numbers of rental housing with damage from Hurricanes Katrina and Rita. While 13 specific parishes were targeted in Louisiana for GO Zone bonds, the QAP targeted 8 parishes for GO Zone LIHTCs. The QAPs for Alabama and Florida did not give any priority to projects in the most-damaged GO Zone counties. According to the addendum to Alabama’s 2006 QAP, the HFA’s priority was to balance the distribution of credits throughout the state in terms of geographical regions, counties, and urban and rural areas. Officials from the Florida HFA stated that the need to prioritize the GO Zone counties with the most damage was mitigated by an agency policy to prioritize the counties that were hardest hit by 2004 and 2005 hurricanes, which already included some of the GO Zone counties. HFAs are required to annually monitor LIHTC-funded projects for compliance with the Internal Revenue Code once projects are complete and in service, and report any instances of noncompliance to the IRS after giving the owner time to correct the problem. As a part of this annual monitoring process, officials review documentation to ensure that the properties meet one of two tests that restrict both the amount of rent that is assessed to tenants and the income of eligible tenants. In addition, at least once every 3 years, HFAs must conduct on-site inspections of all buildings in a project and, for at least 20 percent of the project’s low- income units, inspect the units and review income and rent records. For example, the Alabama HFA’s compliance manual states that its staff will perform annual file reviews, including site visits as needed, to ensure that the owner is operating the project in compliance. Similarly, according to the Louisiana HFA compliance manual, staff will annually review compliance monitoring reports, which all project owners are required to submit, and periodically conduct on-site visits. We did not assess the extent to which monitoring policies have been followed since few GO Zone LIHTC-funded units had been placed in service as of April 2008. The Louisiana SBC issued tax credit bonds matched by the state with general obligation bonds and will retire the bonds by July 17, 2008, by refunding them with tax-exempt general obligation bonds. The state accepted applications from local entities to receive funds from the tax credit bond proceeds. In Mississippi, the State Treasurer’s Office issued tax credit bonds matched by state general fund money to provide temporary debt relief at the state level. They will retire these bonds on October 30, 2008. 
Both states rely on the state’s bond counsel opinions to ensure that the tax credit bonds are issued in accordance with applicable federal laws. Alabama did not issue the tax credit bonds because officials in Alabama’s Debt Management Division believed the deadline for issuing tax credit bonds would expire before the state would be able to issue general obligation bonds. Alabama officials said the state is required to amend its constitution to issue general obligation bonds; as we noted earlier, states are required to issue the GO Zone tax credit bonds as general obligation bonds. As such, Alabama officials did not believe they would be able to meet the deadline for issuing the tax credit bonds. The Governor of Alabama delegated the responsibility to allocate advance refundings to the Finance Director. Alabama allocated advance refundings on a first-come, first-served basis as counties and bond authorities submitted advance refunding applications to the Alabama Department of Finance, Debt Management Division. According to Alabama officials, as long as the bond issuer and bond counsel certified that the bonds qualified for advance refunding, the state approved the application. In Louisiana, additional advance refunding applicants were to apply to the Louisiana SBC, with a copy being sent to the Governor’s Office. Applications for additional advance refunding authority were approved on a first-come, first-served basis. Louisiana indicated the state will approve eligible advance refundings on a first-come, first-served basis until $1.5 billion of allocation authority remains and then will develop criteria to allocate the remaining authority. The issuer’s bond counsel must indicate in writing that the bonds meet applicable requirements. The Mississippi State Treasurer’s Office, along with other state agencies, made cities, counties, and other governmental debt issuers aware of the additional advance refunding provision. It then allocated additional advance refunding authority to entities that applied to the Treasurer’s Office generally on a first-come, first-served basis after the issuer’s bond counsel indicated the bonds met the eligibility requirements for GO Zone additional advance refundings and the State Treasurer certified that the issuer filed a completed application. As of mid-June 2008, states with GO Zones had allocated 87 percent of their authority for GO Zone tax-exempt private activity bonds and as of April 2008 about 95 percent of their LIHTC authority. However, bonds have been issued for only about half of the final allocations, representing 44 percent of the total GO Zone tax-exempt private activity bond authority, and few LIHTC units were in service. As of those dates, Alabama, Louisiana, and Mississippi used GO Zone private activity bonds to finance a broad range of private facilities, including manufacturing facilities, utilities, housing, retail facilities, hotels, and other facilities. Depending on circumstances such as time frames for issuing the bonds and the number of eligible bonds available, states’ use of GO Zone tax credit bonds and additional advance refundings varied. States used these latter two provisions to provide debt relief at the state and local level. State officials identified some challenges in making use of all four provisions. As of mid-June 2008, the three eligible states have allocated 87 percent of the $14.9 billion in GO Zone private activity bond authority, and bonds issued amounted to about 44 percent of the total allocation authority (see fig. 
2). Alabama, Louisiana, and Mississippi have allocated 88 percent, 89 percent, and 84 percent of their GO Zone private activity bond authority, respectively. Bonds had been issued for about half of the allocation authority that the states awarded. If applicants receiving private activity bond allocations cannot issue bonds within required time frames, the allocation authority is returned to the state to be reallocated. States have used these bonds for a wide range of facilities as allowed under the act. For example, states used GO Zone bonds for manufacturing facilities, utilities, housing, hotels, conference centers, retail facilities, and a variety of other private business activities. Under normal rules for tax-exempt private activity bond financing, some of these uses, including hotels and retail facilities, would not be allowed. For a listing of GO Zone private activity bonds issued in Alabama, Louisiana, and Mississippi as of mid- June 2008, see appendix I. In all three eligible states, larger bonds account for at least 69 percent of the total GO Zone bonds issued. As of mid-June 2008, the 10 largest bonds issued in Alabama, Louisiana, and Mississippi account for approximately 92 percent, 76 percent, and 69 percent of total bonds issued, respectively. This is one indication that borrowers financing larger projects may have been able to use the GO Zone tax-exempt private activity bonds to a greater extent than borrowers financing smaller projects. Officials in Louisiana and Mississippi said that smaller projects, such as those issuing bonds for less than about $2 or $3 million, may have a more difficult time using the GO Zone private activity bond provision because the issuance costs, including fees for legal and financial advisors, begin to account for a larger percentage of bond proceeds for smaller bonds than for larger bonds. Some GO Zone parishes and counties contained multiple projects that received GO Zone bond allocations, with total amounts in the counties and parishes ranging from less than $1 million to over $1 billion. The states also awarded projects that were to benefit multiple GO Zone counties and parishes. Alabama awarded 7 percent of its GO Zone private activity bond authority to projects affecting multiple counties. Louisiana awarded 6 percent of its authority to projects affecting multiple parishes and Mississippi awarded 29 percent of its authority in a similar manner. Figure 3 shows the range of GO Zone tax-exempt private activity bond allocations by state and by county or parish as of mid-June 2008. Even though the figure indicates that some of the most-damaged counties and parishes in Mississippi and Louisiana did not yet have any specific projects that have received GO Zone bond allocations as of June 2008, these counties and parishes may have benefited from some of the projects affecting multiple counties or parishes within that state. Projects are experiencing challenges in issuing the GO Zone private activity bonds. 
According to state officials and others we interviewed: private businesses may need to combine tax-exempt bond financing with other federal, state, or local subsidies for a project to be viable; risks associated with financing projects in areas most vulnerable to future natural disasters can make it difficult to sell the bonds; insurance costs have risen since the 2005 hurricanes; reduced availability of credit nationwide has made it more difficult to issue bonds; and the most heavily damaged areas in and around New Orleans, Louisiana, may not be able to use their designated allocation authority before it expires in 2010. The five eligible states have awarded 95 percent of the combined $330 million in additional tax credit allocation from 2006 through 2008 as provided under the GO Zone Act of 2005. The state HFAs for Florida, Mississippi, and Texas awarded all of the $113 million of the additional GO Zone LIHTC allocations from 2006 through 2008. As of April 2008, the Alabama HFA had awarded 74 percent of its $47 million GO Zone LIHTC allocation for 2006 through 2008. The agency awarded all of its 2006 and 2007 allocations, and was in the process of awarding its nearly $16 million 2008 GO Zone LIHTC allocation. The Louisiana HFA had awarded all of its GO Zone LIHTCs, but some have since been returned. Few of the planned GO Zone LIHTC funded units were in service as of April, 2008 (see fig. 4). While LIHTC-funded units are generally required to be placed in service within 2 years of credit allocation, Congress extended this requirement for units funded with GO Zone LIHTCs. GO Zone LIHTC- funded units (as well as regular LIHTC units funded during 2006 through 2008 in a GO Zone) must be placed in service before January 1, 2011. Officials from the five HFAs that received GO Zone LIHTCs told us that most units are anticipated to be in service within the required time frames, but that they are working to address challenges to help ensure that all units are placed in service by the deadline. One of the challenges that some HFA officials stated that they are facing is the declining market value of tax credits. As the value of tax credits declines, developers get less equity from investors for each dollar in tax credits awarded. As a result, developers must seek additional funding sources to make up for the equity shortfall. HFA officials from Alabama, Louisiana, and Mississippi also explained that they have encountered resistance from some communities to the development of affordable rental housing. Such resistance generally slows development, as developers try to work out such issues with local officials. In addition, HFA officials noted that other challenges include the need to address environmental issues, and increases in the total costs to develop projects, due to increased costs for labor, materials, and land. The number of planned GO Zone LIHTC-funded units, and units placed in service as of April 2008, varies by GO Zone parish and county (see fig. 5 and app. II, table 5). As an example, the total number of planned and in- service units ranged from a low of 24 units in Washington County, Alabama, to a high of 8,636 in Orleans Parish, Louisiana. In most counties, from 1 to 150 units were either planned or in service as of April 2008. In other GO Zone counties, no GO Zone LIHTC-funded units are planned. 
Our analysis focuses on GO Zone LIHTC-funded units, and it is important to note that other programs can also be used to fund affordable rental housing, such as HUD programs and other state and local programs. According to statewide HUD data, over 82,000 rental housing units in Louisiana and over 20,000 rental housing units in Mississippi had major or severe damage. Of these units, nearly 80,000 were in eight Louisiana parishes and about 19,200 were in three Mississippi counties (see fig. 6). In Louisiana and Mississippi, the number of rental units with major or severe damage ranged from a low of 468 in Vermilion Parish, Louisiana, to a high of nearly 52,000 units in Orleans Parish, Louisiana. Comparing the number of planned and in-service GO Zone LIHTC-funded units to the number of rental housing units with major or severe damage shows that about 17 percent of the rental housing units with major or severe damage in the state of Louisiana, and 45 percent of the similarly damaged units in the state of Mississippi, will be addressed by GO Zone LIHTC units. The extent to which damaged units will be addressed varies by county and parish (see fig. 7 and app. II, table 5). Louisiana and Mississippi issued a total of $300 million of GO Zone tax credit bonds, exhausting their allocation authority. Louisiana issued its $200 million of the tax credit bonds to distribute funds through a “Debt Service Assistance Fund” to selected local government entities that applied to the State Bond Commission. In all, 13 local government entities in New Orleans and the surrounding area received bond proceeds from Louisiana’s tax credit bonds and general obligation matching bonds. The local governments used the funds to make premium and interest payments on bonds issued before August 28, 2005. Mississippi issued $100 million of the tax credit bonds to make debt service payments on state bonds for fiscal year 2007. As discussed above, Alabama did not issue tax credit bonds because state officials did not believe they would be able to meet the requirements for issuing general obligation bonds (tax credit bonds must be general obligations of the state) before the provision expired. Mississippi officials indicated that educating investors about the mechanics of tax credit bonds—that investors receive a credit against tax liability but no interest payments—proved to be one challenge in marketing the bonds to investors. In total, Alabama, Louisiana, and Mississippi have issued GO Zone additional advance refunding bonds accounting for 16 percent of their total allocation authority. As of mid-June 2008, Alabama had issued 65 bonds and approved an additional 19 advance refundings, totaling $599 million, or about 53 percent of its allocation authority. In contrast, Louisiana has issued 9 advance refundings, totaling $483 million, or about 11 percent of its allocation authority, and Mississippi has only issued 6 advance refundings, totaling $169 million, or about 8 percent of its allocation authority. Appendix III lists the additional advance refunding bonds issued in Alabama, Louisiana, and Mississippi. Similar to the case of other tax incentives such as private activity bonds and tax credit bonds, the states faced certain challenges in issuing additional advance refundings.
According to state officials in Louisiana and Mississippi, some bonds for which the state may have been able to approve an additional advance refunding still had both the original bond awaiting its call date and the first advance refunding bond outstanding. Under the GO Zone Act of 2005, such bonds are ineligible for an additional advance refunding. Also, according to officials in Louisiana, some debt had already been refunded before the hurricanes at lower interest rates than those available after the 2005 hurricanes. We provided state officials in Alabama, Florida, Louisiana, Mississippi, and Texas with portions of the draft report that addressed aspects of GO Zone tax incentive allocation and use in their jurisdictions. We incorporated their technical comments as appropriate. We also provided relevant sections of the report to the Internal Revenue Service and the Office of Tax Policy, Department of the Treasury. We also incorporated their technical comments as appropriate. We are sending copies of this report to the interested congressional committees, the Commissioner of Internal Revenue, the Secretary of Housing and Urban Development, and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report or would like additional information, please contact me at (202) 512-9110 or at [email protected]. Major contributors to this report are acknowledged in appendix V.
Appendix II, table 5: GO Zone LIHTC units as a percent of units with severe or major damage, by county and parish (entries for many counties and parishes are shown as not applicable (n.a.)).
Appendix III: Additional Advance Refunding Bonds Issued, by State. Alabama issuers include Alabama A&M University; The Public Building Authority of the City of Huntsville; Jasper Water Works & Sewer Board Inc.; Utilities Board of the City of Opp; Dale County Board of Education; Crenshaw County Public Building Authority; The Governmental Utility Services Corp. of Bessemer; The Hale County Health Care Authority; and Lamar County Water & Fire Protection Authority Inc.
In addition to the individual named above, MaryLynn Sergent, Assistant Director; Dan Garcia-Diaz; Thomas Gilbert; Patrick Hatch; Suzanne Heimbach; Lisa Moore; John McGrail; Cheryl Peterson; and Jessica Thomsen made key contributions to this report. | In 2005, Hurricanes Katrina, Rita, and Wilma devastated the Gulf Coast, destroying wide swaths of housing, key infrastructure, and numerous private businesses. In response, Congress granted the states a wide range of disaster relief, including billions of dollars of grants and tax incentives to revitalize the Gulf Coast. Specifically, the Gulf Opportunity (GO) Zone Act of 2005 (Pub. L. No. 109-135) provided tax incentives to individuals and businesses in certain presidentially declared disaster areas. Congress mandated that GAO review how state and local governments allocated and used federal tax incentives in the act and subsequent legislation. This report (1) identifies tax incentives in the GO Zone Act of 2005 and subsequent legislation for which state and local governments have allocation and oversight responsibilities, (2) describes the procedures state governments use in allocating the tax incentives, including how they plan to monitor compliance with federal laws, and (3) describes how tax incentives have been allocated and for what purposes. To address these objectives, GAO analyzed key documentation from GO Zone states and interviewed state officials, selected local officials, and representatives from private and nonprofit entities. States with GO Zones are responsible for allocating and overseeing the use of four tax incentives in the GO Zone Act of 2005. Alabama, Louisiana, and Mississippi received allocation authority for all four provisions. Florida and Texas each received $3.5 million in GO Zone low-income housing tax credit (LIHTC) authority, but did not receive allocations under the other incentives.
With some process variations, the three eligible states with GO Zones generally allocated bond authority on a first-come, first-served basis without consistently targeting the allocations to assist recovery in the most damaged areas. Officials in Louisiana and Mississippi acknowledged that the first-come, first-served approach led to allocating bond authority to less-damaged areas at the start of the program. The five eligible state housing finance agencies used existing processes to award GO Zone LIHTCs, but differed in how they targeted these credits. For all three bond provisions, state officials and bond issuers said the borrower's bond counsel is generally responsible for ensuring that the bonds are compliant with applicable laws when issued. State housing finance agencies plan to use existing procedures to monitor compliance once units are placed in service. As of mid-June 2008, eligible states had allocated 87 percent of the GO Zone private activity bond authority, but bonds issued amount to about 50 percent of the total awarded allocation authority. The bonds issued will be used to finance a wide range of facilities, including manufacturing facilities, utilities, housing, retail facilities, and hotels. State housing finance authorities have awarded 95 percent of the GO Zone LIHTCs. Although few housing units are currently in service, state housing finance agency officials said planned units will be in service by the mandated deadline. GO Zone LIHTC-funded units will address about 17 and 45 percent of the rental housing units with major or severe damage in the states of Louisiana and Mississippi, respectively. The three eligible states with GO Zones used the tax credit bonds and additional advance refundings to varying degrees to provide debt relief. State officials said current economic conditions pose challenges for using both GO Zone bond and LIHTC financing. |
Multiple executive-branch agencies are responsible for different phases of the federal government’s personnel security clearance process. For example, in 2008, Executive Order 13467 designated the DNI as the Security Executive Agent. As such, the DNI is responsible for developing policies and procedures to help ensure the effective, efficient, and timely completion of background investigations and adjudications relating to determinations of eligibility for access to classified information and eligibility to hold a sensitive position. In turn, executive branch agencies determine which of their positions—military, civilian, or private-industry contractors—require access to classified information and, therefore, which people must apply for and undergo a personnel security clearance investigation. Investigators—often contractors—from Federal Investigative Services within the Office of Personnel Management (OPM) conduct these investigations for most of the federal government using federal investigative standards and OPM internal guidance as criteria for collecting background information on applicants. OPM provides the resulting investigative reports to the requesting agencies for their internal adjudicators, who use the information along with the federal adjudicative guidelines to determine whether an applicant is eligible for a personnel security clearance. DOD is OPM’s largest customer, and its Under Secretary of Defense for Intelligence (USD(I)) is responsible for developing, coordinating, and overseeing the implementation of DOD policy, programs, and guidance for personnel, physical, industrial, information, operations, chemical/biological, and DOD Special Access Program security. Additionally, the Defense Security Service, under the authority, direction, and control of the USD(I), manages and administers the DOD portion of the National Industrial Security Program for the DOD components and other federal services by agreement, as well as providing security education and training, among other things. The Intelligence Reform and Terrorism Prevention Act of 2004 (Pub. L. No. 108-458 (2004), relevant sections codified at 50 U.S.C. § 3341) prompted government-wide suitability and security clearance reform. The act required, among other matters, an annual report to Congress—in February of each year from 2006 through 2011—about progress and key measurements on the timeliness of granting security clearances. It specifically required those reports to include the periods of time required for conducting investigations and adjudicating or granting clearances. However, the Intelligence Reform and Terrorism Prevention Act requirement for the executive branch to annually report on its timeliness expired in 2011. More recently, the Intelligence Authorization Act of 2010 established a new requirement that the President annually report to Congress the total amount of time required to process certain security clearance determinations for the previous fiscal year for each element of the Intelligence Community. The Intelligence Authorization Act of 2010 additionally requires that those annual reports include the total number of active security clearances throughout the United States government, including both government employees and contractors. Unlike the Intelligence Reform and Terrorism Prevention Act of 2004 reporting requirement, the requirement to submit these annual reports does not expire.
In 2007, DOD and the Office of the Director of National Intelligence (ODNI) formed the Joint Security Clearance Process Reform Team, known as the Joint Reform Team, to improve the security clearance process government-wide. In a 2008 memorandum, the President called for a reform of the security clearance and suitability determination processes and subsequently issued Executive Order 13467, which in addition to designating the DNI as the Security Executive Agent, also designated the Director of OPM as the Suitability Executive Agent. Specifically, the Director of OPM, as Suitability Executive Agent, is responsible for developing policies and procedures to help ensure the effective, efficient, and timely completion of investigations and adjudications relating to determinations of suitability, to include consideration of an individual’s character or conduct. Further, the executive order established a Suitability and Security Clearance Performance Accountability Council (Performance Accountability Council) to oversee agency progress in implementing the reform vision. Under the executive order, this council is accountable to the President for driving implementation of the reform effort, including ensuring the alignment of security and suitability processes, holding agencies accountable for implementation, and establishing goals and metrics for progress. The order also appointed the Deputy Director for Management at the Office of Management and Budget as the Chair of the council. To help ensure the trustworthiness and reliability of personnel in positions with access to classified information, executive branch agencies rely on a personnel security clearance process that includes multiple phases: requirements determination, application, investigation, adjudication, appeals (if applicable, where a clearance has been denied), and reinvestigation (where applicable, for renewal or upgrade of an existing clearance). Figure 1 illustrates the steps in the personnel security clearance process, which is representative of the general process followed by most executive branch agencies and includes procedures for appeals and renewals. While different departments and agencies may have slightly different personnel security clearance processes, the phases that follow are illustrative of a typical process. In the first step of the personnel security clearance process, executive branch officials determine the requirements of a federal civilian position, including assessing the risk and sensitivity level associated with that position, to determine whether it requires access to classified information and, if required, the level of access. Security clearances are generally categorized into three levels: top secret, secret, and confidential. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national defense. A sound requirements determination process is important because requests for clearances for positions that do not need a clearance or need a lower level of clearance increase investigative workloads and resultant costs. In addition to cost implications, limiting the access to classified information and reducing the associated risks to national security underscore the need for executive branch agencies to have a sound process to determine which positions require a security clearance. 
In 2012, we reported that the DNI, as the Security Executive Agent, had not provided agencies with clearly defined policy and procedures to consistently determine if a position requires a security clearance, or established guidance to require agencies to review and revise or validate existing federal civilian position designations. We recommended that the DNI issue policy and guidance for the determination, review, and validation of requirements, and ODNI concurred with those recommendations, stating that it recognized the need to issue or clarify policy. We routinely monitor the status of agency actions to address our prior report recommendations. As part of that process, we found that a January 25, 2013 presidential memo authorized the DNI and OPM to jointly issue revisions to part 732 of Title 5 of the Code of Federal Regulations, which provides requirements and procedures for the designation of national security positions. Subsequently, ODNI and OPM drafted the proposed regulation; published it in the Federal Register on May 28, 2013; and the comment period closed. We reported on October 31, 2013 that ODNI and OPM officials stated that they would jointly review and address comments and prepare the final rule for approval from the Office of Management and Budget. Once an applicant is selected for a position that requires a personnel security clearance, a security clearance must be obtained in order for an individual to gain access to classified information. To determine whether an investigation would be required, the agency requesting a security clearance investigation conducts a check of existing personnel security databases to determine whether there is an existing security clearance investigation underway or whether the individual has already been favorably adjudicated for a clearance in accordance with current standards. During the application submission phase, a security officer from an executive branch agency (1) requests an investigation of an individual requiring a clearance; (2) forwards a personnel security questionnaire (Standard Form 86) using OPM’s electronic Questionnaires for Investigations Processing (e-QIP) system or a paper copy of the Standard Form 86 to the individual to complete; (3) reviews the completed questionnaire; and (4) sends the questionnaire and supporting documentation, such as fingerprints and signed waivers, to OPM or its investigation service provider. During the investigation phase, investigators—often contractors—from OPM’s Federal Investigative Services use federal investigative standards and OPM’s internal guidance to conduct and document the investigation of the applicant. The scope of information gathered in an investigation depends on the needs of the client agency and the personnel security clearance requirements of an applicant’s position, as well as whether the investigation is for an initial clearance or a reinvestigation to renew a clearance. For example, in an investigation for a top secret clearance, investigators gather additional information through more time-consuming efforts, such as traveling to conduct in-person interviews to corroborate information about an applicant’s employment and education. However, many background investigation types have similar components. For instance, for all investigations, information that applicants provide on electronic applications is checked against numerous databases. 
Both secret and top secret investigations contain credit and criminal history checks, while top secret investigations also contain citizenship, public record, and spouse checks as well as reference interviews and an Enhanced Subject Interview to gain insight into an applicant’s character. Table 1 highlights the investigative components generally associated with the secret and top secret clearance levels. After OPM, or the designated provider, completes the background investigation, the resulting investigative report is provided to the requesting agencies for their internal adjudicators. In December 2012, ODNI and OPM jointly issued a revised version of the federal investigative standards for the conduct of background investigations for individuals who work for or on behalf of the federal government. According to October 31, 2013, testimony by an ODNI official, the revised standards will be implemented through a phased approach beginning in 2014 and continuing through 2017. During the adjudication phase, adjudicators from the hiring agency use the information from the investigative report along with federal adjudicative guidelines to determine whether an applicant is eligible for a security clearance. To make clearance eligibility decisions, the adjudication guidelines specify that adjudicators consider 13 specific areas that elicit information about (1) conduct that could raise security concerns and (2) factors that could allay those security concerns and permit granting a clearance. If a clearance is denied or revoked, appeals of the adjudication decision are possible. We have work under way to review the process for security clearance revocations. We expect to issue a report on this process in the spring of 2014. Once an individual has obtained a personnel security clearance and as long as he or she remains in a position that requires access to classified national security information, that individual is reinvestigated periodically at intervals that depend on the level of security clearance. For example, top secret clearance holders are reinvestigated every 5 years, and secret clearance holders are reinvestigated every 10 years. Some of the information gathered during a reinvestigation would focus specifically on the period of time since the last approved clearance, such as a check of local law enforcement agencies where an individual lived and worked since the last investigation. Further, the Joint Reform Team began an effort to review the possibility of continuing evaluations, which would ascertain on a more frequent basis whether an eligible employee with access to classified information continues to meet the requirements for access. Specifically, the team proposed to move from periodic review to that of continuous evaluation, meaning annually for top secret or similar positions and at least once every 5 years for secret or similar positions, as a means to reveal security-relevant information earlier than the previous method, and provide increased scrutiny of populations that could potentially represent risk to the government because they already have access to classified information. The revised federal investigative standards state that the top secret level of security clearances may be subject to continuous evaluation. Executive branch agencies do not consistently assess quality throughout the personnel security clearance process, in part because they have not fully developed and implemented metrics to measure quality in key aspects of the process. 
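As a point of reference, the periodic reinvestigation intervals described above can be expressed as a simple lookup. The Python sketch below reflects only the intervals stated in this statement (5 years for top secret and 10 years for secret); it omits the confidential level and the proposed continuous evaluation approach, and it is not an implementation of any agency system.

```python
# Reinvestigation intervals, in years, as described in the text:
# top secret holders every 5 years, secret holders every 10 years.
REINVESTIGATION_INTERVAL_YEARS = {
    "top secret": 5,
    "secret": 10,
}

def next_reinvestigation_year(clearance_level: str, last_investigation_year: int) -> int:
    """Return the year in which the next periodic reinvestigation would be due."""
    interval = REINVESTIGATION_INTERVAL_YEARS[clearance_level.lower()]
    return last_investigation_year + interval

# Example: a secret clearance last investigated in 2008 would come due in 2018.
assert next_reinvestigation_year("Secret", 2008) == 2018
```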
We have emphasized—since the late 1990s—the need to build and monitor quality throughout the personnel security clearance process to promote oversight and positive outcomes such as maximizing the likelihood that individuals who are security risks will be scrutinized more closely. For example, in 2008 two of the key factors we identified to consider in efforts to reform the security clearance process were building quality into every step of the clearance processes and having a valid set of metrics for evaluating efficiency and effectiveness.We have begun additional work to review the quality of investigations. GAO, High-Risk Series: An Update, GAO-05-207 (Washington, D.C.: Jan. 2005). Every 2 years at the start of a new Congress, GAO issues a report that identifies government operations that are high risk because of their vulnerabilities to fraud, waste, abuse, and mismanagement, or are most in need of transformation to address economy, efficiency, or effectiveness. percent of all federal clearance investigations, including those for DOD; and (2) the granting of some clearances by DOD adjudicators even though some required data were missing from the investigative reports used to make such determinations. For example, in May 2009, we reported that, with respect to DOD initial top secret clearances adjudicated in July 2008, documentation was incomplete for most OPM investigative reports. We independently estimated that 87 percent of about 3,500 investigative reports that DOD adjudicators used to make clearance decision were missing at least one type of documentation required by federal investigative standards. The type of documentation most often missing from investigative reports was verification of all of the applicant’s employment followed by information from the required number of social references for the applicant and investigative reports did not contain a required personal subject interview. Officials within various executive branch agencies have noted to us that the information gathered during the interview and investigative portion of the process is essential for making adjudicative decisions. Department of Defense, DOD Personnel Security Program Regulation 5200.2-R (January 1987, incorporating changes Feb. 23, 1996). ability to explain the extent to which or the reasons why some files are incomplete. In November 2010, we reported that agency officials who utilize OPM as their investigative service provider cited challenges related to deficient investigative reports as a factor that slows agencies’ abilities to make adjudicative decisions. The quality and completeness of investigative reports directly affects adjudicator workloads, including whether additional steps are required before adjudications can be made, as well as agency costs. For example, some agency officials noted that OPM investigative reports do not include complete copies of associated police reports and criminal record checks. Several agency officials stated that in order to avoid further costs or delays that would result from working with OPM, they often choose to perform additional steps internally to obtain missing information. According to ODNI and OPM officials, OPM investigators provide a summary of police and criminal reports and assert that there is no policy requiring inclusion of copies of the original records. However, ODNI officials also stated that adjudicators may want or need entire records, as critical elements may be left out. 
For example, according to Defense Office of Hearings and Appeals officials, in one case, an investigator’s summary of a police report incorrectly identified the subject as a thief when the subject was actually the victim. As a result of the incompleteness of OPM’s investigative reports on DOD personnel and the incompleteness of DOD’s adjudicative files that we first identified in our 2009 report, we made several recommendations to OPM and DOD. We recommended that OPM measure the frequency with which its investigative reports meet federal investigative standards, so that the executive branch can identify the factors leading to incomplete reports and take corrective actions. OPM did not agree or disagree with our recommendation. In a subsequent February 2011 report, we noted that the Office of Management and Budget, ODNI, DOD, and OPM leaders had provided congressional members and executive branch agencies with metrics to assess the quality of investigative reports and adjudicative files and other aspects of the clearance process. For example, the Rapid Assessment of Incomplete Security Evaluations was one tool the executive branch agencies planned to use for measuring quality, or completeness, of OPM’s background investigations. However, in June 2012 an OPM official said that OPM chose not to use this tool and opted to develop another tool. We currently have work under way to review any actions OPM has taken to develop and implement metrics for measuring the completeness of OPM’s investigative reports. However, ODNI officials confirmed in January 2014 that OPM did not have such metrics in place. According to OPM officials, OPM also continues to assess the quality of investigations based on voluntary reporting from customer agencies. Specifically, OPM tracks investigations that are (1) returned for rework from the requesting agency, (2) identified as deficient using a web-based customer satisfaction survey, or (3) identified as deficient through adjudicator calls to OPM’s quality hotline. In our past work, we have noted that the number of investigations returned for rework is not by itself a valid indicator of the quality of investigative work because DOD adjudication officials told us that they have been reluctant to return incomplete investigations in anticipation of delays that would affect timeliness. Further, relying on agencies to voluntarily provide information on investigation quality may not reflect the quality of OPM’s total investigation workload. We also recommended in 2009 that DOD measure the frequency with which adjudicative files meet requirements, so that the executive branch can identify the factors leading to incomplete files and include the results of such measurement in annual reports to Congress on clearances. In November 2009, DOD subsequently issued a memorandum that established a tool to measure the frequency with which adjudicative files meet the requirements of DOD regulation. Specifically, the DOD memorandum stated that DOD would use a tool called the Review of Adjudication Documentation Accuracy and Rationales, or RADAR, to gather specific information about adjudication processes at the adjudication facilities and assess the quality of adjudicative documentation. In following up on our 2009 recommendations, as of 2012, a DOD official stated that RADAR had been used in fiscal year 2010 to evaluate some adjudications, but was not used in fiscal year 2011 because of funding shortfalls. DOD restarted the use of RADAR in fiscal year 2012.
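To illustrate what a completeness metric of the kind recommended above might look like, the sketch below shows one simple way such a measure could be computed from per-report documentation checklists. It is a hypothetical illustration, not OPM's or DOD's actual methodology (including RADAR); the required-item names are placeholders drawn from examples cited in this statement.

```python
from typing import Dict, Iterable

# Placeholder documentation items drawn from examples cited in this statement;
# the federal investigative standards define the authoritative list.
REQUIRED_ITEMS = ("employment_verification", "social_references", "personal_subject_interview")

def completeness_rate(reports: Iterable[Dict[str, bool]]) -> float:
    """Share of investigative reports that contain every required documentation item."""
    reports = list(reports)
    if not reports:
        return 0.0
    complete = sum(1 for r in reports if all(r.get(item, False) for item in REQUIRED_ITEMS))
    return complete / len(reports)

# Example with three hypothetical reports: only the first is complete.
sample = [
    {"employment_verification": True, "social_references": True, "personal_subject_interview": True},
    {"employment_verification": False, "social_references": True, "personal_subject_interview": True},
    {"employment_verification": True, "social_references": False, "personal_subject_interview": True},
]
print(f"{completeness_rate(sample):.0%}")  # 33%
```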
Several efforts are underway to review the security clearance process, and those efforts, combined with sustained leadership attention, could help facilitate progress in assessing and improving the quality of the security clearance process. After the September 16, 2013, shooting at the Washington Navy Yard, the President directed the Office of Management and Budget, in coordination with ODNI and OPM, to conduct a government-wide review into the oversight, nature, and implementation of security and suitability standards for federal employees and contractors. In addition, in September 2013, the Secretary of Defense directed an independent review to identify and recommend actions that address gaps or deficiencies in DOD programs, policies, and procedures regarding security at DOD installations and the granting and renewal of security clearances for DOD employees and contractor personnel. The primary objective of this review is to determine whether there are weaknesses in DOD programs, policies, or procedures regarding physical security at DOD installations and the security clearance and reinvestigation process that can be strengthened to prevent a similar tragedy. We initially placed DOD’s personnel security clearance program on our high-risk list in 2005 because of delays in completing clearances. Every 2 years at the start of a new Congress, GAO issues a report that identifies government operations that are high risk because of their vulnerabilities to fraud, waste, abuse, and mismanagement, or are most in need of transformation to address economy, efficiency, or effectiveness (see GAO, High-Risk Series: An Update, GAO-05-207 (Washington, D.C.: Jan. 2005)). In February 2011, we removed DOD’s personnel security clearance program from our high-risk list largely because of the department’s demonstrated progress in reducing the amount of time needed to process clearances. We also noted DOD’s efforts to develop and implement tools to evaluate the quality of investigations and adjudications. Even with the significant progress leading to removal of DOD’s program from our high-risk list, the Comptroller General noted in June 2012 that sustained leadership would be necessary to continue to implement, monitor, and update outcome-focused performance measures. The initial development of some tools and metrics to monitor and track quality, not only for DOD but government-wide, represented positive steps; however, full implementation of these tools and measures government-wide has not yet been realized. While progress in DOD’s personnel security clearance program resulted in the removal of this area from our high-risk list, significant government-wide challenges remain in ensuring that personnel security clearance investigations and adjudications are of high quality. However, if the oversight and leadership that helped address the timeliness issues focus now on the current problems associated with quality, we believe that progress in helping executive branch agencies to assess the quality of the security clearance process could be made.
The guidance requires, except in limited circumstances, that all Intelligence Community elements "accept all in-scope security clearance or access determinations." Additionally, Office of Management and Budget guidance requires agencies to honor a clearance when (1) the prior clearance was not granted on an interim or temporary basis; (2) the prior clearance investigation is current and in-scope; (3) there is no new adverse information already in the possession of the gaining agency; and (4) there are no conditions, deviations, waivers, or unsatisfied additional requirements (such as polygraphs) if the individual is being considered for access to highly sensitive programs. While the Performance Accountability Council has identified reciprocity as a government-wide strategic goal, we have found that agencies do not consistently and comprehensively track when reciprocity is granted, and lack a standard metric for tracking reciprocity. Further, while OPM and the Performance Accountability Council have developed quality metrics for reciprocity, the metrics do not measure the extent to which reciprocity is being granted. For example, OPM created a metric in early 2009 to track reciprocity, but this metric only measures the number of investigations requested from OPM that are rejected based on the existence of a previous investigation and does not track the number of cases in which an existing security clearance was or was not successfully honored by the agency. Without comprehensive, standardized metrics to track reciprocity and consistent documentation of the findings, decision makers will not have a complete picture of the extent to which reciprocity is granted or the challenges that agencies face when attempting to honor previously granted security clearances. In 2010, we reported that executive branch officials stated that they routinely honor other agencies' security clearances, and personnel security clearance information is shared between OPM, DOD, and, to some extent, Intelligence Community databases. However, we found that some agencies find it necessary to take additional steps to address limitations with available information on prior investigations, such as insufficient information in the databases or variances in the scope of investigations, before granting reciprocity. For instance, OPM has taken steps to ensure that certain clearance data necessary for reciprocity are available to adjudicators, such as holding interagency meetings to determine new data fields to include in shared data. However, we also found that the shared information available to adjudicators contains summary-level detail that may not be complete. As a result, agencies may take steps to obtain additional information, which creates challenges to immediately granting reciprocity. Further, we reported in 2010 that, according to agency officials, because there is no government-wide standardized training and certification process for investigators and adjudicators, a subject's prior clearance investigation and adjudication may not meet the standards of the inquiring agency. Although OPM has developed some training, security clearance investigators and adjudicators are not required to complete a certain type or number of classes. As a result, the extent to which investigators and adjudicators receive training varies by agency. Consequently, as we have previously reported, agencies are reluctant to be accountable for investigations or adjudications conducted by other agencies or organizations.
To achieve fuller reciprocity, clearance-granting agencies seek to have confidence in the quality of prior investigations and adjudications. Because of these issues identified by agency officials as hindrances to reciprocity and because the extent of reciprocity was unknown, we recommended in 2010 that the Deputy Director for Management, Office of Management and Budget, in that official's capacity as Chair of the Performance Accountability Council, develop comprehensive metrics to track reciprocity and then report the findings from the expanded tracking to Congress. Although the Office of Management and Budget agreed with our recommendation, a 2011 ODNI report found that Intelligence Community agencies experienced difficulty reporting on reciprocity. The agencies are required to report on a quarterly basis the number of security clearance determinations granted based on a prior existing clearance as well as the number not granted when a clearance existed. The numbers of reciprocal determinations made and denied are categorized by the individual's originating and receiving organizational type: (1) government to government, (2) government to contractor, (3) contractor to government, and (4) contractor to contractor. The ODNI report stated that the data fields necessary to collect the information described above do not currently reside in any of the available data sets, and that the process was completed using an agency-specific, semimanual method. The Deputy Assistant Director for Special Security of ODNI noted in testimony in June 2012 that measuring reciprocity is difficult, and despite an abundance of anecdotes, real data are hard to come by. To address this problem, in 2013 ODNI planned to develop a web-based form for individuals to use to submit their experience with reciprocity issues to ODNI. According to ODNI, this would allow it to collect empirical data, perform systemic trend analysis, and assist agencies with achieving workable solutions. However, in January 2014, ODNI officials told us that required resources and information technology were not available to support the development and implementation of a web-based form. Instead, ODNI is conducting a Reciprocity Research Study that will involve, among other things, agencies identifying their ability to collect reciprocity metrics. This study would assist ODNI in developing reciprocity performance measures and a new policy for reciprocity. ODNI would also use the study to determine if a web-based form would be of value. In conclusion, to avoid the risk of damaging, unauthorized disclosures of classified information, oversight of the reform efforts to measure and improve the quality of the security clearance process is imperative. The progress that was made with respect to reducing the amount of time required for processing clearances would not have been possible without committed and sustained congressional oversight and the leadership of the Performance Accountability Council. Further actions are needed now to fully develop and implement metrics to oversee quality at every step in the process. Further, ensuring the quality of personnel security clearance investigations and adjudications is important government-wide, not just for DOD. While reciprocity is required by law and, if implemented correctly, could enhance efficiency and present cost savings opportunities, much is unknown about the extent to which previously granted security clearance investigations and adjudications are honored government-wide.
Therefore, as noted above, we recommended that metrics be developed to track reciprocity; these metrics have yet to be fully developed and implemented. Assurances that all clearances are of a high quality may further encourage reciprocity of investigations and adjudications. We will continue to monitor the outcome of the agency actions discussed above to address our outstanding recommendations. Chairman Issa, Ranking Member Cummings, and Members of the Committee, this concludes my statement for the record. For further information on this testimony, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, who may be reached at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Margaret Best (Assistant Director), Lori Atkinson, Kevin Copping, Elizabeth Hartjes, Jeffrey Heit, Suzanne Perkins, Amie Steele, Erik Wilkins-McKee, and Michael Willems. Personnel Security Clearances: Actions Needed to Help Ensure Correct Designations of National Security Positions. GAO-14-139T. Washington, D.C.: November 20, 2013. Personnel Security Clearances: Opportunities Exist to Improve Quality Throughout the Process. GAO-14-186T. Washington, D.C.: November 13, 2013. Personnel Security Clearances: Full Development and Implementation of Metrics Needed to Measure Quality of Process. GAO-14-157T. Washington, D.C.: October 31, 2013. Personnel Security Clearances: Further Actions Needed to Improve the Process and Realize Efficiencies. GAO-13-728T. Washington, D.C.: June 20, 2013. Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013. Security Clearances: Agencies Need Clearly Defined Policy for Determining Civilian Position Requirements. GAO-12-800. Washington, D.C.: July 12, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012. GAO's 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Personnel Security Clearances: Overall Progress Has Been Made to Reform the Governmentwide Security Clearance Process. GAO-11-232T. Washington, D.C.: December 1, 2010. Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010. DOD Personnel Clearances: Preliminary Observations on DOD's Progress on Addressing Timeliness and Quality Issues. GAO-11-185T. Washington, D.C.: November 16, 2010. Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009. Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488.
Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD’s Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long- standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Recently the DNI reported that more than 5.1 million federal government and contractor employees held or were eligible to hold a security clearance. GAO has reported that the federal government spent over $1 billion to conduct background investigations (in support of security clearances and suitability determinations for federal employment) in fiscal year 2011. 
A high quality process is essential to minimize the risks of unauthorized disclosures of classified information and to help ensure that information about individuals with criminal activity or other questionable behavior is identified and assessed as part of the process for granting or retaining clearances. This statement addresses (1) a general overview of the security clearance process; (2) what is known about the quality of investigations and adjudications, which are the determinations made by executive branch agency officials to grant or reject clearance requests based on investigations; and (3) the extent of reciprocity, which is the decision of agencies to honor clearances previously granted by other agencies. This statement is based on GAO work issued from 2008 to 2013 on DOD's personnel security clearance program and government-wide suitability and security clearance reform efforts. As part of that work, GAO (1) reviewed relevant statutes, federal guidance, and processes, (2) examined agency data on the timeliness and quality of investigations and adjudications, (3) assessed reform efforts, and (4) reviewed a sample of case files for DOD personnel. Several agencies have key roles and responsibilities in the multi-phased personnel security clearance process, including the Director of National Intelligence (DNI) who, as the Security Executive Agent, is responsible for developing policies and procedures related to security clearance investigations and adjudications, among other things. The Deputy Director for Management at the Office of Management and Budget chairs the Performance Accountability Council that oversees reform efforts to enhance the personnel security process. The security process includes: the determination of whether a position requires a clearance, application submission, investigation, and adjudication. Specifically, agency officials must first determine whether a federal civilian position requires access to classified information. After an individual has been selected for a position that requires a personnel security clearance and the individual submits an application for a clearance, investigators—often contractors—from the Office of Personnel Management (OPM) conduct background investigations for most executive branch agencies. Adjudicators from requesting agencies use the information from these investigations and federal adjudicative guidelines to determine whether an applicant is eligible for a clearance. Further, individuals are subject to reinvestigations at intervals based on the level of security clearance. Executive branch agencies do not consistently assess quality throughout the personnel security clearance process, in part because they have not fully developed and implemented metrics to measure quality in key aspects of the process. For more than a decade, GAO has emphasized the need to build and monitor quality throughout the clearance process to promote oversight and positive outcomes such as maximizing the likelihood that individuals who are security risks will be scrutinized more closely. GAO reported in 2009 that, with respect to initial top secret clearances adjudicated in July 2008 for the Department of Defense (DOD), documentation was incomplete for most of OPM's investigative reports. 
GAO independently estimated that 87 percent of about 3,500 investigative reports that DOD adjudicators used to make clearance eligibility decisions were missing some required documentation, such as the verification of all of the applicant's employment, the required number of social references for the applicant, and complete security forms. In May 2009, GAO recommended that OPM measure the frequency with which its investigative reports met federal investigative standards to improve the completeness—that is, quality—of investigation documentation. In January 2014, DNI officials said that metrics to measure quality of investigative reports had not been established. GAO reported in 2010 that executive branch agencies do not consistently and comprehensively track the extent to which reciprocity is occurring because no government-wide metrics exist for tracking when reciprocity is granted. The acceptance of a background investigation or personnel security clearance determination completed by another authorized agency is an opportunity to save resources, and executive branch agencies are required by law to grant reciprocity, subject to certain exceptions, such as completing additional requirements like polygraph testing. GAO's 2010 recommendation that the leaders of the security clearance reform effort develop metrics to track reciprocity has not been fully implemented. |
Advisory groups—both FACA and non-FACA—exist throughout the executive branch of the federal government, providing input and advice to agencies in a variety of ways, such as preparing reports and developing recommendations. Agencies are not required to implement the advice or recommendations of advisory groups because they are by design advisory. While an advisory group’s input or recommendations may form the basis for a federal agency’s decisions or policies, other factors may play a role in determining what action an agency ultimately takes. However, both types of advisory groups serve as a mechanism for federal agencies to obtain input from internal and external stakeholders such as academics, industry associations, or other agencies. FACA was enacted in 1972 in response to concerns that federal advisory groups were proliferating without adequate review, oversight, or accountability. The General Services Administration (GSA) Committee Management Secretariat oversees each federal agency’s management of FACA advisory groups, develops guidelines and regulations, and conducts an annual review of FACA advisory groups governmentwide. For example, GSA provides guidance to federal agencies sponsoring FACA advisory groups and is involved in the process to establish new and oversee the management of existing FACA advisory groups. GSA collects and makes available governmentwide FACA advisory group information that agencies—including DOT and DOE—are required to provide through a publicly accessible database each fiscal year. In addition, each agency also develops its own policies and procedures for following FACA requirements. For example, DOT and DOE each have policy manuals governing the management of their FACA advisory groups. Each agency sponsoring FACA advisory groups appoints a Committee Management Officer responsible for overseeing compliance with FACA requirements, and appoints to each FACA advisory group a Designated Federal Official (DFO) responsible for attending meetings, approving agendas, and maintaining records on costs and membership, among other duties. Decisions regarding the establishment of new FACA advisory groups and recommendations to terminate or continue existing groups are made by the head of each agency based on recommendations made by the Committee Management Officer, the DFO assigned to each group, or other agency officials. FACA sets forth requirements for FACA advisory groups’ formation, their operations, and how they provide advice and recommendations to the federal government. To help avoid duplication of resources, FACA regulations require that the process to establish, renew, or reestablish discretionary FACA advisory groups—those established under agency authority or authorized by statute—must include an explanation stating why the group’s functions cannot be performed by the agency, another existing group, or other means. FACA also articulates broad requirements for balance, transparency, and independence. For example, for transparency, a range of information is to be reported in the FACA database, and meeting minutes and reports are to be made available to the public. The act also requires that all FACA advisory groups have a charter containing specific information, including the group’s scope and objectives, a description of duties, and the period of time necessary to carry out its purposes. Charters—and thus the FACA advisory groups— generally expire at the end of 2 years unless renewed by the agency, the Congress, or executive order. 
This requirement was intended to encourage agencies to periodically reexamine their need for FACA advisory groups. As previously noted, not every advisory group that provides advice or recommendations to an agency is subject to the FACA requirements. An advisory group may not be subject to FACA for a variety of reasons, including statutory language that may exempt a group from FACA. Further, certain types of groups are also exempt from FACA, including groups not managed or controlled by the executive branch, groups with membership consisting entirely of federal government officials, or intergovernmental groups. Non-FACA advisory groups are generally less formal than those established under and subject to the requirements of FACA. Because they are not subject to FACA, non-FACA advisory groups are not required to follow FACA requirements to hold public meetings or to make meeting minutes and reports publicly available. Similarly, agencies are not required to collect or report information identifying non-FACA advisory groups, and GSA does not have any oversight responsibilities pertaining to non-FACA advisory groups. While there is no specific entity or office that oversees non-FACA advisory groups, general guidance for the management of some of these groups—such as federal interagency groups—may be included within the agency's committee management policy manuals. For example, DOT's committee management policy covers FACA advisory groups, as well as interagency groups—one type of non-FACA advisory group—while DOE's policy is focused exclusively on FACA advisory groups. Agency-reported fiscal year 2010 costs for DOT and DOE FACA advisory groups were approximately $4 million and $13.6 million, respectively. As noted above, agencies self-report cost information, such as travel and per diem costs incurred by FACA advisory group activity or payments to members or consultants. Agencies sponsoring FACA advisory groups determine the level of financial and administrative support for their groups. Variations in costs are common given factors such as the number of meetings held or compensation rates for groups' members. For fiscal year 2010, the FACA database identified the 15 DOT and 21 DOE actively chartered FACA advisory groups covering various topics and issues related to their respective agency's mission. The approach used by DOT and DOE to assess duplication among advisory groups is often informal, and agency officials are not always clear about what steps should be taken to ensure the assessment of existing advisory groups is consistently made. GSA relies on federal agencies to follow the FACA requirement to check for duplication prior to filing a charter to establish a new, or renew an existing, FACA advisory group under agency authority. Furthermore, guidance for our two selected agencies requires officials to determine whether the objectives or duties of a proposed FACA advisory group could be achieved by an existing entity, committee, or organization within the agency or governmentwide. Some DOT and DOE officials told us they use the FACA database to check for potentially duplicative advisory groups. This may be a good first step to identifying FACA groups working on similar issues; however, it does not necessarily provide an adequate assessment for duplication. While the FACA database contains information on advisory group issue areas, it is limited in its ability to directly identify related groups.
For example, a search of the FACA database in the issue area of “surface and vehicular transportation” yielded approximately 60 FACA advisory groups working across 10 federal agencies. Further, issue areas are self- identified by agency officials and may not be consistently defined across agencies. We found that several agency officials were not aware of a process to determine whether the objectives or duties of an advisory group could be achieved by an existing entity, committee, or organization. In cases where officials indicated they were aware of a process, when asked to describe the process, a number described informal approaches for checking for duplication and did not articulate consistent steps taken to make these determinations. Several DOE officials reported that the agency’s Committee Management Office is involved and engages each FACA advisory group’s DFO to be aware of any existing entity or committee that could achieve the objectives being proposed, but they did not provide additional detail outlining formal steps taken to identify these groups. DOT and DOE officials also indicated that agency officials working in a program or issue area are generally able to identify groups that may be addressing similar topics using their existing knowledge of agency offices and programs. For example, some DOT officials noted that high-level program officials are likely to be aware of other groups dealing with an area of possible duplication and that this approach can serve as an informal mechanism to help identify relevant advisory groups working on related issues. However, without a process with specific steps to check for duplication (such as reaching out to key contacts of relevant advisory groups) assessment results may be inconsistent or incomplete. In contrast, one of the DOT agency offices we reviewed, the Federal Aviation Administration (FAA) Office of Rulemaking, has a policy that outlines specific steps agency officials should take prior to establishing a new advisory group. This policy is specific to aviation rulemaking advisory groups, covers both FACA and non-FACA advisory groups, and clearly lays out the process used to determine the need for and how to establish a new group. For instance, when an FAA office identifies an issue on which it would be helpful to obtain advice from industry, officials decide whether to request the standing Aviation Rulemaking Advisory Committee to accept the task or to charter a new aviation rulemaking committee based on the best fit given the specific topic or activity. The Aviation Rulemaking Advisory Committee is a formal, standing FACA advisory group; aviation rulemaking committees are non-FACA advisory groups formed on an ad hoc basis, for a specific purpose, and are typically of limited duration. One FAA official involved in these rulemaking advisory groups noted that this guidance offers those offices establishing advisory groups a process they can use to establish and manage their advisory groups, a useful tool because Congress often directs FAA to use these types of advisory groups to conduct rulemakings. While readily available information on FACA advisory groups—such as a designated point of contact and description of objectives—is accessible through a centralized database managed by GSA, similar information is not available for non-FACA advisory groups. 
Information on all FACA advisory groups—including DOT’s and DOE’s fiscal year 2010 groups—is readily available through the public FACA database, providing agency officials and interested parties with a basic level of transparency. This includes basic information such as contact information and descriptions of activities. In contrast, federal agencies are not required to, and may or may not track their non-FACA advisory groups, and neither of our two selected agencies had an existing inventory of all non-FACA advisory groups that provide advice or input to the agency. Using an agreed-upon definition for non-FACA advisory groups, DOT identified 19 and DOE identified 33 fiscal year 2010 non-FACA advisory groups. However, we could not confirm whether the groups identified include all of the non- FACA advisory groups for each agency, and DOT and DOE officials noted they do not necessarily consider their various groups as falling under a single definition of non-FACA advisory groups. Both DOT and DOE agency officials faced some challenges identifying and collecting basic information for non-FACA advisory groups—including agency points of contact and brief group descriptions—and the process was, at points, time consuming or cumbersome for them. DOT and DOE officials used different approaches to identify non-FACA advisory groups but encountered the following similar challenges in collecting basic information on these groups: DOT generally relied on officials at the program level to identify the agency’s non-FACA advisory groups, and in most cases, agency liaisons served as a conduit to identify the groups by providing officials working on various programs with the non-FACA definition. According to DOT officials, challenges in compiling the requested information included identifying the agency point of contact and locating additional descriptive information pertaining to non-FACA advisory group activities. For example, one agency official we spoke with relied on an Internet search engine to locate relevant information about some of the non-FACA advisory groups. Another DOT official was able to identify a few advisory groups based on indirect involvement and knowledge of agency activities and programs. Of the four DOT agency components that identified non-FACA advisory groups for this review, only one identified these groups based on a readily available roster. In this case, Maritime Administration officials identified five non-FACA advisory groups using a committee roster the agency maintains for internal purposes. This roster identifies the names of both FACA and non-FACA groups, any subcommittees, and primary and secondary points of contact. DOE officials coordinated with each of their program offices to identify their non-FACA advisory groups based on the agreed-upon definition. The officials told us there was some difficulty in trying to identify the non-FACA advisory groups because basic information pertaining to these groups is not readily available as it is for FACA advisory groups. DOE officials told us that they had to coordinate the efforts of multiple program offices to compile the information and noted the process was time-consuming because there is no existing source for non-FACA advisory group information. As a result, the program officers had to cull much of the information for these groups from various Internet websites. 
Because there is no way to readily identify non-FACA advisory groups providing advice to the agencies, there is no formal source of information enabling agency officials to conduct a comprehensive check for potentially duplicative groups. For example, DOT officials told us that, because they are only able to check whether a FACA advisory group overlaps or duplicates the work of existing FACA advisory groups, they would not necessarily be aware of potential overlap with advisory groups not subject to FACA. DOT officials also pointed out that, given the time and resources required to establish and manage an advisory group, there is no incentive to maintain a FACA advisory group that duplicates the activities of another group. However, with limited visibility over the universe of non-FACA advisory groups, there is no assurance that agency officials checking for duplication would know where to look or whom to contact for additional information necessary to assess duplication vis-à-vis those groups. This raises the risk that new advisory groups may be created or existing groups retained that are unnecessarily duplicative and therefore not an efficient use of agency resources. Further, this absence of readily available information may hinder other federal agencies from coordinating with or ensuring that their advisory groups are not unnecessarily duplicative with DOT or DOE non-FACA advisory groups. In one case, FAA officials reviewed other FAA offices and advisory groups that have emerged over time and serve a role similar to that of the Air Traffic Procedures Advisory Committee (ATPAC), a DOT FACA advisory group. Based on their review, agency officials involved in ATPAC identified potential duplication with ATPAC and other DOT advisory groups covering aviation topics including aviation charts, publications, or procedures. In this case, extensive knowledge of the organization, its history, and awareness of current advisory groups agencywide enabled these officials to perform this assessment, which raised questions about the ongoing need for ATPAC. According to FAA officials, ATPAC was the only mechanism of its kind for industry input to the FAA when it was created in 1976 but, over time, has essentially become a conduit to pass issues identified by members on to the appropriate FAA office or group. However, these officials noted this was the first step in the assessment process that will ultimately require internal agency concurrence to consider whether to retain or terminate ATPAC. Other agency officials we spoke with had differing perspectives regarding whether unnecessary duplication with ATPAC and other DOT aviation advisory groups exists. While advisory groups are not the sole source of information or input for agencies such as DOT and DOE, they can be a relatively effective and efficient way to gather input on topics of interest. Specifically, advisory groups can inform agencies about topics of importance to the agency's mission, consolidate input from multiple sources, and provide input at a relatively low cost. We reviewed information on 36 DOE and DOT FACA advisory groups and found that these groups all provided some form of input to agencies about topics related to the agency's mission.
For example, each of the 36 FACA advisory groups had goals and topics that were aligned with their respective agency's missions or strategic goals, and each was engaged in activities that could help it produce advice, such as producing reports and making formal recommendations. To further review the usefulness of advisory groups, we conducted case studies on five DOT and DOE FACA and non-FACA advisory groups and identified several practices that helped enhance the usefulness of some of these advisory groups and, in some cases, also helped avoid duplication (see table 1 below). The five case studies provided examples of how agencies may address issues that could impact an advisory group's usefulness. According to some members, stakeholders, and agency officials involved in these five advisory groups, certain practices or circumstances positively affected the group's usefulness, while in other cases, the absence of those practices or circumstances may have limited the group's usefulness. Practices identified as influencing the usefulness of some advisory groups include (1) securing clear agency commitment, (2) finding a balance between responsiveness to the agency and independence, (3) leveraging resources through collaboration with similar groups, and (4) evaluating the group's usefulness to identify future directions for the group or actions to improve its usefulness. Securing agency commitment: Clear agency commitment to an advisory group can help enhance the group's usefulness. As we have noted before, perhaps the single most important element in successfully implementing organizational change is the demonstrated, sustained commitment of top leaders. Agency commitment to advisory groups can be demonstrated by active participation in meetings, open communication with group members, and allocation of resources to the group. Some agency officials, members, and third party stakeholders explained that the level of agency commitment can positively or negatively impact the usefulness of advisory groups. For example, high-level agency participation can help the advisory group consider the agency's needs when developing recommendations and may impact the likelihood that recommendations are implemented. In contrast, an absence of agency commitment to an advisory group can hinder the group's usefulness by limiting resources or information that may help the group to be useful to the agency. According to DOT officials, involvement of high-level decision makers enhanced the usefulness of the Federal Interagency Committee on Emergency Medical Services (FICEMS). FICEMS is a statutorily mandated body not subject to FACA whose members primarily are federal agency officials. The group shares information and discusses methods to improve emergency medical services and produces formal recommendations and reports. The Administrator of DOT's National Highway Traffic Safety Administration is a FICEMS member, which officials believe enhances the group's usefulness. Because DOT has committed high-level involvement to FICEMS, the items discussed during meetings directly involve the agency's decisionmakers with the authority to make changes based on the advice. The marine transportation system encompasses numerous modes of transportation overseen by multiple agencies, one of which is the Maritime Administration. Another of our case study groups, MTSNAC, is a FACA advisory group sponsored by the Maritime Administration that provides advice on the marine transportation system.
According to members and a stakeholder, the Maritime Administration may have had limited commitment to MTSNAC in part because the group’s original scope was the marine transportation system and all related federal agencies, some of which is beyond the administration’s jurisdiction. In their view, MTSNAC provided a useful and needed service by addressing the wide-ranging issues affecting the marine transportation system, but its advice may have been better targeted at agency officials with a commitment to the broader marine transportation system. Balancing responsiveness with independence: Balancing responsiveness to agencies’ needs with ensuring independence can improve the usefulness of an advisory group. On one hand, responding to agencies’ needs may help advisory groups produce useful recommendations or reports. But on the other hand, as we have previously reported, the advice and recommendations of federal advisory groups should be independent of influence by the entity that created the advisory group. Similarly, we previously reported that advisory groups’ independence is important because the effectiveness of FACA advisory groups can be undermined if the members are, or are perceived to be, lacking independence. According to officials and members of the Electricity Advisory Committee (EAC), a DOE FACA advisory group, EAC’s responsiveness to DOE needs enhanced its usefulness and officials worked closely with the group’s members to focus the direction of the group to meet the agency’s needs. According to some EAC members, agency officials generally identified the topic to be covered while members determined how EAC would address the topic and sometimes identified additional topics to cover. Officials found EAC members to be responsive to DOE needs. For example, at DOE’s suggestion, EAC began developing “quick response” products to react to agency requests for information and input in lieu of lengthier reports. DOE officials also assisted with the development of agendas for meetings, which can be highly interactive. This type of dialogue between agency officials and advisory group members can help the advisory group meet agency needs and enhance the usefulness of the group’s products. However, there may be a tension between responsiveness and independence that could affect the group’s usefulness. Some advisory group members indicated that some situations may challenge members’ efforts to maintain independence. For example, in one instance, members of MTSNAC said that their sponsoring agency drafted recommendations and asked the group to endorse them, which the group declined. The members believed that the consensus-based recommendations they developed were valid even if they were not the recommendations that the agency wanted to hear. When asked for their perspectives on the group’s usefulness in general, DOT officials stated that they implemented some of MTSNAC’s recommendations and, while they did not always agree with other recommendations, members’ diverse and varied perspectives could be useful. Leveraging resources through collaboration: Collaboration between an advisory group and other groups focusing on similar topics can help agencies spend resources efficiently, prevent unnecessary duplication, and enhance the group’s usefulness. 
As we reported in 2011, interagency mechanisms or strategies to coordinate programs that address crosscutting issues may reduce potentially duplicative, overlapping, and fragmented efforts. Collaboration with groups focusing on similar topics may help ensure that groups are not duplicating activities but are instead focusing on the most useful tasks. Similarly, it may help advisory groups leverage existing resources to more quickly obtain information or expertise already possessed by other groups, thereby enhancing their usefulness and efficiency. Some advisory groups—such as non-FACA interagency coordination groups—share resources and information with other advisory groups. One official explained that collaborating and coordinating helps DOE's federal Smart Grid Task Force (SGTF) to be useful and accomplish its purpose. SGTF is a statutorily mandated non-FACA group created primarily for the federal agencies involved in smart grid activities to coordinate projects and priorities, and the group's members are representatives of the relevant agencies. According to one agency official and a third party stakeholder, SGTF's coordinating function is useful in part because member agencies can become more aware of ongoing or proposed activities in the federal government that may affect their agency. Further, an agency official explained that members contributed to the body of knowledge about smart grid activities, for example, by collaboratively identifying common challenges for smart grid implementation. DOE also benefits from SGTF reaching out beyond the federal government—involving states and other entities—to accomplish its purpose. For example, SGTF members are statutorily required to coordinate with members of EAC's smart grid subcommittee, who are nonfederal parties with interests or expertise in the smart grid. According to DOE agency officials, SGTF and EAC members meet every few months to discuss smart grid technological changes and developments. Agency officials stated that this type of coordination helps minimize the risk of unnecessary duplication of efforts. Evaluating usefulness: Evaluating an advisory group's performance can help agencies identify changes needed to bring about performance improvements and enhance usefulness. Some officials from DOT's FAA have taken steps to evaluate ATPAC, a FACA advisory group, and are consequently better equipped to assess the group's strengths, weaknesses, and whether the group continues to be relevant and useful. For example, officials (1) collected information on the group's accomplishments—identifying the number of issues addressed over a number of years, (2) gathered members' perspectives on the relevance and continuation of the group, and (3) informally considered whether the group's costs outweigh its benefits. Based on the information gathered on ATPAC's accomplishments, officials determined that the group's workload had decreased. For example, while the committee resolved an average of about 16 issues per year over its first 29 years, over the last 6 years, ATPAC resolved approximately 6 issues per year. According to agency officials, FAA and ATPAC have responded to the change in workload by decreasing the frequency of meetings from about four to three times a year. Agency officials explained that they are further evaluating the group and may consider additional actions in the future.
The practices identified through our advisory group case studies—securing agency commitment, balancing responsiveness with independence, leveraging resources through collaboration with similar groups, and evaluating usefulness—can help agencies leverage the advice produced by both FACA and non-FACA advisory groups to better address topics of importance to the agencies and avoid duplication of efforts. Advisory groups exist governmentwide and are generally considered useful and cost-efficient mechanisms for federal agencies to obtain advice and input from a range of stakeholders and experts. However, the advisory group environment is fluid, and the potential for duplication exists both within and outside the agency as advisory groups are routinely established and used, taking on new issues in response to emerging agency needs. Therefore, assessments of whether existing advisory groups continue to be needed or whether another body or entity may be better suited to carry out advisory functions are important to help prevent unnecessary duplication and inefficient use of government resources. FACA requirements direct agencies to check for duplication among advisory groups, and DOT and DOE guidance incorporates these requirements. However, neither agency's guidance includes specific steps for assessing duplication, resulting in an informal process that is not always comprehensive. These issues are further exacerbated by the lack of visibility over non-FACA groups, which often address the same or similar issues as FACA advisory groups. Advisory groups addressing similar issues may also be housed in different agencies across government, further complicating any assessment for duplication. While agencies are not required to track their non-FACA advisory groups, having available at least minimal information about non-FACA advisory groups, as well as specific assessment steps, would help ensure more comprehensive assessments of whether new advisory groups should be created and existing groups should be retained. DOT and DOE are only two among many federal government agencies that widely use advisory groups; however, these actions could be a good first step in facilitating coordination and sharing of information about advisory groups governmentwide. To reduce the risk of potential duplication of efforts and further inform assessments of advisory groups, we recommend that the Secretary of Transportation and the Secretary of Energy take the following two actions: (1) identify and document specific steps that should be taken in periodically assessing potential duplication and the ongoing need for both FACA and non-FACA advisory groups; and (2) develop and make public (e.g., on the agency's website) information identifying non-FACA advisory groups providing advice to the agency—including the group name, agency point of contact, and a brief description of the group's purpose. We provided copies of our draft report to DOT, DOE, and GSA for their review and comment. DOT and DOE agreed to consider the recommendations. GSA provided technical comments, which we incorporated. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Secretary of Energy, the Administrator of the General Services Administration, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (206) 287-4809 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The FACA database is available to the public at www.fido.gov/facadatabase. To identify non-FACA advisory groups providing input and advice to DOT and DOE, agency officials used an agreed-upon definition: groups active in fiscal year 2010 that serve primarily an advisory function and provide input to the agency and/or component agency offices on areas related to the agency or office's mission. These groups may have a mix of federal and nonfederal members and are established to provide advice or recommendations on issues or policies pertaining to the agency or its components. Because the non-FACA advisory groups were self-identified by DOT and DOE officials based on this definition, the groups identified may not include all of the existing non-FACA advisory groups for each agency. Non-FACA advisory groups are not subject to FACA for a variety of reasons, including statutory language that excludes a group or membership consisting entirely of federal government employees. We gathered information for each of the 88 FACA and non-FACA advisory groups identified as active in fiscal year 2010 using the FACA database and working with agency officials to collect information for each non-FACA group, such as a purpose statement or group descriptions. To better understand advisory group management, operations, and agency oversight responsibilities, we reviewed relevant documentation such as the FACA regulations and guidance, DOT and DOE committee management policies, and prior GAO reports on advisory groups. We interviewed agency officials within GSA's Committee Management Secretariat and General Counsel and agency officials within DOT and DOE Committee Management Offices to better understand how each agency operates and manages advisory groups. We also spoke with aviation industry groups that participate as members in some DOT aviation advisory groups to obtain their perspectives on general experiences with these advisory groups. To assess the reliability of the FACA database, we (1) reviewed existing documentation related to the database, (2) reviewed a previous GAO data reliability assessment of the FACA database, (3) reviewed database use protocols, including verification and internal controls, and (4) interviewed knowledgeable agency officials about the data. We determined that the data used were sufficiently reliable for the purposes of identifying FACA advisory groups and their status, presenting the total cost of FACAs, determining the most commonly reported interest areas, and analyzing FACA missions and activities. To assess the extent to which DOT's and DOE's assessment process helps to ensure advisory groups' efforts are not duplicative, and to determine what challenges may exist in assessing duplication, we narrowed the scope of our review and assessed the potential for duplication, overlap, and fragmentation among 47 of the 88 FACA and non-FACA advisory groups identified as active within fiscal year 2010. Specifically, we reviewed the 47 groups focusing on those interest areas most relevant to DOT and DOE—using the interest area identification in the FACA database and assigning these same interest areas to the non-FACA advisory groups—ultimately identifying aviation and energy as the most common advisory group interest areas for DOT and DOE, respectively.
We also formulated definitions for duplication, overlap, and fragmentation using the broad definitions provided in GAO's recent work. For the FACA advisory groups, we reviewed information within the FACA database performance measures section, their charters, and other agency documentation; for the non-FACA advisory groups, we reviewed agency-provided descriptions and other agency documentation to help determine with more specificity the types of issues or topics the groups covered. We reviewed responses to a brief questionnaire sent to agency points of contact for the 47 selected DOT and DOE groups asking the respondents to identify, among other items, (1) any internal agency processes used to determine duplication, overlap, or fragmentation of proposed advisory groups with existing advisory groups and (2) their awareness of any other FACA or non-FACA advisory group within the agency or governmentwide that focused on the same issues as their group. From these 47 advisory groups, we then selected those groups that focus on common issues or topic areas in these broad areas for further analysis to better understand whether in fact the groups' efforts were potentially duplicative and interviewed agency officials in the following offices: DOT: Federal Aviation Administration (FAA) officials within the Office of the Deputy Administrator; Air Traffic Organization; Office of Aviation Safety; and Office of Policy, International Affairs and Environment that were involved in five FACA and four non-FACA advisory groups that were identified as potentially duplicative, overlapping, or fragmented; and DOE: Office of the Secretary; Office of Science; Office of Health, Safety and Security; and Office of Energy Efficiency and Renewable Energy officials that were involved in three FACA and five non-FACA advisory groups that were identified as potentially duplicative, overlapping, or fragmented. To review the usefulness of DOT and DOE advisory groups in assisting their respective agencies in carrying out their mission, and to identify practices to enhance their usefulness or help avoid duplication, we conducted in-depth case studies on three FACA and two non-FACA advisory groups. See table 3 below. We judgmentally selected these five advisory groups to obtain a mix of characteristics with the purpose of reporting additional details on a targeted selection of advisory groups. To obtain a diverse mix and coverage across several characteristics, we considered the following factors in selecting the advisory group case studies: the agency they advise, FACA status, age, how the group was established, and whether they generated reports or recommendations. For FACA advisory groups, we also considered results from the performance measures section of the FACA database, but this information was not available for non-FACA advisory groups. For each case study, we reviewed relevant documentation and interviewed agency officials, advisory group members, and third party or industry stakeholders to obtain their perspectives on the group's activities and its usefulness to the agency. For instance, to understand the group's usefulness, we asked about how helpful the group was at assisting the agency in carrying out its mission, the impact the group or its products had on the agency, and the value added by the group. For example, we met with FAA, Maritime Administration, and National Highway Traffic Safety Administration officials to discuss the effectiveness and usefulness of selected DOT advisory groups.
The two selected non-FACA advisory groups were interagency coordination bodies whose membership consisted of federal employees. Because of this, the interviewees were able to represent both the agency and member perspectives. We reviewed advisory group charters, reports, meeting minutes, and performance measures from the FACA database, as well as other documentation as available, and observed an advisory group meeting for the Electricity Advisory Committee. In addition, we developed criteria to understand the extent to which advisory groups provided input on topics of importance to their respective agencies' missions and to describe the advice-producing activities of advisory groups, such as whether the advisory group held meetings and produced reports and recommendations and if the groups' objectives were documented and were related to the agency's strategic goals or mission. We developed these criteria by reviewing a selection of previous GAO reports, including those on the Government Performance and Results Act Modernization Act of 2010 (GPRAMA) and the Program Assessment Rating Tool (PART), identifying a list of potential criteria to assess effectiveness and usefulness in consultation with internal GAO experts, and soliciting the perspectives of agency officials. We applied these criteria only to the 36 DOT and DOE FACA advisory groups actively chartered in fiscal year 2010 because similar information for non-FACA advisory groups was not available. We also gathered information on a selection of FACA and non-FACA advisory groups by reviewing information from the FACA database, advisory group charters and websites, relevant agency strategic planning documents, and interviewing agency officials for both FACA and non-FACA advisory groups. We conducted this performance audit from January 2011 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To identify DOT and DOE FACA advisory groups, agency officials verified the active fiscal year 2010 groups identified in the FACA database. To identify non-FACA advisory groups providing input and advice to DOT and DOE, agency officials used the following definition: groups active in fiscal year 2010 that serve primarily an advisory function and provide input to the agency and/or component agency offices on areas related to the agency or office's mission. These groups may have a mix of federal and nonfederal members, and are established to provide advice or recommendations on issues or policies pertaining to the agency or its components. We selected those groups focusing on the most common advisory group issue areas of aviation for DOT and energy for DOE for further review, covering 47 of the 88 DOT and DOE advisory groups active in fiscal year 2010 (see tables 4, 5, 6, 7, 8). To understand the extent to which advisory groups provided input to agencies on topics of importance to their missions, we reviewed information on the 15 DOT and 21 DOE FACA advisory groups that were actively chartered in fiscal year 2010.
We selected information to review by developing criteria based on agency officials’ input and a review of relevant literature—including FACA guidelines and GAO reports on the Government Performance and Results Act Modernization Act. Each of the 36 FACA advisory groups had documented goals and topics that were aligned with their respective agency’s missions or strategic goals (see table 9). Further, each was engaged in activities that could help the group produce advice (see table 10). Linda Calbom, (206) 287-4809 or [email protected]. In addition to the individual named above, Sharon Silas, Assistant Director; Kathy Gilhooly; Laura Henry; Delwen Jones; Hannah Laufe; Janet Lee; Sara Ann Moessbauer; Steven Putansu; and Maria Wallace made key contributions to this report.

Advisory groups, both those established under the Federal Advisory Committee Act (FACA) and other groups not subject to the act, can play an important role in the development of policy and government regulations. There are more than 1,000 FACA advisory groups and an unknown number of non-FACA advisory groups governmentwide. Non-FACA groups include intergovernmental groups. Section 21 of Pub. L. No. 111-139 requires GAO to conduct routine investigations to identify programs, agencies, offices, and initiatives with duplicative goals and activities. In that context, GAO reviewed (1) the extent to which the Department of Transportation’s (DOT) and Department of Energy’s (DOE) assessment process helps ensure advisory group efforts are not duplicative and what challenges, if any, exist in assessing potential duplication, and (2) to what extent DOT and DOE advisory groups are useful in assisting their respective agencies in carrying out their missions and how the groups’ usefulness could be enhanced. GAO selected DOT and DOE for review based on knowledge of these agencies’ advisory groups. GAO interviewed agency officials; reviewed advisory group documentation; and conducted case studies of five advisory groups. Federal Advisory Committee Act (FACA) and Department of Transportation (DOT) and Department of Energy (DOE) guidance require officials to check for duplication prior to filing a charter to establish a new or renew an existing FACA advisory group. However, GAO found that DOT’s and DOE’s processes for assessing duplication are often informal, and neither agency has specific steps identified for making such an assessment. Using an informal approach without specific steps makes it more likely that agency assessments for duplication will be inconsistent or incomplete. In addition, while basic information about the 15 DOT and 21 DOE fiscal year 2010 FACA advisory groups is publicly available in the FACA database, including designated points of contact and the objectives of the groups, no such information is readily available for non-FACA advisory groups. This limits the agencies’ ability to fully assess the universe of advisory groups for particular topic areas. DOT and DOE officials faced some challenges identifying and collecting information for the 19 DOT and 33 DOE non-FACA advisory groups GAO reviewed, relying on various sources and Internet searches to gather basic information, since neither agency maintains an inventory of its non-FACA advisory groups and their activities. In addition, advisory groups often address complex and highly technical issues that span agencies. For example, one advisory group GAO identified focused on experimental and theoretical research in nuclear physics.
Agency officials familiar with these types of technical topic areas and other potential stakeholders covering these same topics are best positioned to assess the potential for unnecessary duplication and would be even better positioned to do so if the departments developed specific assessment steps and enhanced the visibility of non-FACA advisory groups. DOT and DOE advisory groups can be effective tools for agencies to gather input on topics of interest by informing agency leaders about issues of importance to the agencies’ missions, consolidating input from multiple sources, and providing input at a relatively low cost. To further review the usefulness of advisory groups, GAO conducted case studies on five DOT and DOE FACA and non-FACA advisory groups and identified several practices that could enhance the usefulness of these advisory groups and, in some cases, also help avoid duplication. These practices include the following: securing clear agency commitment, finding a balance between responsiveness to the agency and independence, leveraging resources through collaboration with similar groups, and evaluating the group’s usefulness to identify future directions for the group or actions to improve its usefulness. The practices identified can help agencies leverage the advice produced by advisory groups to more efficiently and effectively address topics of importance to the agencies. For example, DOE officials from a FACA advisory group stated that coordination with officials involved in related groups helps to ensure sharing of useful information and that efforts are complementary rather than duplicative. GAO recommends that DOT and DOE document specific steps to assess potential duplication among FACA and non-FACA advisory groups and develop and make public basic information identifying non-FACA advisory groups to further inform periodic assessments. DOT and DOE agreed to consider the recommendations.
The collection of outstanding criminal debt has been a long-standing problem, and many of the issues we have reported on since October 1985 still remain. Since that time, as reported in the U.S. Attorneys’ statistical reports, the balance of outstanding criminal debt has grown from $260 million to over $13 billion (see figure 1). The Congress attempted to address some of these problems through the Criminal Fines Improvement Act of 1987 when it transferred the responsibility for accounting for and processing criminal debt from Justice to the courts and gave them the responsibility for establishing a centralized accounting system (see appendix II, “History of Criminal Debt Collection Legislation”). In 1990, the Administrative Office of the United States Courts (AOUSC) began developing a centralized entity, called the National Fine Center (NFC), to record, track, and report on federal criminal debt. The NFC was expected to automate and centralize criminal debt processing for the 94 federal judicial districts, provide a management information system to replace the existing fragmented approach for receiving payments, and alleviate long-standing weaknesses in accounting for, collecting, and reporting on criminal monetary penalties imposed on federal criminals. However, after several years of development efforts that were criticized by GAO and the Congress, the AOUSC engaged an independent consulting firm in February 1996 to perform a full review of the project. The consulting firm concluded that the task of developing a National Fine Center, involving several agencies in two branches of government, proved to be more complex than expected and that the needs of the districts could not be met through a centralized approach. Thus, with the consent of the Congress, the NFC was terminated. As a result, the criminal debt collection process continues to be fragmented, involving both judicial and executive branch entities in 94 districts across the country. Also, around the time of the consultant’s report, the Mandatory Victims Restitution Act of 1996 (MVRA) was enacted, requiring that restitution be assessed at the full amount regardless of an offender’s ability to pay. Since that time, the balance of reported uncollected criminal debt has increased dramatically. Reported uncollected criminal debt has more than doubled from about $5.6 billion as of September 30, 1995, to approximately $13 billion as of September 30, 1999, with about 66 percent of that amount attributed to restitution owed to nonfederal parties. The collectibility rate, however, has not increased proportionally. Criminal debt arises when a court orders an offender to pay fines and/or restitution as part of the punishment for violating a federal criminal law. Unless the offender immediately pays the debt, Justice is responsible for enforcing its collection. Justice has delegated this responsibility to its Financial Litigation Units (FLU) in the United States Attorneys’ Offices (USAO) across the country. As of September 30, 1999, the Executive Office for United States Attorneys (EOUSA) database reflected approximately $13.1 billion in reported outstanding criminal debt, of which about $5.6 billion (or 43 percent) was accounted for by the four districts we visited (see figures 2 and 3 for a breakout of the major types of criminal debt involved). Each of the 94 districts has a USAO (an executive branch agency) and a U.S.
district court that includes district judges, a clerk’s office, and a probation office within the judicial branch of government. The districts operate independently from one another with guidance provided by the offices indicated in table 1. In addition to general guidance provided by Justice, the Judicial Conference, and the U.S. Sentencing Commission (USSC), each district office develops supplemental guidance for criminal debt collection procedures. Within each district, the USAO, probation office, and the clerk’s office enter into a memorandum of understanding (MOU) that documents how criminal debt collection activities will be accomplished. Each of the USAOs also has a Financial Litigation Plan that details district guidance on collecting criminal debt. The following sections provide additional detail on (1) assessing criminal fines and restitution and (2) accounting for and collecting criminal debt in this currently decentralized and fragmented environment. Agencies such as Justice’s Federal Bureau of Investigation and the Drug Enforcement Administration investigate violations of federal law and refer the results of their investigations to a local USAO (see figure 4 for a general overview of the criminal debt assessment process). The country is divided into 94 federal judicial districts, with a federal district court in each district. Each of the 94 districts is located in one of 12 regional circuits, and each circuit has a Court of Appeals. After the USAO obtains the conviction of an offender, the court issues a Judgment in a Criminal Case (JCC), which details terms of the sentence and orders the payment of a fine and/or restitution, if applicable. To assist judges in determining the fine and/or restitution amount, a probation officer prepares and provides to the court a pre-sentence report that includes financial information related to an offender’s ability to pay a fine and information related to victims’ losses. In preparing the pre-sentence report, probation officers are to use financial information obtained from the investigating agency, the trial, and the offender. In deciding whether to assess a fine and, if so, the amount to assess, courts are to consider an offender’s income, earning capacity, and financial resources; the potential burden placed on an offender’s family; and any restitution or other obligations that the offender is required to make. For example, if large amounts of restitution are ordered, the assessment of fines is typically waived based on the offender’s inability to pay a fine. USSC guidelines provide guidance on the minimum and maximum fine amounts for the U.S. courts to impose based on the offense. The statute requires the court to order the payment of a fine immediately unless, in the interest of justice, the court provides for payment on a date certain or in installments. According to the guidelines and the statute, judges may consider whether paying the fine in a lump-sum would have an unduly severe impact on the offender or any dependents, and if so, should establish an installment schedule for paying the fine. The installments should be in equal monthly payments over the period established by the court, unless the court establishes another schedule. The length of time over which scheduled payments should be made is the shortest time in which full payment can reasonably be made, generally not to exceed 12 months. 
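To illustrate the arithmetic implied by this guideline, the following sketch (written in Python, using hypothetical figures) computes an equal monthly installment over the shortest whole number of months in which a fine could reasonably be retired, capped at the general 12-month ceiling described above. It is an illustration only, not USSC guidance or court software, and the payment-capacity figure is an assumption supplied for the example.

```python
import math

def installment_schedule(fine_amount, monthly_capacity, max_months=12):
    """Illustrative only: equal monthly installments over the shortest
    period in which full payment can reasonably be made, generally not
    to exceed 12 months (per the guideline described above)."""
    if monthly_capacity <= 0:
        raise ValueError("offender has no estimated payment capacity")
    # Shortest whole number of months in which the fine can be retired.
    months = math.ceil(fine_amount / monthly_capacity)
    months = min(months, max_months)  # the general 12-month ceiling
    payment = round(fine_amount / months, 2)
    return months, payment

# Hypothetical example: a $6,000 fine and an estimated $750-per-month
# payment capacity yield 8 equal payments of $750.
print(installment_schedule(6_000, 750))  # (8, 750.0)
```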
In addition, judges may waive fines if they believe that offenders will be unable to pay and are unlikely to become able to pay (e.g., if they are sentenced to life in prison or cannot afford to hire private counsel). Judges may also order restitution to be paid to the victims of a crime. In accordance with statute, before MVRA was enacted in April 1996, the court typically waived or reduced the restitution amount based on the offender’s ability to pay. However, under MVRA, the court typically must order restitution to each victim in the full amount of each victim’s loss, without regard to an offender’s economic situation. If the court believes that an offender cannot immediately or fully pay the restitution amount in the foreseeable future, the court can order the offender to make nominal installment payments. In some districts, judges must set the payment schedules and document them in the JCC, and in other districts, judges can delegate to probation officers the authority to set payment schedules. However, within the last few years, more judges have been required to establish payment schedules as a result of several circuit court decisions that have affected policies in this area. For example, some circuit courts have held that courts are prohibited from ordering a defendant to pay criminal fines or restitution in accordance with a payment schedule set by a probation officer or a prison official because the setting of a payment schedule is an inherently judicial function that may not be delegated to others. In addition, some circuit courts have prohibited the imposition of an immediate payment order of the entire amount unless the defendant can pay the entire amount immediately. Finally, some circuit courts have interpreted the MVRA to require the court to set the payment schedule in all cases at sentencing. After the assessment process, the criminal debt collection process varies depending on the other sentencing terms imposed on the offender. Figure 5 shows the typical post-MVRA criminal debt collection process. The FLUs within the USAOs’ Civil Divisions have been delegated the responsibility for collecting criminal debt. After receiving a JCC, the FLU enters information from the JCC into the FLUs’ case tracking system and performs certain collection actions depending on such factors as the amount of the debt. The FLUs’ collection efforts include filing liens (based on debtor’s address or county of known residence), identifying debtor assets, garnishing debtor wages, and serving notice of late payments. To facilitate collection and reduce duplication of effort, the entities involved with assessing and/or collecting criminal debt (investigative agencies, prosecuting attorneys, and the courts) should share the financial information they have obtained about the offender with the FLUs. Offenders are encouraged to participate in the Bureau of Prisons (BOP) Inmate Financial Responsibility Program (IFRP). This program provides a means of collecting voluntary periodic deductions from inmates’ wages earned from a prison occupation. The amounts are generally small and are deducted periodically (e.g., monthly, quarterly, or semiannually). When released from prison or as ordered by the judge at sentencing, the offender is assigned to a probation officer. If the criminal debt has not been paid, the probation officer or the court, depending on the district, should establish an installment schedule for payment. 
Probation officers may restrict offenders from performing certain activities, such as traveling outside the district, if they are not making their required payments. Probation officers may also request that the court revoke supervision (i.e., send an offender to prison) if the offender is willfully refusing to make payments. Since, as noted above, the NFC effort did not succeed, the FLUs in each district maintain their own databases to meet their enforcement responsibilities. Restitution payments from offenders in most districts are submitted to the clerk’s office. The clerk’s office records these payments and provides a copy of the payment information to the FLUs so that they can update their databases. Most criminal fines are paid to the clerk’s office and deposited into Justice’s Crime Victims Fund, which provides grants for victim assistance programs and compensation to victims. Payments for restitution assessed after MVRA are paid to and disbursed by the clerk’s office; however, the handling of payments and disbursements for pre-MVRA restitution varies by district. In 18 districts, the clerk’s office accepts only post-MVRA restitution payments; therefore, the FLUs in these districts maintain an additional system to receive pre-MVRA restitution payments from offenders and to disburse payments received to applicable victims. Restitution is often owed to many victims, and disbursements must be prorated based on the amounts owed to each victim. The clerk’s office disburses checks to victims, whereas the FLU uses an independent financial institution to receive payments and disburse checks to victims. AOUSC officials have indicated that they are working with the staff in the remaining districts to assist them in assuming responsibility for receiving pre-MVRA collections. Our objectives, as agreed to by the subcommittee staff, were to determine (1) the key reasons for the growth in reported uncollected criminal debt, (2) whether adequate processes exist to collect criminal debt, and (3) what role, if any, the Office of Management and Budget (OMB) or the Department of the Treasury (Treasury) plays in overseeing and monitoring the government’s collection of criminal debt. To determine the key reasons for the growth in reported uncollected criminal debt and whether adequate processes exist to collect criminal debt, we (1) interviewed officials from the Executive Office for United States Attorneys (EOUSA), the Administrative Office of the United States Courts (AOUSC), and five selected district offices, (2) reviewed applicable policies and procedures for collecting criminal debt, (3) obtained a database from EOUSA of all outstanding criminal debt as of September 30, 1999, and (4) reviewed all criminal debt cases greater than or equal to $14 million at the four districts with the largest amount of outstanding criminal debt as of September 30, 1999. These four districts—the Central District of California, the Eastern and Southern Districts of New York, and the Southern District of Florida—accounted for $5.6 billion (or 43 percent) of the over $13 billion of outstanding criminal debt as of this date. At these four districts, we reviewed all 44 cases greater than or equal to $14 million, which accounted for $3.7 billion (or 66 percent) of the $5.6 billion.
We also selected and reviewed 35 random criminal debt cases with a dollar value of $5,000 or greater but less than $14 million at each of the four districts (for a total of 140 random cases); thus, we had a total of 184 cases selected for our review. We did not independently verify the completeness or accuracy of these data or test information security controls over the system used to compile these data because that verification was not necessary to meet the objectives of this report. To determine what role, if any, OMB and Treasury play in overseeing and monitoring the government’s collection of criminal debt, we interviewed officials from these entities and reviewed applicable laws and regulations. We performed our work from April 2000 through April 2001 in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the respective agencies. These comments are discussed in the “Agency Comments and Our Evaluation” section of the report and are reprinted in appendix III through appendix V. See appendix I for a more detailed discussion of our scope and methodology. According to statistics from the EOUSA, the amount of criminal debt has grown significantly since fiscal year 1995 (see figure 6). Several factors contributing to the growth in reported uncollected criminal debt, some of which are not within the FLUs’ or probation offices’ control, include (1) the nature of the debt, including the government’s limited ability to write off certain debt deemed to be uncollectible, (2) the assessment of mandatory restitution, (3) interpretation of payment schedules set by judges, and (4) limitations due to state laws. The nature of criminal debt, including how and why it is levied, can make the debt more difficult to collect. Criminals may not be willing to comply with the law, and forcing compliance is difficult because criminals are already convicted felons who may be serving time in prison or may have been deported. Moreover, offenders in prison have limited earning capacity, and so potential collections are limited. In 57 percent of the high-dollar cases we reviewed and in an estimated 20 percent of our sampled population, the offender was still in prison. Further, significant time may pass between an offender’s arrest and sentencing, giving offenders time to hide fraudulently obtained assets, such as funds in offshore accounts, shell corporations, or family members’ names and accounts. Even though the courts are required to consider an offender’s ability to pay when assessing fines, collection cannot always be assured. Fines are sometimes assessed to make a statement about the nature of the crime and its impact on society. Restitution, as discussed below, is typically assessed without regard to an offender’s ability to pay; therefore, collection may be unrealistic. Asset seizure and forfeiture are important components of law enforcement efforts to deprive criminals of the proceeds and instruments of their crimes. Several years may pass between an offender’s arrest and sentencing. Federal laws authorize agencies to seize assets before a criminal conviction, thereby potentially overcoming one difficulty in collecting fines and restitution—defendants diverting their assets before conviction. However, the FLUs are not permitted to pursue liquidation of assets for debt collection until after an offender is convicted and sentenced. 
Proceeds from forfeiture are typically used to make owners (e.g., a mortgager) whole and to fund law enforcement activities, and are not necessarily used to fulfill restitution orders. Therefore, the use of forfeiture, as we reported in June 1994, could decrease amounts that might otherwise be available for paying restitution to crime victims and reducing outstanding criminal debt. According to Justice statistics, of the estimated $536 million of forfeited cash and property recovered during fiscal year 1999, approximately $39 million (or 7 percent) was applied to restitution in victim-related offenses. The remaining amounts were either converted to cash and used for law enforcement purposes or retained for official law enforcement use. In our case reviews, only 2 of the 44 high-dollar cases provided that the proceeds from the sale of assets be used to pay restitution. None of the JCCs for the random cases stated such terms. In the 2 high-dollar cases, the JCCs specifically stated that the proceeds of the sale of seized and forfeited assets should be used to pay victims. Finally, according to 18 U.S.C. 3613, most criminal debts must remain “on the books” for 20 years plus the period of incarceration and cannot be “written off” until the statute of limitations expires, the debtor is deceased, or the court approves a petition of remission filed by the USAO. Therefore, even if Justice determines that certain debts are not collectible, these debts must remain “on the books,” and the FLUs must periodically reassess their collectibility in accordance with USAO policies (see chapter 3) regardless of the status of the offender or previous actions to collect these debts. For example, in accordance with USAO policies, a $30,000 debt must be reassessed annually even if the FLU was unsuccessful in previous attempts to identify assets and the offender is serving a sentence of life in prison. The U.S. Attorneys’ Manual states that if the FLU determines that a fine will likely never be collected, it can seek a petition for remission of all or part of a fine from the judge. According to the Manual, seeking remission is preferable to placing it “in suspense” and continuing to pursue collection. However, we found no evidence that the FLU had requested a petition for remission in the cases we reviewed, even though some cases appear to have met the criteria for remission. According to FLU officials, obtaining a court order to write off delinquent debt is a time-consuming process and is not considered a priority among the many other tasks (e.g., working on open cases) the FLUs must perform. Before the Mandatory Victims Restitution Act of 1996 (MVRA), the assessment of restitution, like the assessment of fines, was typically based on an offender’s ability to pay. However, MVRA requires that assessment of restitution be based on actual loss and not on the offender’s ability to pay. Assessments of restitution have significantly increased since the passage of the act. As of September 30, 1995, approximately $3.4 billion (102,158 cases) in criminal debts was owed to the federal government in fines and federal restitution and about $2.2 billion (15,126 cases) was owed in nonfederal restitution. Although federal and nonfederal criminal debt amounts have increased from September 30, 1995, to September 30, 1999, the increase for nonfederal debt (i.e., restitution) is far greater (see figure 7). 
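As a rough illustration of this comparison, the following sketch recomputes approximate growth rates from the figures cited above ($3.4 billion federal and $2.2 billion nonfederal as of September 30, 1995, and roughly 66 percent of the $13.1 billion 1999 balance owed to nonfederal parties). The percentages are back-of-the-envelope approximations; figure 7 presents the actual data.

```python
# Rough comparison using the approximate figures cited in this report
# (figure 7 presents the actual data); amounts are in billions of dollars.
total_1999 = 13.1
nonfederal_1999 = 0.66 * total_1999      # about 66 percent owed to nonfederal parties
federal_1999 = total_1999 - nonfederal_1999

federal_1995, nonfederal_1995 = 3.4, 2.2

for label, before, after in [("federal", federal_1995, federal_1999),
                             ("nonfederal", nonfederal_1995, nonfederal_1999)]:
    growth = (after - before) / before * 100
    print(f"{label}: ${before:.1f}B -> ${after:.1f}B ({growth:.0f}% increase)")
# federal: $3.4B -> $4.5B (31% increase)
# nonfederal: $2.2B -> $8.6B (293% increase)
```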
When assessed amounts are not based on an offender’s ability to pay, the likelihood of collecting the full amount may be unrealistic. For example, in addition to each offender receiving a 240-year prison sentence, the four offenders convicted of bombing the World Trade Center were each ordered to pay $250 million in restitution plus fines ranging from $250,000 to $4.5 million. These four cases alone increased the criminal debt balance by over $1 billion. All four offenders refused to provide financial information and, as of May 2000, these offenders had collectively only paid approximately $3,000. Under these circumstances, it is unlikely that a significant portion of the restitution owed by these offenders will be collected. In another example, an offender was convicted in March 1997 of conspiracy, mail fraud, and tax evasion; sentenced to 20 years imprisonment and 3 years of supervised release; and ordered to pay a fine of $1 million and restitution of over $475 million. FLU records show that as of September 30, 1999, the offender had paid only $25. FLU records also show the offender refused to participate in the Inmate Financial Responsibility Program. Before sentencing, the offender also refused to provide the probation officer with a personal financial statement that would identify income, expenses, assets, and liabilities. Other work by the probation officer, including obtaining a credit report, doing an online property search, and reviewing tax return information, did not disclose assets available to pay down the debt. In some districts, the judges may delegate the authority to set payment terms to probation officers. In those districts where it has been held that the courts may not delegate the authority to set payment schedules (including two of the districts we visited), judges must include them in the JCC. We found that the payment schedules set by judges can significantly influence collection efforts. EOUSA and FLU officials we interviewed indicated they believe that when a judge orders specific payment schedules in the JCC, they are precluded from making any collection efforts (other than filing a lien), such as pursuing liquidation of assets, until an offender is released from probation. However, the view of AOUSC officials and the Chief Judge in one of these districts is that the inclusion of payment schedules in the JCC does not preclude the FLU from identifying and pursuing assets but merely sets a minimum amount that must be paid while an offender is under supervision. The EOUSA and FLU interpretation inhibits the FLUs from taking prompt collection efforts, and the government may lose opportunities to collect criminal debt. In 16 of the 44 high-dollar and 41 of the 140 random cases we reviewed, judges stipulated terms in the JCCs regarding how or when fines or restitution should be paid; in the remaining cases, the fine or restitution was due immediately (see table 2 for terms stipulated in our selected cases). In the cases where judges stipulated payment terms, we found that the FLUs typically wait until after the offender is released from prison and probation before performing collection actions (e.g., searching for and liquidating assets). As a result of such delays, opportunities to maximize collections may be missed. As we reported in 1999, the payment schedules set by judges vary by district. 
For example, in 1 of the high-dollar cases and in 17 random cases, including 13 random cases selected from the Eastern District of New York, the judges stipulated that the amount was not due until the offender was released from prison. In these instances, the FLUs typically do not perform any collection actions other than filing a lien. In 2 high-dollar and 10 random cases, 9 of which were from the Southern District of New York, judges established a payment schedule based on a percentage of gross income to be paid on a periodic basis (e.g., 10 percent of gross monthly income to be paid monthly). In 8 high-dollar and 3 random cases from the Central District of California, the judge established a minimum amount that must be paid on a periodic basis (e.g., at least $300 to be paid each month). We found that probation officers typically did not recommend an increase in payment amounts, and the FLUs typically did not attempt to increase them or pursue liquidation of assets, even if financial circumstances improved. The following examples show the effects of terms being stipulated in the JCCs. In one case, in February 1998, an offender was convicted of bank fraud and ordered to pay $113 million in restitution jointly and severally with coparticipants through quarterly payments of at least $2,400. Since the payment schedule was specified in the order and the offender was making the minimum payments, neither the FLU nor the probation officer recommended or attempted to pursue the net proceeds of $80,000 from the sale of her house and $19,200 from the sale of two cars. While on probation, the offender was permitted to move to another country with court approval. As of February 2001, clerk records show that this offender had paid $28,800 and all coparticipants combined had paid less than $100,000. Another offender was convicted of wire fraud, sentenced in November 1997 to 6 months of home detention to be served concurrently with 3 years of probation and ordered to pay over $74 million in restitution jointly and severally with coparticipants. The judge ordered the defendant to make quarterly payments of at least $750 after she completed home detention. Based on financial statements the offender provided, her pre-sentence report showed that she had over $40,000 in unencumbered assets and $5,000 of unsecured debt, resulting in a net worth of over $35,000. The probation office had not recommended the pursuit of liquidation, nor had the FLU attempted liquidation of any assets owned by the offender. According to the FLU, because the judge included the payment schedule in the judgment, the FLU will not pursue collection until after the offender is released from supervision. As of April 2000, the offender had paid $6,000. In April 1998, another offender in our selection was convicted of conspiracy to commit mail fraud and ordered to pay a $20,000 fine in quarterly installments over a 3-year probation period even though just before his sentencing, the offender reported $5,000 in cash, $12,400 in his checking account, and over $100,000 of equity in his home. In this example, it appears that the offender may have had the ability to pay his whole fine or a significant portion of it immediately; however, since the judge set a payment schedule in the JCC, neither the FLU nor the probation office reassessed the offender’s ability to pay or pursued a lump-sum payment. According to clerk records, the offender had paid $14,400 as of June 2000. 
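A rough calculation underscores how little of such debts the court-ordered minimums can reach. The sketch below uses the first case above ($113 million in restitution at a minimum of $2,400 per quarter) and shows how long the minimum schedule alone would take to retire the debt; it ignores interest, payments by coparticipants, and any later change in the schedule.

```python
def years_to_repay(balance, payment, payments_per_year):
    """Illustrative only: how long court-ordered minimum payments would
    take to retire a debt, ignoring interest and any other recoveries."""
    return balance / (payment * payments_per_year)

# The first case above: $113 million in restitution at a court-ordered
# minimum of $2,400 per quarter (4 payments per year).
print(f"{years_to_repay(113_000_000, 2_400, 4):,.0f} years")  # ~11,771 years
```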
According to EOUSA officials, state law can also restrict the FLU’s ability to perform certain collection efforts and therefore contribute to the growth in outstanding criminal debt. State law may limit the type of property that can be seized and the amount of wages that can be garnished. For example, certain states, such as Florida, have unlimited homestead exemptions, prohibiting the seizure of a primary residence regardless of the amount of equity in the home and thus prohibiting the FLU from requiring an offender to borrow against a primary residence. During our case-file reviews, we found several instances in which real property and personal property were registered in the offender’s spouse’s name (or other family member). These assets may be difficult to liquidate unless the state is a “community property” state, such as California, in which each spouse is entitled to one-half interest in property owned or income earned by the other spouse. Because of the many agencies and districts involved in the collection process, improving the rate of criminal debt collection, which has averaged about 7 percent for fiscal years ending September 30, 1995 through 1999, hinges in part on the ability of these entities to work together and to implement effective processes (see figure 8). The four FLUs we visited did not have effective policies and procedures or did not always follow their policies and procedures to ensure that collection actions were prompt and adequate for increasing the potential for collecting the maximum amount of criminal debt. In addition, the four probation offices we visited did not always follow their procedures that could have allowed for increased collections from offenders under supervision. Further, because the entities involved in the criminal debt collection process did not adequately coordinate their efforts or share financial information about offenders, they weakened the government’s ability to increase collections. Of the $3.76 billion of debt assessed in our high-dollar cases, approximately $148 million (or about 4 percent) was collected through September 30, 1999. In addition, we estimate that 4 percent of the judgment amounts for our sampled population had been collected through September 30, 1999. As stated in Standards for Internal Control in the Federal Government, transactions should be promptly and accurately recorded to maintain their relevance and value to management in controlling operations and making decisions. Also, collection actions must be promptly performed, because, as industry statistics have shown, the likelihood of recovering amounts owed decreases dramatically with the age of delinquency. Many of the outstanding debts as of September 30, 1999, were over 3 years old (see figure 9). In reviewing our selected cases at the four FLUs we visited, we found that the FLUs did not always follow their established procedures or lacked procedures for performing the following actions in a timely manner: entering cases into their tracking systems; filing liens; performing asset discovery work, such as researching asset databases; using other enforcement techniques, such as wage garnishment; monitoring and reassessing cases; sending demand, delinquent, or default letters; and assessing interest and penalties. We also found that the lack of asset investigators, as well as the limited number of collection staff, a historical problem for the FLUs, weakens their ability to aggressively follow up on and enforce collections. 
In addition, we found that the FLUs’ tracking systems do not capture certain data, such as court-ordered terms or status of offender, that are needed to effectively assist in managing the debt portfolio. The EOUSA Resource Manual, along with local guidance for the four FLUs we visited, outlines the procedures that should be followed once the FLU receives a JCC. According to this guidance, the FLUs are to enter criminal debts into their collection tracking systems in a “timely fashion,” but no later than 14 days after receiving the JCC. Although procedures exist for entering the data, we noted during our reviews that no policies or procedures existed to ensure that a copy of the JCC is promptly sent to the FLUs. 18 U.S.C. Sec. 3612(b)(2) requires the clerk’s office to transmit a certified copy of the JCC to the Attorney General (i.e., USAO) within 10 days after the judgment or order. However, according to FLU officials, this copy is typically sent to the prosecuting attorney within the USAO’s Criminal Division and not to the FLU within the USAO’s Civil Division. The prosecuting attorney is to then forward a copy to the FLU. Since the FLUs we visited were not required to, and did not typically, stamp the date they received a copy of the JCC, we were unable to determine when they received the copy and how long they had the copy before entering the criminal debt information into their systems. The average length of time for entering the 44 high-dollar cases was 288 days; for the sampled population, we estimate that the length of time was 289 days. For most cases, FLU officials could not provide us with explanations as to why these cases were not entered into their tracking systems within 30 days of the JCC date. Unless the FLUs promptly receive a copy of the JCC and promptly enter the data into the tracking systems, time-sensitive collection actions, such as filing liens and performing asset discovery work, are delayed and opportunities to maximize collections may be missed. FLUs are required to file notices of liens on offenders’ properties, pursuant to 18 U.S.C. Sec. 3613, to establish the government’s claims on these assets and to prevent the sale or transfer of such property. The first liens filed by the FLU are typically filed according to the offender’s home address; additional liens should be filed if the FLU identifies assets in other locations. The U.S. Attorneys’ Manual specifies that liens are required to be filed in all cases over $650 but does not establish a specific time frame for filing. Only one of the FLUs in the four districts we visited—the Southern District of Florida—had established a time frame for filing a lien; this district requires liens to be filed within 45 days of the judgment date. Instead of specifying a time frame, the other three districts require that liens be filed to “guarantee enforcement to the fullest extent of the law.” However, during our reviews we found that liens often were not filed or not filed promptly. Specifically, we found that required liens had not been filed in 10 percent of the high-dollar cases and in an estimated 30 percent of the sampled population. In another 27 high-dollar cases and 68 random cases, we found that over 60 days elapsed between the judgment date and when the lien was filed. The filing of liens is further delayed if judgments are not promptly received and entered into the collection tracking system. 
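The elapsed-time figures cited in this section are simple date differences. The sketch below illustrates the calculation using hypothetical case records; because the FLUs did not stamp the date a JCC copy was received, elapsed time is measured from the judgment date, as it was in our case reviews, and tested against the 30-day benchmark we used.

```python
from datetime import date

# Hypothetical case records. The 14-day requirement runs from receipt of the
# JCC, but because the FLUs did not stamp receipt dates, elapsed time here is
# measured from the judgment date and tested against the 30-day benchmark
# used in our case reviews.
cases = [
    {"case": "A", "judgment": date(1998, 2, 10), "entered": date(1998, 11, 25), "lien": date(1999, 3, 2)},
    {"case": "B", "judgment": date(1997, 6, 5),  "entered": date(1997, 6, 16),  "lien": None},
]

for c in cases:
    days_to_entry = (c["entered"] - c["judgment"]).days
    flag = "exceeds 30-day benchmark" if days_to_entry > 30 else "ok"
    days_to_lien = (c["lien"] - c["judgment"]).days if c["lien"] else "no lien filed"
    print(c["case"], days_to_entry, flag, days_to_lien)
# A 288 exceeds 30-day benchmark 385
# B 11 ok no lien filed
```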
For the 38 high-dollar and 96 random cases in which liens had been filed, the time from the date the case was entered into the system until a lien was filed averaged 142 and 356 days, respectively, and the time from the judgment date to filing averaged 410 and 639 days, respectively. In most cases, the FLUs were not able to determine why a lien was not filed or not promptly filed. Not promptly filing liens or not filing them at all significantly increases the potential for offenders to liquidate their assets and avoid repaying debts owed to the government. For example, an offender in our selection was fined $25,000 in 1989. During 1991, the offender reported receiving net proceeds of $180,000 from selling a home and $18,000 from selling a boat (the offender did not specify whether the proceeds from the sale of the boat were gross or net). The offender’s last payment was received in 1995. As of May 2000, no lien had been filed, and the offender owed over $15,000 plus over $13,000 of interest. According to FLU officials, the file does not indicate why the lien was not filed. Had the FLUs promptly filed a lien, the proceeds from these sales might have been applied towards payment of the fine. The U.S. Attorneys’ Manual specifies that the USAO should “execute on” (i.e., seize) an offender’s property as soon as possible after sentencing. According to this manual, in order to identify property owned by the offender, the USAO should “promptly and vigorously” perform asset discovery work, which includes procedures such as reviewing the pre-sentence report, requesting financial statements and tax returns from the debtor, obtaining credit reports, and researching on-line property locator services. However, we found that the four FLUs we visited performed very limited asset discovery work and that established procedures did not specifically identify when these procedures were required to be performed. For example, in 48 percent of the high-dollar cases and in an estimated 66 percent of the sampled population, we found no evidence that the FLU attempted to identify the debtor’s assets. According to USAO officials, asset discovery work is performed only if the FLU believes, based on its judgment, that the offender may have assets. Moreover, there is no requirement to document these judgments or whether they were made. Not promptly identifying whether an offender has assets increases the risk that the offender may have time to hide or liquidate assets that could have been available to pay toward the debt. For example, an offender was convicted of tax fraud, sentenced in 1994, released from supervision in 1995, and ordered to pay about $344,000 in restitution. As of May 2000, the offender had paid only $750. Our review of the FLU file showed that the FLU did not perform asset discovery work before May 2000. After this offender had been selected for our review, the FLU performed an asset search and scheduled a deposition with the offender to determine whether there were assets that could be liquidated. However, over 6 years had passed since the offender was sentenced; therefore, the offender could have previously liquidated or hidden his assets. We also found that district guidance at the four FLUs we visited specifies that asset discovery work should be performed, but the guidance does not (1) establish time frames for performing the work or (2) prioritize debt cases based on factors that indicate increased potential for collections.
Lack of time frames and prioritization increases the risk of delays in performing asset discovery work and thereby the potential for missing opportunities to maximize collections. In addition, asset discovery work is further delayed if the judgment is not promptly received and the information is not promptly entered into the case tracking system. Factors that could help prioritize collection efforts include the type of crime or the type of victim. For example, collection rates tend to be higher for offenses related to white-collar crimes than for those related to violent crimes. Or cases involving hundreds of nonfederal victims may take higher priority over those with a relatively insignificant fine amount owed to the federal government. The U.S. Attorneys’ Manual states that USAOs are to “litigate vigorously” to enforce the collection of debts “to the fullest extent of the law” and that the “government should execute on an offender’s property as soon as practicable after sentencing.” FLUs are authorized by law to perform a wide range of enforcement techniques, such as wage garnishment and asset seizure, to collect criminal debt. If offenders willfully do not pay their criminal debt, FLUs can summon them to appear in court. In court, offenders can be ordered to answer questions under oath or in writing about their financial status or explain why they have not complied with the court’s order for paying a debt. FLUs can also obtain a court order, called a writ of execution, that permits the U.S. Marshals Service to seize an offender’s property as complete or partial payment of a fine or restitution. Writs of execution can also be applied against an offender’s income or bank account in a process called garnishment. In October 1985, we reported that the FLUs we visited rarely used the techniques we have just discussed due to several factors, including limited resources. Based on our reviews at the four districts we visited, the FLUs are still rarely using any of these enforcement techniques, and the guidance does not specify when and how frequently these techniques should be used. For example, we found only one case in which the FLU garnished wages. According to FLU officials, enforcement techniques are not pursued until the FLU determines that an offender has assets or sufficient earnings and is willfully not paying amounts owed. However, as noted above, the FLU is performing limited asset discovery work to determine whether assets do exist that could be pursued, as indicated in the example where the FLU performed an asset search in May 2000, 5 years after the offender’s release from supervision, and only after the case had been selected for our review. Based on that search, the FLU scheduled a deposition with the offender to determine whether there were assets that could be liquidated. In another example, an offender was convicted of embezzlement, false imprisonment, and tax evasion and was sentenced in 1998 to 20 months of imprisonment and 3 years of probation. The offender was also ordered to pay approximately $67,000 in restitution in monthly installments of at least 10 percent of gross monthly income. After sentencing, but before surrendering for incarceration, the offender sold property and realized a profit of about $13,000. According to FLU officials, the FLU could have forfeited the property, but they could not explain why this option was not pursued. The FLUs use event codes in their collection tracking systems to document actions taken to pursue collection and the status of cases. 
According to the U.S. Attorneys’ Manual, criminal debts that are placed in suspense must be periodically reviewed to determine whether the offender’s status has changed and to reassess the offender’s ability to pay (see table 3). For example, the code “DDNL” is used to place an account in suspense when a debtor cannot be located. This policy allows FLUs to keep criminal debts “open” as legally required while limiting the time and effort to be spent on a case. In September 1993, Justice’s Office of the Inspector General (OIG) reported that the FLUs were not adhering to the prescribed policies for reviewing debts in suspense. During our reviews, we also found that the FLUs we visited were still not consistently using the event codes, including suspense codes, and they were not following their prescribed procedures for reassessing an offender’s ability to pay. Specifically, we found that the event code as of September 30, 1999, was inconsistent with the information in the case file for 14 percent of the high-dollar cases and for an estimated 20 percent of the sampled population. For example, an offender was fined $100,000 in October 1989. The offender reported over $420,000 in net worth on a personal financial statement dated September 1989 and was making payments towards the fine until September 1997, at which time the offender still owed $82,000 plus interest and penalties. However, the FLU had not reviewed the case from September 1997 through April 2000, well over the 1-year frequency-of-review guidelines for debt over $25,000. In addition, the FLUs did not promptly monitor cases and update their records, resulting in an inaccurate principal balance in EOUSA records. In 40 percent of the high-dollar cases and for an estimated 65 percent of the sampled population, the FLU had not revisited the case within established time frames. For example, an offender in our selection was ordered to pay restitution of $20 million in November 1991. The offender had paid over $50,000 before his death in 1993. However, as of September 30, 1999 (over 6 years after the offender’s death), the FLU’s records still showed that the offender owed a balance of over $19.9 million. Had the FLU revisited this case in a timely manner, this amount would have been written off. We found that the September 30, 1999, balance for GAO-selected cases was overstated by more than $450 million. Five of the high-dollar cases we reviewed related to a single criminal case with several defendants who were jointly and severally liable for all or part of the total restitution owed. To avoid double counting in joint-and-several cases, the FLUs are to open one record for the lead defendant and track all other codefendants or coparticipants under the lead defendant’s record. However, the FLU inappropriately opened separate records for these defendants, thereby overstating the amount owed as of September 30, 1999, by more than $430 million. The EOUSA Manual requires the FLUs to send a demand letter to offenders “as soon as” a case is entered into the criminal tracking system, notifying offenders of their debt and the consequences of not paying the debt (i.e., interest and penalties would be assessed).
While three of the four districts in our review have incorporated this guidance into their local procedures, local procedures for the fourth, the Central District of California, state that the FLU should not send demand letters to an offender who is under the supervision of a probation officer. In October 1985, we reported that demand letters were sent in only 17 percent of the cases reviewed in five districts, and of those sent, the average number of days the FLUs took to send the letters was 143. Not sending or not promptly sending demand letters continues to be a problem for the USAOs. The problem could be attributed, in part, to the lack of specific guidance as to when demand letters should be sent. For example, EOUSA guidance states that demand letters be sent “as soon as” a case is entered into the criminal tracking system; however, it does not address situations in which this guidance may not apply, such as debts entered into the tracking system that are not yet due. We found that the FLUs had not sent demand letters required by the EOUSA Manual in 69 percent of the high-dollar cases and in an estimated 45 percent of the sampled population. Sending demand letters is further delayed by the amount of time it takes the FLUs to receive the judgments and enter information from them into their tracking systems. For the high-dollar and random cases for which demand letters were sent, the FLUs took an average of 163 and 433 days, respectively, to send the first letter after the judgment was entered into the tracking system. Also, for those same high-dollar and random cases, the FLUs took an average of 481 and 712 days, respectively, from the judgment date to send the first letter. In accordance with 18 U.S.C. 3612(d), delinquency notices should be sent within 10 working days after a fine or restitution is determined to be delinquent (i.e., a payment more than 30 days late). A payment that is not made within 90 days after it is determined to be delinquent is in default, and a default notice should be sent within 10 working days. However, we found that neither delinquency nor default notices were sent in 21 of the 24 high-dollar cases and 58 of the 101 random cases in which the debt was determined to be delinquent or in default. Failing to promptly inform an offender of the penalties for not making payments diminishes the incentive for the offender to make prompt payments. In accordance with 18 U.S.C. 3612(f), interest and penalties are required to be assessed on unpaid fines or restitution over $2,500 unless the court waives this requirement (i.e., if the judge specifically states in the JCC that interest and/or penalties are waived). However, according to the U.S. Attorneys’ Manual, a determination of whether to waive interest and penalties should be considered only after the principal has been paid. In September 1993, the Justice OIG reported that 8 of the 10 offices they visited did not pursue penalties and that 2 had waived both interest and penalties for all delinquent debts. The OIG recommended that the EOUSA emphasize the need for assessing interest and penalties. However, we found that the four FLUs we visited still were not consistently assessing interest and penalties.
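The tests involved are mechanical, as the following sketch illustrates. It reflects only the timing and dollar thresholds as characterized in this report (a debt becomes delinquent when a payment is more than 30 days late, is in default when still unpaid 90 days after becoming delinquent, and carries interest and penalties on unpaid amounts over $2,500 absent a waiver); it is not the FLUs’ tracking-system logic, and the dates shown are hypothetical.

```python
from datetime import date

def payment_status(due_date, as_of):
    """Classify an unpaid installment under the timing rules described
    above: delinquent when more than 30 days past due, and in default
    when still unpaid 90 days after becoming delinquent."""
    days_late = (as_of - due_date).days
    if days_late <= 30:
        return "current"
    return "in default" if days_late > 30 + 90 else "delinquent"

def interest_required(unpaid_balance, waived_by_court=False):
    """As characterized in this report: interest and penalties apply to
    unpaid fines or restitution over $2,500 unless waived."""
    return unpaid_balance > 2_500 and not waived_by_court

print(payment_status(date(1999, 1, 15), date(1999, 3, 1)))  # delinquent (45 days late)
print(payment_status(date(1999, 1, 15), date(1999, 7, 1)))  # in default (167 days late)
print(interest_required(10_000))                            # True
```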
While in some instances, the FLUs assessed required interest, in 4 out of 7 high-dollar and in 12 out of 49 random cases that required interest and penalties to be assessed, the FLUs had not done so. Moreover, the FLUs generally do not assess penalties. EOUSA officials believe that assessing interest and penalties is not productive because the principal debt itself is often difficult to collect. As shown in table 4, inconsistently applying interest and penalties leads to inconsistent data and an understated balance. According to the EOUSA database, as of September 30, 1999, the outstanding debt balance included over $400 million of interest and penalties assessed by the FLUs. However, because the FLUs do not consistently assess interest and penalties, the reported amounts do not accurately represent how much total principal, interest, and penalties are due. In addition, failure to assess interest and penalties reduces the amount that could be recovered and passed along to victims or the federal government and eliminates a tool designed to provide debtors an incentive for prompt payments. Effective and prompt collection actions are affected by the adequacy of human resources. We recently designated human capital a governmentwide high-risk area, emphasizing that an organization’s people—its human capital—are its most critical asset in managing for results. Our high-risk report explains that human capital problems lead to programmatic problems and risks and that human capital shortfalls are eroding the ability of many agencies to effectively, efficiently, and economically perform their mission. In addition, according to the Comptroller General’s Standards for Internal Control in the Federal Government, only when the right personnel for the job are on board and are provided the right training, tools, structure, incentives, and responsibilities is operational success possible. The lack of asset investigators and the limited number of collection staff have presented a historical problem for the FLUs. In October 1985, we reported that debt collection, especially criminal debt collection, receives low priority and suffers from staffing problems. In that report, we stated that personnel spent more time on accounting for criminal fines than on enforcing collection. We also reported that the FLUs, who are responsible for civil and criminal collections, were staffed with one attorney and from 1 to 10 collection clerks, depending on the size of the district (generally the same staffing levels as in 1999). In July 1990, we reported that FLUs stated that they have insufficient trained staff to aggressively follow up on and enforce collections. In September 1993, the Justice OIG reported that as data entry responsibilities increase, less time is spent on actual criminal debt collection actions. Staffing levels for the four FLUs we visited have only slightly increased from an average of 8.7 individuals during 1995 to 9.3 individuals during 1999, even though the number of assessments and debts pending have significantly increased. Specifically, the number of debts pending for the four FLUs we visited increased from an average of 4,406 to 6,373 cases per district, or about 45 percent, and the average dollar amount of outstanding debts per staff increased by over 160 percent. Table 5 reflects the average number of criminal cases compared with the average number of staff (i.e., workload) for fiscal years 1995 and 1999 for the four FLUs we visited. 
In addition to the criminal case workload data presented in table 5, the FLUs are also responsible for collecting civil debt that other federal agencies refer to them. The number of outstanding civil debts for all FLUs increased from 44,786 debts as of the end of fiscal year 1995 to 146,421 at the end of fiscal year 1999. Further, none of the four FLUs we visited had full-time resources dedicated to or specializing in performing searches to identify hidden assets, and they had few resources available for enforcing collections. When assets are not promptly identified, offenders have more time to hide fraudulently obtained assets, such as funds in offshore accounts, shell corporations, or family members’ names and accounts. Once assets are identified, the FLUs should pursue collection through the use of enforcement techniques (i.e., legal remedies); however, most of the individuals assigned to the FLUs we visited were not attorneys or paralegals, whose skills are needed to pursue such techniques. EOUSA officials have historically recognized the need for additional training and staff, but they indicate that budget constraints limit the FLUs’ ability to provide the additional training or hire additional staff that would enable them to collect debt more effectively. In addition, an official from the FLU in the Southern District of New York in Manhattan indicated that this district often has difficulty in filling its lower-paying positions, such as those for debt collection agents. In conjunction with documenting our initial understanding of the debt collection process, we visited the Northern District of California. During this visit, we were briefed on a project that this district had initiated in 1999 to employ dedicated asset investigators. According to the EOUSA, the project, which provided for one full-time and four part-time former criminal investigators, has been very successful. The district reported that over 1,000 cases were investigated and over $10 million has been or is in the process of being recovered as a result of those investigations. As noted above, three of the four districts where we performed our testing did not have dedicated asset investigators, and the collection staff was not performing significant asset discovery work. The FLUs’ tracking systems do not capture key information needed for the FLUs and EOUSA to effectively manage the debt portfolio. As we reported in June 1994, the FLUs’ tracking systems do not indicate the terms of the fine or restitution orders. This continues to be a problem for the FLUs we visited. For example, although a JCC may state that an offender owes at least a certain amount on a periodic basis, this information would not be reflected in the systems. The tracking systems also do not capture an offender’s expected release dates from prison and probation, information that could assist the FLUs in determining time frames for reassessing an offender’s ability to pay. In addition, the systems do not permit the FLUs to allocate outstanding debts between amounts likely to be collected and those not likely to be collected. For example, even if an offender is making monthly installment payments, the FLU must either put the entire balance in suspense or none of the balance in suspense. The four probation offices we visited did not consistently adhere to certain policies and procedures for developing pre-sentence reports and collecting criminal debt. 
The AOUSC provides guidance to probation officers for (1) developing pre-sentence reports, (2) establishing installment schedules, and (3) monitoring installment schedules. However, we found that the probation offices we visited were not always following these procedures, thereby decreasing the usefulness of financial information in pre-sentence reports and the potential for maximizing criminal debt collections. In June 1998, we recommended that the AOUSC establish, as policy, specific guidance on how probation officers should determine how offenders should pay their fines and restitution, including criteria establishing what types of assets should be considered for immediate lump-sum payments or substantial payments, how installment schedules should be established, and the type and amount or range of expenses that should ordinarily be considered necessary when determining the amount of payments under installment schedules. To address these recommendations, the AOUSC issued revised guidance in September 2000 that, if properly implemented, should help address the reported weaknesses. However, as the AOUSC and we pointed out in that report, unless probation officers effectively implement these guidelines, such weaknesses will continue to exist.

Prior to sentencing, probation officers perform "financial investigations" of offenders' financial condition for inclusion in pre-sentence reports. This includes collecting, verifying, and analyzing financial information regarding the offender. Probation officers depend on offenders to provide certain financial information; however, offenders are not always cooperative. In 10 of the 42 high-dollar and 18 of the 125 random cases we reviewed, the offender did not provide this information, thus decreasing the usefulness of the pre-sentence report for debt collection purposes. Regardless of whether the offender provides this information, probation officers are responsible for taking steps to determine an offender's ability to pay, such as obtaining pay stubs, reviewing tax returns, searching for assets, and running credit reports. However, in 20 of the 42 high-dollar and 40 of the 125 random cases, we found that probation officers did not take adequate steps to develop the financial condition section of the pre-sentence report. For example, one probation officer included information provided by the offender and obtained a credit report to verify liabilities but did not take the steps needed to identify assets or verify income. In another case, the offender did not provide information, and there was no evidence in the file that the probation officer attempted to obtain financial information by other means. The probation officer for another case obtained prior years' income tax returns to verify income information provided by the offender but did not take the steps needed to identify assets. Since the offender most likely would not report income obtained through criminal activities on his tax returns, other steps should have been taken to assess the reasonableness of reported income versus the offender's lifestyle. Probation officials indicated that they often have limited time frames for preparing the pre-sentence reports and have to obtain the offender's consent and cooperation to obtain certain information and documents (e.g., tax returns). Further, probation officials have indicated that their investigations focus on analyzing information provided by the offender and not necessarily on identifying unreported assets.
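The "adequate steps" assessment applied in these case reviews can be viewed as a completeness check against the verification actions the guidance calls for, such as obtaining pay stubs, reviewing tax returns, searching for assets, and running credit reports. The sketch below is illustrative only; the step names are assumptions made for this example and are not drawn from AOUSC guidance or any probation office system.

```python
# Illustrative completeness check for the verification steps discussed above.
# The step names below are assumptions for illustration, not AOUSC terminology.

REQUIRED_STEPS = {
    "income_verified",        # e.g., pay stubs or prior tax returns obtained
    "liabilities_verified",   # e.g., credit report run
    "assets_searched",        # e.g., property, vehicle, or bank account searches
    "lifestyle_compared",     # reported income assessed against apparent lifestyle
}

def missing_verification_steps(steps_completed: set[str]) -> set[str]:
    """Return the verification steps not documented for a pre-sentence financial investigation."""
    return REQUIRED_STEPS - steps_completed

# Example: a case file showing only a credit report and tax returns would be flagged
# as lacking asset searches and a comparison of reported income with lifestyle.
print(missing_verification_steps({"income_verified", "liabilities_verified"}))
```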
According to 18 U.S.C. 3572(d), offenders should pay their fines and restitution "immediately, unless, in the interest of justice, the court provides for payment on a certain date or in nominal installments." As noted earlier, depending on the district, installment schedules are established by a judge and documented in the JCC or by probation officers while an offender is under their supervision. Therefore, once a probation officer is supervising an offender, the officer either (1) monitors the court-ordered installment schedule or (2) establishes and then monitors the installment schedule. Probation officers were required to establish installment schedules in 10 of the 42 high-dollar cases and in 72 of the 125 random cases. However, probation officers did not establish installment schedules in 3 of the 10 high-dollar cases and 15 of the 72 random cases, as required. In the other cases, the probation officers were not required to establish an installment schedule because (1) the offender was still in prison or (2) the court stipulated an installment schedule. As discussed in chapter 2, judges may stipulate payment terms in the JCC. These terms can influence actions taken by probation officers and the FLUs to collect criminal debt. See table 6 for the status of offenders in our selected cases as of May 2000.

Unless a court has set a payment schedule, probation officers should establish installment schedules (or reassess court-established schedules) to collect outstanding criminal debt from offenders once they are released to their supervision. Probation officers should recommend that offenders make a full lump-sum or a significant one-time partial payment based on their ability to pay and establish an installment schedule for the balance not paid. The guidelines require probation officers to request that offenders periodically report on their financial circumstances by preparing personal financial statements listing their assets—such as bank accounts, securities, and real estate—that could be used for lump-sum payments against their fines and restitution. The FLU should be notified if lump-sum payments are made, and identified assets should be reported to the FLU so it could pursue collection. According to AOUSC guidelines, probation officers should set an installment payment schedule based on the offender's monthly cash flow if full payment is not possible. The monthly cash flow is determined by deducting necessary monthly expenses from monthly income. Necessary expenses are broadly defined as those for the offender's continued employment and for the basic health and welfare of the offender's dependents, which could include home rent or mortgage, utilities, groceries and supplies, insurance, transportation, medical treatment, and clothing.

We found deficiencies in the establishment of installment schedules, including inadequate recommendations for significant partial payments, in 3 of 7 high-dollar and 16 of 57 random cases in which installment schedules were established. Specifically, we found that probation officers did not consider an offender's reported assets, such as bank accounts and second homes that might have been available for full or partial payment of a fine or restitution. Instead, probation officers typically established installment schedules and did not recommend lump-sum payments or liquidation of assets.
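Before turning to the case examples that follow, note that the AOUSC cash-flow method described above amounts to simple arithmetic: monthly cash flow is monthly income less necessary expenses, and reported liquid assets point toward a lump-sum or significant partial payment. The following minimal sketch assumes illustrative field names and a simplified asset rule; it is not drawn from any probation office system.

```python
# Minimal sketch of the AOUSC cash-flow approach described above.
# Field names and the asset rule are illustrative assumptions only.

def monthly_cash_flow(monthly_income: float, necessary_expenses: float) -> float:
    """Monthly cash flow = monthly income minus necessary monthly expenses."""
    return monthly_income - necessary_expenses

def recommended_payment(balance_due: float,
                        monthly_income: float,
                        necessary_expenses: float,
                        available_assets: float) -> dict:
    """Recommend a lump-sum amount (from reported liquid assets) and a monthly
    installment based on positive monthly cash flow, capped at the balance due."""
    lump_sum = min(available_assets, balance_due)
    remaining = balance_due - lump_sum
    installment = max(monthly_cash_flow(monthly_income, necessary_expenses), 0.0)
    return {"lump_sum": round(lump_sum, 2),
            "monthly_installment": round(min(installment, remaining), 2)}

# Hypothetical example: $3,000 in monthly income, $2,500 in necessary expenses,
# and $4,000 in liquid assets against a $20,000 restitution balance.
print(recommended_payment(20_000, 3_000, 2_500, 4_000))
# -> {'lump_sum': 4000, 'monthly_installment': 500}
```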
For example, an offender was convicted of tax fraud in September 1994, sentenced to 5 months in prison and 1 year of probation, and ordered to pay approximately $344,000 in federal restitution. Although the offender reported having significant assets on his financial submission for use in preparing the pre-sentence report, the probation officer did not recommend a significant partial payment. Instead, the probation officer established a $25-per-month installment schedule, which the offender stopped paying after he was released from probation. Even if the offender had continued to make these payments, it would have taken over 1,000 years for the debt to be paid off. We also found that probation officers used arbitrary methods, such as negotiated amounts and good-faith payments, to establish the installment payment schedules, instead of linking them to income, expenses, or other financial criteria as required. For example, an offender who was ordered to pay $5,900 in restitution entered into a payment agreement with the probation office that called for $10 monthly payments, even though financial submissions indicated that the offender had a positive monthly cash flow (income minus necessary monthly expenses) of about $360. At a rate of $10 a month, it would take the offender over 49 years to pay off the debt.

Offenders under supervision are to submit to their probation officers (1) monthly supervision reports listing income and necessary expenses and (2) on a less frequent basis, updated personal financial statements. Probation officers are required to scrutinize these reports, including the type and amount of offender-reported "necessary expenses." In addition, probation officers may request that offenders increase or decrease installment payment amounts if their ability to pay changes (with court approval if the court set the payment schedule). However, we found that probation officers did not follow their guidelines for reviewing an offender's financial circumstances. Following the guidelines could have allowed for increased installment payments in 1 of the 13 high-dollar and 12 of the 77 random cases in which an installment schedule had been established by either the probation officer or the judge. We also found that in the cases in which the judge set a minimum amount that must be paid or set other payment terms, probation officers typically did not recommend increased payment amounts or liquidation of assets, even if an offender's financial circumstances improved (see examples and related discussion in chapter 2). For example, for the offender with the $25-per-month payment, the probation officer did not attempt to increase the amount even though the offender reported (1) significant assets on his financial submission for use in preparing the pre-sentence report, (2) ownership of two vehicles (a 1990 Lexus and a 1991 Ford Bronco), (3) a net positive cash flow on his monthly reports, and (4) bank accounts without listed balances. In another example, an offender was convicted of mail fraud, sentenced in 1998 to 3 years of probation, and ordered to pay restitution in the amount of $12,000. The judge ordered restitution to be paid in $100 quarterly payments (i.e., about $33 per month) unless modified by the probation officer. For the July 1999 reporting period, 9 months after being sentenced, the offender reported a positive monthly cash flow of over $1,500; however, the probation officer did not recommend that the payment amount be increased.
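The payment horizons cited in these examples follow directly from the balances and monthly amounts involved. A short worked check, ignoring interest and penalties and using the figures reported above:

```python
# Payoff horizon implied by a fixed monthly installment, ignoring interest and penalties.
# The figures below are the amounts cited in the examples above.

def years_to_pay(balance: float, monthly_payment: float) -> float:
    """Approximate number of years to retire a balance at a fixed monthly payment."""
    return balance / monthly_payment / 12

print(round(years_to_pay(344_000, 25), 1))   # ~1146.7 years for the $344,000 restitution at $25 per month
print(round(years_to_pay(5_900, 10), 1))     # ~49.2 years for the $5,900 restitution at $10 per month
print(round(years_to_pay(12_000, 33), 1))    # ~30.3 years for the $12,000 restitution at about $33 per month
```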
Not adequately monitoring an offender's financial circumstances results in missed opportunities to seek an increase in an offender's installment payments. Probation officers have considerable leverage over an offender under supervision and can take actions if offenders are not making agreed-upon installment payments. For example, probation officers can withhold consent for a debtor to travel outside the district, or they can seek to revoke probation. Even though installment payments typically range from $25 to $100 per month, offenders do not always make the agreed-upon installment payments. In several cases, we found no evidence that probation officers took action to enforce the installment schedules (i.e., sought to revoke probation and send the offender back to prison) when the offender failed to make agreed-upon installment payments. Probation officers indicated that they must prioritize their time in light of the number of offenders they are supervising. They stated that their first priority is to ensure that offenders do not engage in criminal activity or violate other terms of probation (e.g., are not using drugs during the probation period). However, in a couple of instances, we found that when probation officers recommended against an offender's release from probation based on the offender's failure to pay criminal debt in accordance with an established installment schedule, the judges rejected the recommendations. For example, in April 1999, a judge for one of our sample cases granted an offender an early release from probation even though the probation office had recommended against the release, stating that the offender, who had been convicted of mail fraud in April 1997 and ordered to pay $175,000 in restitution, had not made sufficient restitution payments. Before his release, the offender had paid only $600. As of June 2000, the last payment from this offender was for $50, received in May 1999.

In over half the cases we reviewed at the four districts visited, we found little evidence of coordination among the entities involved in assessing and collecting criminal debt and a lack of policies and procedures to ensure that efforts are coordinated. For example, we found little evidence that prosecutors and probation officers had shared financial information with FLUs, thus potentially weakening the FLUs' ability to assess an offender's ability to pay. In addition, we found that FLUs typically were not monitoring the collection efforts of probation officers, as advised by the U.S. Attorneys' Manual, and that, contrary to district procedures, probation officers were not informing FLUs of an offender's upcoming release from probation. Furthermore, at the four districts we visited, the FLUs and the clerks' offices maintained separate databases to track criminal debt collections. This lack of coordination is a long-standing problem that has not been adequately addressed. The failure to adequately address this problem results in inefficient processes and duplication of efforts. Because of the many agencies and districts involved in assessing and collecting criminal debt—including two branches of the federal government and 94 districts—enhancing the effectiveness and efficiency of criminal debt collection hinges on these entities working together. Investigating agencies and prosecuting attorneys typically obtain substantial financial information concerning criminal debtors during the investigation of a case and prosecution of offenders.
However, no national requirements exist for sharing financial information with the FLUs, and only two of the four districts we visited have incorporated specific (but different) procedures for sharing financial information in their MOUs. The Justice OIG reported in September 1993 that (1) prosecuting attorneys (who are on the criminal side of the USAOs) did not always provide the FLUs (who are on the civil side of the USAOs) with available financial data on a regular, systematic basis and (2) no formal requirement exists for attorneys to provide this financial information to the FLUs. The OIG recommended that a formal national requirement be established for prosecuting attorneys to provide debtor financial information to the FLU staff after an offender has been sentenced. However, based on our case file reviews, we found that sharing financial information with the FLUs continues to be a problem in the four districts we visited. Specifically, in 52 percent of the high-dollar cases and in an estimated 61 percent of the sampled population, we found no evidence in the FLU files of correspondence with the investigating case agents or prosecuting attorneys. According to FLU officials, this type of correspondence may have occurred but was not documented in the case file. As stated in the Standards for Internal Control in the Federal Government, internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination.

After an offender is sentenced, district guidance requires the FLUs to obtain a copy of the financial information contained in the pre-sentence report from either the probation officers or the prosecuting attorney. In October 1985, we reported that guidance did not exist for probation offices to share information with the FLUs and that probation officers did not routinely provide such information to the FLUs. In September 1993, the Justice OIG reported that 154 of 185 FLU files they reviewed did not contain a copy of the pre-sentence report. Recently issued guidance now specifically requires probation officers to share financial information from pre-sentence reports with the FLUs. In most of the high-dollar and random cases reviewed, we found no evidence in the FLU files that the FLUs had reviewed a copy of the pre-sentence report. As a result of this lack of coordination, the FLUs do not have valuable financial information needed to assess an offender's ability to pay, to enforce collections, and to reduce duplication of effort in identifying assets. For example, an offender in our sample reported over $420,000 of net worth in a personal financial statement dated September 1989 that was used by the probation office to prepare the pre-sentence report. In another example, prior to sentencing, an offender provided a bank statement showing a balance of over $73,000; however, there was no evidence that actions were taken to pursue these funds. In both examples, there was no evidence in the FLU's files that the FLU had obtained a copy of the pre-sentence report. If the information had been shared with the FLUs, they could have used this report as a starting point for performing asset discovery work.

In October 1985, we reported that although the U.S. Attorneys' Manual advises the FLUs to monitor the collection efforts of probation offices, there was little involvement by the FLUs in probation office collections.
The guidance requires the FLUs to maintain contact with probation offices regarding the offender's compliance or failure to pay criminal debt. However, district guidance for the four districts we visited states that the FLUs are to assist the probation offices, if requested. During our reviews, we found that the FLUs typically did not monitor collection efforts of probation officers and that probation officers rarely requested assistance or notified the FLUs before an offender was released from probation. In general, the FLUs did not pursue collection until they determined that an offender had been released from probation. In commenting on our June 1998 report related to establishing offenders' payment schedules, the AOUSC stated that greater emphasis should be placed on Justice's role in collecting fines and restitution because Justice has primary responsibility for collecting criminal debt. We believe that the current district guidance, which states that FLUs should assist if requested, adversely affects the FLUs' ability to enforce debt collection and puts them in a reactive instead of a proactive role. For example, in March 1989, an offender was ordered to pay $26 million in restitution to hundreds of investors who had invested in the offender's fraudulent company. In April 1995, the offender was released from prison and in 1997 made several payments before moving to a different district. In April 1999, he agreed to make $500 monthly payments to one of the financial institutions he owed money to, plus 50 percent of the income from future speaking engagements and 100 percent of the income from the "movie rights" he sold pertaining to a published novel he wrote. As of June 2000, the FLU had not pursued collection because, according to the FLU, the offender is "under the supervision of probation," and the probation office had not requested its assistance. There have been no recorded payments since 1997. An Internet search that we performed revealed that the offender is involved in many activities from which he is most likely deriving additional income, including publications, a spot on a radio program, and a full-time salary.

Several months before an offender is to be released from probation with an outstanding debt, procedures at the four districts we visited require probation officers to notify the applicable FLU. However, there was rarely evidence of such notification in the FLU's files. As we reported in chapter 3, the FLU's tracking system does not adequately track the status of an offender; consequently, unless the probation officer notifies the FLU, the FLU will not always know when an offender is scheduled to be released from supervision. In 3 high-dollar and 7 random cases we reviewed, the offenders stopped making installment payments when they were released from supervision. In these instances, there was no indication in the file that the probation officer notified the FLU of the offender's release or the status of the offender's criminal debt obligations, including the terms of the installment agreement. For example, an offender was sentenced in October 1989 to 2 years in prison and 5 years of probation for income tax evasion. The offender was also ordered to pay a $100,000 fine. During supervision, the offender made over $17,000 in payments, with the last payment occurring in September 1997, 1 month before the offender's release from supervision.
There was no evidence in the FLU file that it had been notified of the release, and no collection actions were taken by the FLU for this case until it was selected for our review in April 2000. Lack of communication between the FLUs and the probation offices about offenders’ installment schedules, assets, and release dates hinders timely notification of the status of an offender’s compliance with payment arrangements and related events, thus decreasing the potential for collections. In each of the four districts we visited, the clerk’s office and the FLU maintain separate databases to account for criminal debt collections, resulting in duplicative and inefficient data entry for both entities. Although the courts are responsible for processing collections and disbursements for most criminal debt, clerk’s office officials have stated that they do not have the systems in place to calculate required interest. Instead, the clerk’s offices rely on the FLUs’ tracking systems to calculate interest, if assessed. Posting information to these databases typically requires the exchange of hardcopy information between the clerk and the FLU so that both databases can be updated to properly reflect collections and disbursements. The National Fine Center (NFC) was supposed to eliminate this duplication; however, since the NFC effort failed (as noted in chapter 1), both entities continue to maintain separate systems for tracking collections and disbursements. Highlighting this inefficiency is the fact that each month both entities must post payments received from offenders participating in the Bureau of Prisons (BOP) Inmate Financial Responsibility Program (IFRP), in which a portion of prisoners’ earnings is used to pay their outstanding debt. Hundreds of inmates typically participate in the program. Monthly or quarterly payments received from each inmate are generally small dollar amounts, but they are collectively large in volume. For example, a typical monthly report from BOP for the Eastern District of New York contains about 400 inmate debt payments. Since both the FLU and the clerk’s office track payments, each entity must determine what debt balance (i.e., special assessment, fine, or restitution) to apply these 400 payments to and then post each payment. If the payment is for restitution, amounts collected must be prorated to the victims (sometimes hundreds of victims) before checks can be disbursed. Maintaining these separate, nonintegrated systems also places greater emphasis on the need for timely coordination and communication so that data in these systems are accurate and the information is timely. For the cases we reviewed in which payments had been collected, there was typically a delay between the time that the clerk posted a payment and the time that the FLU posted the same payment. We also found that the FLUs typically did not inform the clerk of payments they received, resulting in several significant differences in the payment records. For example, an offender was convicted of Racketeer Influenced and Corrupt Organizations Act (RICO) violations, wire fraud, and bribery and was sentenced in 1996 to 60 months of incarceration and 3 years of supervised release. The offender was ordered to make restitution in the amount of $412 million. According to clerk’s office records, $3,050 had been paid as of September 30, 1999, while FLU records showed that over $11.7 million had been paid as of the same date. 
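A periodic reconciliation of the two sets of records would surface discrepancies such as the one just described. The sketch below is illustrative only; it assumes each entity can export payments keyed by a case number and amount, which is an assumption about the data rather than a description of the actual clerk or FLU systems.

```python
from collections import defaultdict

# Illustrative reconciliation of payments recorded by the clerk's office and the FLU.
# The (case_number, amount) input format is an assumption for illustration only.

def total_by_case(payments: list[tuple[str, float]]) -> dict[str, float]:
    """Sum recorded payments by case number."""
    totals: dict[str, float] = defaultdict(float)
    for case_number, amount in payments:
        totals[case_number] += amount
    return totals

def reconcile(clerk_payments: list[tuple[str, float]],
              flu_payments: list[tuple[str, float]],
              tolerance: float = 0.01) -> dict[str, tuple[float, float]]:
    """Return cases whose recorded collections differ between the two systems."""
    clerk_totals = total_by_case(clerk_payments)
    flu_totals = total_by_case(flu_payments)
    differences = {}
    for case in set(clerk_totals) | set(flu_totals):
        c, f = clerk_totals.get(case, 0.0), flu_totals.get(case, 0.0)
        if abs(c - f) > tolerance:
            differences[case] = (c, f)
    return differences

# Hypothetical case number; a case recorded at $3,050 by the clerk but
# $11.7 million by the FLU would be flagged for follow-up.
print(reconcile([("96-CR-123", 3_050.00)], [("96-CR-123", 11_700_000.00)]))
```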
District guidance at the four districts we visited did not specifically require the FLUs to notify the clerk's offices of payments they received or require the FLUs and the clerks to periodically reconcile payment data recorded in the two systems. Without timely notification of payments received or periodic reconciliations, differences between the two systems will continue to exist.

We also identified inefficient practices involving the processing of disbursements to victims. In 76 districts, the clerk's offices are receiving all types of criminal debt payments from offenders and disbursing checks to restitution victims; however, in 18 of the 93 USAOs, the FLUs receive restitution payments from offenders for offenses that occurred before the MVRA and disburse checks for restitution only to pre-MVRA victims. Having these two entities in 18 of the districts perform similar functions results in wasted resources. According to AOUSC officials, they are working with the remaining clerk's offices to process pre-MVRA restitution. In addition, the four clerk's offices we visited generally set a low or no threshold amount for disbursing a check to a victim. As a result, we found instances in which the clerk's office issued checks for less than $10 to victims ranging from individuals to large financial institutions. In one example, restitution in amounts ranging from $500 to $75,000 was owed to several companies. Once every month or so, checks were being disbursed to these companies. In October 1998, 12 checks were issued ranging from 20 cents to $62, and 4 of these checks were returned as undeliverable, including one for 20 cents and another for 53 cents. Disbursing such small amounts is not cost-effective unless these are the final checks to be issued (i.e., the offender most likely will not be submitting additional payments).

Historically, management oversight of the criminal debt collection process has been divided between the executive and judicial branches, with Justice responsible for enforcing collections and the courts responsible for receipting and disbursing collections. This condition still exists today. In 1984, there was recognition of the increased need for centralized management of the collection process, and in 1987 efforts to establish the National Fine Center (NFC) began. The NFC was an attempt to automate and centralize the criminal debt collection process, which would have increased management oversight. However, since that effort was terminated in 1996, as noted in chapter 1, the collection responsibilities continue to be fragmented between Justice and the courts, with neither having a central management oversight role. Moreover, neither OMB nor Treasury has identified the need to take an active role in overseeing the federal government's process for collecting the billions of dollars of outstanding criminal debt. While the collection of such debt has been a long-standing problem, the substantial growth in the outstanding balance is a relatively recent development. Because serious coordination and cooperation problems among the fragmented entities involved continue to exist and because of the low collection rates, such oversight is needed. Effective oversight of the collection of criminal debt could be achieved by leveraging OMB's and Treasury's current respective central agency roles. For example, a primary function of OMB as a central agency is to evaluate the performance of executive branch programs and serve as a catalyst for improving interagency cooperation and coordination.
In its central role, OMB is also responsible for reviewing debt collection policies and activities. For example, OMB provides guidance to agencies in the form of circulars to assist them in meeting enacted legislation, such as the Debt Collection Improvement Act of 1996 (DCIA). As such, OMB could work with Justice and certain other executive branch agencies to ensure that these entities report and/or disclose relevant criminal debt information in their financial statements and subject such information to audit. In implementing provisions of the DCIA, Treasury, through its Financial Management Service, could assist Justice in identifying the types of delinquent criminal debt that would be eligible for reporting and referral to Treasury for collection actions. In turn, by better accounting for and reporting its delinquent criminal debt, Justice would enhance its own management oversight of this problem. Collectively, these efforts would place greater emphasis on the management and collection of criminal debt.

Although Justice and the courts develop unaudited annual statistical data for informational purposes, neither entity is accounting for any of these debts as receivables, disclosing the debts in financial statements, or having the receivable information subjected to audit. In addition, neither entity is referring eligible criminal debt to Treasury for collection. Having Justice and the courts properly account for, report, and manage criminal debts, with assistance from OMB and Treasury, would heighten management awareness and ultimately result in a more effective collection process.

According to Statement of Federal Financial Accounting Standards (SFFAS) No. 1, Accounting for Selected Assets and Liabilities, and SFFAS No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting, a receivable should be recognized once amounts that are due to the federal government are assessed, net of an allowance for uncollectible amounts. Also, in accordance with OMB Circular No. A-129, Policies for Federal Credit Programs and Non-Tax Receivables, agencies are to (1) service and collect debts in a manner that best protects the value of the federal government's assets and (2) provide accounting and management information for effective stewardship, including resources entrusted to the government (e.g., for nonfederal and federal restitution). Although both the courts and Justice have tracking systems in place, neither entity performs an analysis of criminal debts to estimate how much of the outstanding amounts are uncollectible (i.e., neither entity establishes an allowance for uncollectible accounts for amounts due to the federal government). Justice's tracking system allows for amounts to be recorded as "in suspense"; however, these amounts do not necessarily represent amounts that are uncollectible. In EOUSA's unaudited fiscal year 1999 annual statistical report, the FLUs classified as "in suspense" about $9.9 billion, or about 75 percent, of the approximately $13.1 billion in reported uncollected criminal debt as of September 30, 1999. However, since the collectibility of outstanding criminal debt has not been assessed, the amount in suspense does not represent an estimate of the amount that is expected to be uncollected (see chapter 3).
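For context on the accounting treatment just described, reporting criminal debt as a receivable under SFFAS No. 1 means pairing the gross amount assessed with an allowance for amounts not expected to be collected. The sketch below assumes, purely for illustration, that historical collection rates by debt type are available; as discussed in this report, no such collectibility analysis is performed today, and the rates and balances shown are hypothetical.

```python
# Minimal sketch of reporting a receivable net of an allowance for uncollectible amounts,
# as SFFAS No. 1 contemplates. The debt types and collection rates below are hypothetical
# assumptions for illustration; no such historical analysis exists today.

HISTORICAL_COLLECTION_RATES = {   # share of assessed debt historically collected, by type
    "special_assessment": 0.60,
    "fine": 0.25,
    "restitution": 0.05,
}

def net_receivable(outstanding_by_type: dict[str, float]) -> dict[str, float]:
    """Compute the gross receivable, the allowance for uncollectible amounts, and the net receivable."""
    gross = sum(outstanding_by_type.values())
    expected = sum(balance * HISTORICAL_COLLECTION_RATES.get(debt_type, 0.0)
                   for debt_type, balance in outstanding_by_type.items())
    return {"gross": gross, "allowance": gross - expected, "net": expected}

# Hypothetical balances, in billions of dollars:
print(net_receivable({"fine": 1.0, "restitution": 12.0}))
# -> {'gross': 13.0, 'allowance': 12.15, 'net': 0.85}
```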
Unless FLUs or the courts assess the collectibility of this debt, set expectations as to the amount of debt that can be collected, and compare expectations against actual collections, management cannot effectively monitor program performance in debt collection.

OMB oversees implementation of the Chief Financial Officers Act, as expanded by the Government Management Reform Act of 1994, which requires audited financial statements for the U.S. government, as well as the 24 major federal executive branch agencies and departments, including Justice. Justice prepares audited financial statements but is not recording or disclosing receivables for relevant criminal debt in them, and the U.S. courts are not required to prepare financial statements or to disclose this information. Therefore, criminal debt is not being reported in the U.S. government's financial statements. Financial statement disclosure by Justice would increase oversight of the process because reported amounts would be subject to audit under these acts. Such audits would include assessments of internal control and compliance with applicable laws and regulations related to the criminal debt process. Disclosure by the U.S. courts would also increase oversight, but the reported amounts would currently not be subject to audit.

The DCIA requires executive, judicial, and legislative branch agencies to transfer eligible nontax debt or claims over 180 days delinquent to Treasury for collection actions. Although referring delinquent criminal debt could increase collections and oversight of such debt, neither Justice nor the courts are currently referring delinquent criminal debts to Treasury. During our reviews, we found that prior to DCIA, the FLUs referred certain debts to the former Tax Refund Offset Program and were successful in collecting payments. For example, an offender was ordered in December 1987 to pay a $10,000 fine and $24,700 in restitution. For tax years 1993 through 1995, the FLU referred this debt to the offset program. In March 1996, the offender's 1995 tax refund of $1,756 was offset and applied toward payment of the fine. Justice officials believe that the courts should be responsible for referring criminal debt to Treasury because the law specifies that the courts are responsible for accounting for criminal debt collection activities. Court officials indicated that they do not currently have the systems in place for such referrals and that they may not be aware of other collection actions or legal remedies being pursued by the FLUs that could prohibit referral. The courts' tracking systems are not complete because the courts (1) rely on the FLUs' tracking systems to calculate interest due, (2) do not track pre-MVRA restitution cases in 18 of 94 districts, and (3) do not always record a debt (i.e., establish a receivable) until the first payment is received from an offender. Treasury officials stated that they rely on the agencies to notify them of delinquent debts that should be referred for collection. Justice has not been reporting this debt on its Report on Receivables and is not accounting for criminal debts as receivables or reporting them on its financial statements or other financial submissions. Treasury officials have stated that Treasury is willing to assist Justice and the courts in identifying types of criminal debts that would be eligible for referral and having the debt referred to Treasury for collection actions.

The collection of criminal debt has been a long-standing problem for the federal government.
Efforts over the past 15 years to centralize and automate the process have not been successful. Outstanding amounts continue to increase partly because many of the problems we reported on as far back as 1985 still exist. However, a dramatic increase in the balance of reported uncollected criminal debt is primarily attributable to the Mandatory Victims Restitution Act of 1996 (MVRA), which requires that restitution be assessed regardless of the ability of the offender to pay or the potential for collection. Major continuing problems are that the many entities involved in assessing and collecting criminal debt (1) do not always use available enforcement techniques or (2) do not coordinate efforts so that resources are used most effectively. Without additional high-level oversight and cooperation between the entities, criminal debt collection is likely to remain ineffective. Further, the value of assessing criminal fines and restitution as an effective punitive tool may be in jeopardy.

Addressing the long-standing problems in the collection of outstanding criminal debt—including fragmented processes and lack of coordination—will require a united strategy among the entities involved with the collection process. Therefore, we recommend that the Attorney General, the Director of the Administrative Office of the U.S. Courts (AOUSC), the Director of the Office of Management and Budget (OMB), and the Secretary of the Treasury work together in the form of a joint task force to develop a strategic plan to improve the criminal debt collection processes and establish an effective coordination mechanism among all entities involved in these processes. The strategy should address managing, accounting for, and reporting criminal debt. It should also include (1) determining an approach for assessing the collectibility of outstanding amounts so that a meaningful allowance can be reported and used for measuring debt collection performance and (2) having OMB work with Justice and certain other executive branch agencies to ensure that these entities report and/or disclose relevant criminal debt information in their financial statements and subject such information to audit.
In the interim, while the task force is being established, we are making the following specific recommendations to the entities involved in criminal debt collection. To help improve collections and stem the growth in reported uncollected criminal debt, we recommend that:

• the Secretary of the Treasury, through the Department of the Treasury's (Treasury) Financial Management Service, assist the Department of Justice and the courts in identifying the types of delinquent criminal debt that would be eligible for referral to Treasury for collection actions;

• the Attorney General and the Director of the AOUSC continue to work together to (1) reduce duplication of data entry for collections and disbursements, (2) require the Financial Litigation Units (FLUs) and the courts to periodically reconcile payment data recorded in their separate tracking systems, and (3) revise district guidance so that the FLUs can take a more proactive role in monitoring collection efforts of probation offices;

• the Attorney General
- establish policies and procedures that require Justice investigating case agents and prosecuting attorneys to share relevant financial information with the FLUs within an established time frame after an offender is sentenced,
- require FLUs to document correspondence with case agents and prosecuting attorneys in the FLU files, including whether and why efforts were not coordinated,
- require FLUs to use collectibility analyses to prioritize criminal debt collection efforts on debt types deemed through historical experience to be more collectible,
- reinforce current policies and procedures for entering cases into criminal debt tracking systems; filing liens; issuing demand letters, delinquency notices, and default notices; performing asset discovery work; using other enforcement techniques; and using event codes, including suspense codes,
- revise current policies for issuing demand letters, specifying when a demand letter should be sent and within what time frames,
- require FLUs to establish time frames for procedures related to criminal debt collection activities that do not currently have established time frames,
- require FLUs to document in their files instances where asset discovery work was not performed and why it was not performed,
- establish a policy for the FLUs to date stamp Judgments in a Criminal Case when they are received,
- revise interest and penalty policies so that interest and penalties are consistently assessed and reported,
- adequately measure criminal debt collection performance against established goals,
- revise the FLUs' databases to (1) capture needed information, such as the terms of the fine and restitution order and the status of the offender (expected release date from prison or probation), and (2) allow FLUs to allocate outstanding amounts between amounts likely to be collected and those that are not likely to be collected, and
- perform an analysis to assess whether the FLUs' human capital resources and training are adequate to effectively perform their collection activities; and

• the Director of the Administrative Office of the U.S. Courts
- ensure and monitor effective implementation of guidance for (1) developing pre-sentence reports, (2) establishing and monitoring offenders' compliance with installment schedules, (3) providing financial information reported in the pre-sentence report to the FLUs within an established time frame after sentencing, and (4) notifying FLUs within an established time frame before an offender is released from supervision,
- revise guidance to encourage the clerk's office to provide a copy of the Judgment in a Criminal Case to both the FLU and the prosecuting attorney within the established time frame,
- continue to work with the clerk's offices to process all pre-MVRA restitution so that the same entity in all districts is responsible for receiving and disbursing pre- and post-MVRA restitution,
- revise the language in the Judgment in a Criminal Case forms to clarify that payment terms established by judges are minimum payments and should not prohibit or delay collection efforts, and
- establish cost-effective thresholds for disbursements made by check to victims for restitution payments.

A draft of this report was provided to Justice, AOUSC, OMB, and Treasury for their review and comment. The following discussion highlights these agencies' most significant comments and our evaluation. Letters from Justice, AOUSC, and Treasury are reprinted in the appendixes. OMB provided oral comments, which are incorporated into this section. Justice and the courts also provided us with technical comments that we considered and addressed, where appropriate.

Justice and OMB agreed with our recommendation that they work together in a joint task force to develop a strategic plan to improve criminal debt collection processes and establish an effective coordination mechanism among all entities involved in the process. We recommended that this task force also address managing, accounting for, and reporting criminal debt, as well as developing an approach for assessing the collectibility of outstanding amounts so that a meaningful allowance can be reported and used for measuring debt collection performance. AOUSC and Treasury did not state whether they agreed or disagreed with the establishment of and their participation in this task force. We believe that the involvement of AOUSC and Treasury in the task force—given Treasury's central agency role of preparing the federal government's financial statements and implementing DCIA—is critical to the success of the task force. We recommended that one of the responsibilities of the task force be to address issues in accounting for and reporting criminal debt. As we note in the report, accounting standards require a receivable to be recognized once amounts due to the federal government are assessed, net of an allowance for uncollectible amounts. In addition, OMB guidance requires agencies to provide accounting and management information for effective stewardship, including resources entrusted to the government (e.g., nonfederal restitution). Treasury and OMB agreed that criminal debt should be reported on either Justice's or the courts' financial statements. The courts did not specifically address accounting and reporting issues, and Justice stated that it would not be proper to report criminal debt receivables on Justice's financial statements and that it believes administration and possession of the receivables is the responsibility of the courts.
Justice’s comments related to this issue, plus the lack of a response from AOUSC regarding their position on this issue, illustrate the need for cooperation and coordination in the criminal debt collection area. We also recommended that OMB work with Justice and other executive branch agencies, while the task force is being established, to report and/or disclose criminal debt information in the agencies’ financial statements and to subject such information to audit. OMB disagreed with this recommendation, stating that these reporting issues would be better handled by the task force. In light of Justice’s and OMB’s responses, we have deleted the recommendation for OMB to work with Justice and other executive branch agencies, while the task force is being established, and incorporated this recommendation into the task force recommendation. Justice generally agreed with the premise of the report and recognized the need for improvements in the criminal debt collection area. Justice also agreed with 10 of our 12 recommendations specifically addressed to it and partially agreed with the other 2. The AOUSC commented that most of our recommendations directed to it had already been implemented and that it is pursuing those related to working with Justice to refer eligible debt to Treasury and reduce duplication of the recordkeeping function. Treasury agreed with our recommendation specifically addressed to it regarding assisting Justice and the courts in identifying eligible delinquent debt for referral to Treasury. Justice and the AOUSC also commented on the methodology used to develop the report findings. In addition, the AOUSC commented on the focus of the report and on the lack of recognition given to actions the courts have taken to improve the criminal debt collection process. Justice and AOUSC, in commenting on the methodology we used to select and review cases, stated that closed cases (i.e., debts paid in full) should have been reviewed and that many of the cases reviewed had already been determined by Justice to be uncollectible debt and had been placed “in suspense.” We disagree. To address the requestor’s objectives of determining the key reasons for the growth in reported uncollected criminal debt and whether adequate processes exist to collect criminal debt, we selected cases that involved debt amounts outstanding as of September 30, 1999. Since we used debts outstanding as of September 30, 1999, many of which were more than 3 years old, ample time for collection activity had passed before we reviewed the cases, enabling us to assess the level of collection efforts performed. Reviewing closed cases or focusing on those cases that had not been placed in suspense by the FLUs would not have addressed why debts have not been collected nor would it have provided a sound basis for determining whether there are adequate processes for collecting criminal debt at the four districts visited. The amount of outstanding criminal debt continues to grow and has grown substantially over the past several years. However, the collection rate for fiscal years ending September 30, 1995 through 1999, has averaged about 7 percent. The report clearly points out that this is partly due to the uncontrollable factors discussed in chapter 2, but also to the lack of (1) adequate collection processes, (2) coordinated efforts to collect such debt, and (3) management oversight. 
Thus, to determine why outstanding amounts continue to increase, we selected and reviewed cases with the largest outstanding debt balances as of September 30, 1999, at the four districts with the largest amounts of outstanding debt, including debts in suspense as well as debts not in suspense. In addition, we reviewed a stratified randomly selected sample of 35 cases in each of the four districts. Selecting closed cases or cases that had not been placed in suspense would have provided anecdotal information about successful collections, but would not have addressed our objectives of determining the reasons for the growth and determining whether adequate processes exist, especially given the overall low collection rate. We also found that debts recorded as "in suspense" do not necessarily represent amounts that are uncollectible. For example, even if offenders were making monthly installment payments, the FLU must put either the entire debt balance in suspense or none of the balance in suspense. In addition, to determine why amounts had not been collected, regardless of whether they were in suspense, we assessed the collection efforts that had been performed and found that adequate steps, such as performing asset discovery work, were not always taken or documented prior to the FLU's placing such debts in suspense. Further, we found little evidence that prosecutors and probation officers had shared financial information with FLUs, thus potentially weakening the FLUs' ability to assess an offender's ability to pay (i.e., determine collectibility).

Justice also commented that many of the cases we reviewed involved incarcerated debtors and pre-date existing criminal debt policies. We point out in chapter 2 that incarceration may limit an offender's ability to pay while in prison; however, the high-dollar cases we reviewed typically involved debtors who had defrauded innocent victims of millions of dollars, resulting in the large restitution amounts being owed. Although an offender's earning potential may be limited while incarcerated, other debt collection techniques, such as identifying and pursuing assets, should be performed. Further, as we point out in the report, only about 20 percent of the stratified randomly selected cases involved offenders who were incarcerated at the time of our review. In chapter 3, we point out that much of the outstanding criminal debt as of September 30, 1999, involved cases that were more than 3 years old. However, many of the procedures that should be used are typical debt collection tools (e.g., filing liens, issuing demand letters) that should be applied to effectively collect criminal debt. We found that the FLUs were not always performing these procedures. In addition, we found that the FLUs we visited were not always following their prescribed procedures for reassessing an offender's ability to pay. Had these cases been revisited as required, any new policies could have been applied to the outstanding debts at that time.

Finally, AOUSC also questioned why cases under $5,000 were not reviewed. Our review focused on the largest-dollar cases ($14 million or greater) as well as a stratified randomly selected sample of cases between $5,000 and $14 million. We excluded those under $5,000, which, as shown in table 7 in appendix I, comprised only $8.9 million of the $5.6 billion of outstanding debt at the four districts visited, or less than 0.2 percent of the total dollar amount of outstanding debt at such districts, an amount that we deemed immaterial.
Justice agreed with 10 of the 12 recommendations specifically addressed to it. In addition, Justice partially agreed with the other 2 recommendations, which related to (1) requiring FLUs to use collectibility analyses to prioritize criminal debt collection efforts on debt types deemed through historical experience to be more collectible and (2) adequately measuring criminal debt collection performance against established goals. Justice indicated that it is already performing the recommended functions; however, we believe that the intent of these 2 recommendations should be further discussed so that additional improvements can be made in these areas. As to performing a collectibility analysis, Justice stated that it is already performing an analysis in accordance with its suspense policies. However, we found that the FLUs' suspense policies are not the same as performing an effective collectibility analysis, since debts may be placed in suspense without an adequate assessment of collectibility. Also, having historical collectibility analyses would allow the FLUs to prioritize new debts based on factors that indicate increased potential for collections. Justice also stated that it is already measuring criminal debt collection performance against established goals. However, it is our understanding that these efforts focus on reporting collection activity and analyzing collection practices, not on establishing goals and measuring performance against such goals, as we recommend. In addition, we believe that performing a collectibility analysis is an essential first step in adequately setting goals and measuring performance.

In addition to commenting on our methodology, the AOUSC commented that the effect of the Mandatory Victims Restitution Act of 1996 (MVRA) should have received greater attention in the report and that the report should give greater recognition to actions that the courts have already taken to improve criminal debt collection. We believe that we have provided sufficient balance in the report, as evidenced by an entire chapter devoted to uncontrollable factors, such as MVRA, that contribute to the growth in outstanding criminal debt. This chapter precedes the chapters devoted to procedural and coordination issues so that the reader is made aware of the significance of uncontrollable factors and the context in which the adherence to required policies and procedures and the coordination of efforts take place. In addition, mandatory restitution is listed as a factor in the transmittal letter at the beginning of this report and is discussed in many places throughout the report. The AOUSC also commented that more recognition should be given to actions it has taken to improve criminal debt collection. One such action is a comprehensive policy and procedural manual issued in September 2000, several months after our district visits, and not widely distributed until December 2000. In our report, we point out that if the AOUSC effectively implements its revised guidance related to (1) developing pre-sentence reports, (2) establishing and monitoring offenders' compliance with installment schedules, (3) providing financial information reported in the pre-sentence report to the FLUs within an established time frame after sentencing, and (4) notifying FLUs within an established time frame before an offender is released from supervision, then reported weaknesses in these areas are likely to be addressed.
Since the guidance was issued after our visits, we were not able to assess whether these policies have been effectively implemented and have therefore recommended that the AOUSC ensure that they are. Finally, AOUSC stated that it had implemented most of our recommendations; however, the letter did not specifically address each recommendation. While we recognize that the revised guidance should help improve collections, the policies must be effectively implemented before our recommendation is satisfied. The policy and procedural manual does not address our recommendations to (1) revise the language in the Judgment in a Criminal Case forms to clarify that payment terms established by judges are minimum payments and should not prohibit or delay collection efforts and (2) establish cost-effective thresholds for disbursements made by check to victims for restitution payments. In addition, we were not provided with details of additional actions taken by the courts to address such recommendations.

The collection of outstanding criminal debt has been a long-standing problem for the federal government. Since October 1985, as reported in the U.S. Attorney's statistical reports, the balance of outstanding criminal debt has grown from $260 million to more than $13 billion. Currently, the receipting of collections and recordkeeping for criminal debt are primarily the responsibility of the U.S. Courts, while the Department of Justice is responsible for collecting criminal debt. This report reviews (1) the key reasons for the growth in reported uncollected criminal debt; (2) whether adequate processes exist to collect criminal debt; and (3) what role, if any, the Office of Management and Budget (OMB) and the Department of the Treasury play in monitoring the government's collection of criminal debt. GAO found that four key factors have contributed to the significant growth of uncollected criminal debt. These factors are (1) the nature of the debt, in that it involves criminals who may be incarcerated or deported or who have minimal earning capacity; (2) the assessment of mandatory restitution regardless of the criminal's ability to pay, as required by the Mandatory Victims Restitution Act of 1996; (3) the Financial Litigation Units' interpretation of payment schedules set by judges as limiting collection activities; and (4) state laws that may limit the type of property that can be seized and the amount of wages that can be garnished. Financial Litigation Units do not always follow their policies and procedures to ensure that collection actions are prompt and adequate. The present management practices and processes do not ensure that offenders are deprived of their ill-gotten gains and that innocent victims are compensated for their losses to the fullest extent possible. Collection responsibilities continue to be divided between Justice and the courts, with neither having a central management oversight role. Neither OMB nor Treasury has identified the need to take an active oversight role in the collection of the growing balance of outstanding criminal debt.
Section 404 of the Clean Water Act is the principal federal program that provides regulatory protections for wetlands, which include bogs, swamps, and marshes. It generally prohibits the discharge of dredged or fill material into waters of the United States, which include certain wetlands, without a permit from the Corps. In addition, under the Rivers and Harbors Act of 1899, the construction, excavation, or deposition of materials in, over, or under any navigable water of the United States, or any work which would affect the course, location, condition, or capacity of those waters is prohibited without a permit from the Corps. The Corps receives thousands of permit applications each year from individuals, businesses, and public agencies seeking to build houses, golf courses, and infrastructure projects, or to perform other activities that could destroy or degrade wetlands, streams, and rivers. The Corps’ decisions to allow particular activities to occur—and if so, under what conditions—are to reflect the national concern for both the use and protection of these important water resources. The Corps must balance the impacts that proposed projects may have on many factors, from wetlands and wildlife to recreation and the economy, and authorize projects only if it finds the projects are in the public interest. The Corps’ regulatory program is highly decentralized. Most of the authority to issue permits has been delegated from the Secretary of the Army to the Chief of Engineers who, in turn, has delegated the authority to 38 Corps districts. Regulatory program management and administration is focused at the district office level, with policy oversight at higher levels— including the Corps’ 8 division offices and headquarters. To obtain a permit, project proponents, who may be the property owner or the owner’s authorized agent, such as a consultant, must submit an application to the Corps. The application details the proposed project, its purpose, location, and likely impacts to the aquatic environment. The Corps reviews the application to ensure it contains the minimum required information. The amount and type of information the Corps requests from the applicant may vary by the type of applicant and project, as well as the extent and functional values of the water resources that may be impacted. If the information submitted does not sufficiently identify the location or nature of the project, the Corps will request additional information. Once the permit application is complete, the permit review process begins. This process is governed by federal regulations and guidance documents from Corps headquarters. The regulations set the overall review framework by describing the factors the districts must consider when deciding whether to issue a permit and the general evaluation procedures. The guidance documents, including the Corps’ “Standard Operating Procedures” and “Regulatory Guidance Letters,” describe in more detail the steps the districts must follow to implement the regulations, including documentation that should be maintained in the administrative record to support the permit decisions. The regulations allow the districts to issue different types of permits—including nationwide permits, letters of permission, and standard permits—depending on the scope and likely impacts of the proposed projects. The specific steps the districts must follow to review a permit application depend on the type of permit the Corps uses to approve the proposed project. 
The Corps approves most projects using nationwide permits. These permits authorize classes of activities throughout the nation such as minor dredging, road crossings, and bank stabilization that the Corps has determined are likely to have minimal impacts to water and wetland resources. The Corps has developed nationwide permits for 49 different classes of activities. Each permit contains specific terms and conditions that a proposed project must meet to ensure its impacts will be minimal. The purpose of nationwide permits is to allow certain activities to be performed in an expeditious manner with limited, if any, delay or paperwork. Most prospective permittees may proceed with their activity without ever contacting the Corps; they simply review the terms of the different nationwide permits and self-certify that their activity falls within the restrictions of one or more of the permits. However, for some nationwide permits, a prospective permittee must notify the Corps if the impacts of their proposed activity exceed a certain threshold, for example filling in more than 1/10 acre of wetlands or other federally regulated waters. The Corps then reviews the project outlined in the permit application to determine whether it meets the terms and conditions of one or more of the classes of activities authorized by nationwide permits. If it does, the Corps notifies the applicant that the project is approved under certain nationwide permit(s). The Corps can combine two or more different nationwide permits to approve a single project. The Corps can also approve projects using regional general permits—which are similar to nationwide permits, but cover smaller geographic areas, such as a single state. In fiscal year 2006, the Corps issued about 67,000 nationwide and regional permit authorizations. For projects likely to have more substantial impacts on waters and wetlands, the Corps can issue standard permits. Given the potentially larger impacts of these projects, federal regulations and related guidance require a more extensive review for these permits. Specifically, the Corps must evaluate the proposed activity’s impact on a wide range of factors, from wetlands and fish habitat to public safety and energy needs. If the proposed project will adversely impact one or more of these factors, the Corps can place conditions on the issued permit, such as limiting work during particular times of the year to reduce impacts on wildlife or requiring the applicant to undertake mitigation activities to compensate for wetlands they damage or degrade. The Corps may issue a permit only if it concludes, after carefully weighing the project’s costs and benefits, that the project is not contrary to the public interest. As part of this public interest review, the Corps must notify the public of the proposed project, request comments, and incorporate any comments they receive into their review of the overall public value of the project. In addition, the Corps must determine that the proposed project will (1) not adversely impact endangered or threatened species, (2) not discharge pollutants into federally regulated waters that violate state water quality standards, and (3) comply with guidelines developed by the Environmental Protection Agency to protect wetlands and other federally regulated waters. In making these determinations, the Corps often coordinates with other federal and state agencies, such as the U.S. Fish and Wildlife Service. In fiscal year 2006, the Corps issued about 4,000 standard permits. 
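To make the permit-type screening described above concrete, the following sketch models, in simplified form, how a proposed activity's likely impacts determine whether it can proceed under a nationwide permit or instead requires a standard permit review. It is purely illustrative and should not be read as the Corps' actual procedure: the 1/10-acre value is only the example notification threshold cited in the text, and real nationwide permits carry many additional terms and conditions that vary by permit type.

```python
# Illustrative sketch only: a simplified model of the permit screening
# described above. The 0.10-acre value is the example notification
# threshold cited in the text; actual thresholds and conditions vary.

NOTIFICATION_THRESHOLD_ACRES = 0.10  # example threshold from the text

def screen_activity(fill_acres: float, meets_nationwide_terms: bool) -> str:
    """Give a rough indication of how a proposed activity would proceed."""
    if not meets_nationwide_terms:
        # The activity falls outside every nationwide permit's terms, so a
        # standard permit review (with public interest balancing) is needed.
        return "standard permit review required"
    if fill_acres > NOTIFICATION_THRESHOLD_ACRES:
        # Impacts exceed the notification threshold, so the permittee
        # must notify the Corps and wait for verification.
        return "preconstruction notification to the Corps required"
    # Minimal impacts: the permittee may self-certify and proceed.
    return "self-certification under the nationwide permit"

print(screen_activity(0.05, True))   # self-certification
print(screen_activity(0.25, True))   # notification required
print(screen_activity(1.80, False))  # standard permit review required
```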
The Corps may use letters of permission in lieu of a standard permit when it determines that the proposed work would be minor, would not have significant individual or cumulative impacts on the environment, and is not expected to be controversial. In these situations, an abbreviated standard permit review process, involving coordination with other federal and state regulatory agencies and adjacent property owners—but not the general public—is used to expedite the permit's approval. Some concerns have been expressed that the Corps' permitting process takes too long and has significantly delayed public works projects, such as constructing and repairing ports. For example, the Pacific Northwest Waterways Association, which represents ports and businesses, believes that delays in permit processing in the Northwest have put U.S. ports at a competitive disadvantage to ports in Canada, where, it argues, permit requirements are not as strict. In 2000, the Congress included a provision in the Water Resources and Development Act to expedite permit processing for nonfederal public agencies. Specifically, section 214 of the act authorizes the Secretary of the Army, after public notice, to accept and expend funds contributed by nonfederal public entities, such as cities and port authorities, to expedite the evaluation of permit applications under the jurisdiction of the Department of the Army. The act also requires the Secretary to ensure that the funds accepted will not impact impartial decision making with respect to permits. Originally set to expire at the end of fiscal year 2003, this authority has been extended four times and is currently set to expire in December 2008. The authority to accept section 214 funds has been delegated to the Corps' 38 districts that have regulatory responsibilities. In 2001, and again in 2004, Corps headquarters issued guidance that described the procedures that Corps districts must follow to accept and use section 214 funds. Specifically, the guidance directs any district accepting such funds to (1) issue for public comment notices announcing the Corps' intent to accept funds from a nonfederal public entity, including the reasons for accepting the funds and the activities on which the funds will be expended; (2) after review of comments, notify the public of the District Commander's decision to accept these funds; and (3) establish separate accounts to track the acceptance and expenditure of these funds. The guidance also calls for strict upward reporting to ensure that the section 214 funds will be used for their intended purpose. Specifically, the Corps' divisions are to submit annual reports to headquarters that (1) document the acceptance and expenditure of funds, along with any public notices, (2) assess how the use of the funds expedited the permit review process, and (3) highlight any issues regarding impartial decision making. The guidance also specifies two steps that Corps districts must take to ensure that the permit decisions they make using section 214 funds are impartial and transparent. In addition to following the permit review process described above, the districts must (1) ensure that a Corps official senior to the decision maker reviews the final permit decision before issuing the permit and (2) post permit decisions to the district's Web site. Finally, the guidance requires that section 214 funds be expended only to expedite the final permit decision; funds cannot be expended on the higher level review.
Four of the 38 Corps districts that have regulatory responsibilities had entered into agreements with 11 nonfederal public entities to receive section 214 funds and had evaluated 187 projects using these funds as of August 2006. Almost all of the section 214 applicants were city or county departments, port authorities, or regional water authorities. However, two applicants were private companies that were allowed to submit permit applications for expedited review under a nonfederal public entity’s agreement with the Corps. In addition, of the 34 Corps districts with regulatory responsibilities that had not used section 214 authority to evaluate permit applications, 7 had entered into agreements, or had begun negotiations to enter into such agreements, with nonfederal public entities at the time of our review. An additional 19 districts told us that they would consider using the authority if it were made permanent. Since 2000 when Congress gave the Corps the authority to accept funds from nonfederal public entities, four Corps districts—Jacksonville, Fla.; Los Angeles and Sacramento, Calif.; and Seattle, Wash.—have entered into agreements with 11 nonfederal public entities and processed permit applications using funds received from these entities. These agreements generally specify the duration of the agreement, the amount of funds to be received, and how the funds are to be used. Although the agreements are generally between the Corps and one entity, such as a city or county, various departments within that entity may submit permit applications for expedited review under the agreements. The Seattle District was the first to enter into an agreement with a nonfederal public entity and has entered into more agreements than the other three districts. Table 1 shows the nonfederal public entities with whom the four districts have agreements and the effective dates of the agreements. Using the funds received under the section 214 authority, the four districts evaluated and approved 187 permit applications, as of August 2006. As table 2 shows, 82 percent of these applications were for projects seeking approval under a nationwide permit. The districts approved the remaining applications with standard permits, regional permits, or letters of permission. The districts did not deny any permit applications processed using section 214 funds. According to district officials, the Corps rarely denies any permit applications, regardless of the source of funding used to evaluate the applications. Instead, the Corps frequently requires applicants to redesign their projects to reduce impacts to the aquatic environment before receiving permit approvals. The types of projects for which permits were sought under the section 214 authority varied by Corps district. For example, nearly half the projects evaluated and approved by the Seattle District were for ecological restoration and pier and port repair, while the projects evaluated and approved by the Los Angeles District were mainly for maintaining sewer lines. Table 3 shows the number of projects that fell into each category, and table 4 provides examples of the different types of projects evaluated by the Corps districts using section 214 funds. Under the section 214 authority, the Corps received applications from 31 different applicants. Most of the permit applications were from city or county departments, port authorities, or regional water authorities. 
In general, the applicants were either the entities, or departments within the entities, that entered into the section 214 agreement with the Corps. For example, San Diego's Department of Engineering and Capital Projects and Metropolitan Waste Water Department both submitted applications for expedited review under the city's agreement with the Corps' Los Angeles District. However, in the Corps' Sacramento District, we found that two applications were submitted by private companies that were not part of the nonfederal public entities with which the Corps had an agreement. One project was for a large, multiuse development that would fill 1.8 acres of wetlands. The other project was to fill in 0.46 acres of streams as part of a larger ecological restoration effort to compensate for wetlands and other waters that may be modified or destroyed by other construction projects. According to Corps officials, in each case, a nonfederal public entity requested that the Corps process the private company's application under the section 214 agreement. The legislation does not expressly prohibit private companies from requesting permit approval under a nonfederal public entity's section 214 agreement with the Corps. Thirty-four Corps districts had not yet used the section 214 authority to evaluate permit applications at the time of our review, but many are considering doing so in the future. Officials from 28 Corps districts that had not entered into section 214 agreements cited two primary reasons for not yet doing so: (1) nonfederal public entities in their districts had not expressed an interest in entering into such agreements and (2) the districts were concerned that the section 214 authority was not permanent and could expire in the future. In addition, district officials identified two other disadvantages of using the section 214 authority. First, officials were concerned about the public's perception of the objectivity of permit decisions made using section 214 funds. Second, officials were concerned that because the authority was not permanent and they could not guarantee a prospective employee's tenure, it would be difficult for them to hire and retain qualified staff to process these types of applications. Despite these concerns, many of the districts are considering using the section 214 authority in the future. Seven districts—Huntington, W.Va.; Louisville, Ky.; Mobile, Ala.; Omaha, Neb.; Portland, Ore.; San Francisco, Calif.; and Savannah, Ga.—had already entered into agreements or had begun negotiations with nonfederal public entities but had not completed the evaluation of any permit applications at the time of our review. Nineteen districts told us that they would consider entering into agreements with nonfederal public entities if the section 214 authority were made permanent. In addition, two of the four districts included in our review that had used section 214 funds to review permit applications have expanded their use of the authority. The Los Angeles District has entered into three new agreements—with San Bernardino County in September 2006 and with the Port of Los Angeles and the San Diego Water Authority in October 2006. The Sacramento District entered into two new agreements—one with the City of Roseville in September 2006 and one with the City of Rancho Cordova in October 2006.
From December 2001 through September 2006, nonfederal public entities provided over $2 million in section 214 funds to the four Corps districts with which they had section 214 agreements. The districts hired additional project managers to process permit applications and primarily used the section 214 funds received to cover personnel-related costs, such as salaries and benefits. As figure 1 shows, of the four districts that received section 214 funds from nonfederal public entities from December 2001 through September 2006, the Sacramento District received the most, $932,000, and the Jacksonville District the least, $225,324. Table 5 shows the amounts provided by each of the 13 nonfederal public entities. Each of the four Corps districts that received section 214 funds was able to increase its regulatory staff by either (1) combining the section 214 funds with appropriated funds to hire new project managers to process section 214 applications or (2) paying existing employees to process section 214 applications and using the offsets in regular program expenditures to hire new project managers to process non-section 214 applications. Although the districts initially thought that project managers would work full-time on section 214 permits, this has not happened. None of the project managers added using section 214 funds worked full-time on processing section 214 permits; instead, they split their time between evaluating section 214 permits and permits for other applicants. Table 6 shows the number of additional project managers added using section 214 funds and the full-time-equivalent staff devoted to processing section 214 permits in each of the four districts. Through September 2006, the districts used $1.398 million of the section 214 funds that they received for costs associated with the project managers assigned to process section 214 permits. Specifically, the funding was used for the following purposes: $858,000, or 61.4 percent, was used to pay for personnel costs, including the salaries and cost of benefits, for project managers processing section 214 permits; $522,000, or 37.3 percent, was used to cover overhead costs, such as office space, utilities, and administrative support associated with the section 214 authority; and $18,000, or 1.3 percent, was used to pay for equipment, transportation costs associated with site visits, and legal advice from the Corps for processing applications under the section 214 authority. Permit processing times have increased in some districts and decreased in others since the Corps began using section 214 funds. Although officials from both the Corps and nonfederal public entities said they believe the use of the section 214 authority has been effective in expediting the permit applications of nonfederal public entities, other factors may have also impacted processing times. Nonetheless, officials from both the Corps and nonfederal entities believe the authority provides other significant benefits. In addition, Corps officials identified some challenges in implementing the section 214 authority in their districts. Although a main goal of the Corps in using the section 214 authority is to expedite permit processing for section 214 applicants, the processing times for these applicants have not consistently decreased. For example, the median processing times for nationwide permits decreased by 37 percent in the Sacramento District, from 41 to 26 days, but increased by 21 percent in the Seattle District, from 76 to 92 days.
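As a cross-check on the figures cited above, the short sketch below reproduces the expenditure-share and median processing-time arithmetic. It assumes nothing beyond the totals reported in this section; the dollar amounts and day counts are taken directly from the text.

```python
# Reproduces the arithmetic behind the figures cited above, using only
# the numbers reported in this section.

expenditures = {                              # dollars, through September 2006
    "personnel": 858_000,
    "overhead": 522_000,
    "equipment, transportation, and legal advice": 18_000,
}
total = sum(expenditures.values())            # $1.398 million
for category, amount in expenditures.items():
    print(f"{category}: {amount / total:.1%}")    # 61.4%, 37.3%, 1.3%

def percent_change(before_days: float, after_days: float) -> float:
    """Percent change in median processing time after section 214 use began."""
    return (after_days - before_days) / before_days * 100

print(round(percent_change(41, 26)))   # Sacramento nationwide permits: -37
print(round(percent_change(76, 92)))   # Seattle nationwide permits: 21
```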
Similarly, another Corps goal is to ensure that the section 214 program does not delay permit processing for non-section 214 applicants; however, the processing times for these applicants have not consistently remained the same. For example, the median times for permits processed without section 214 funds remained constant in the Los Angeles District but increased by 29 percent in the Sacramento District, from 41 days to 53 days. Figure 2 shows changes in nationwide permit processing times for the three districts that had data sufficiently reliable for this analysis: Los Angeles, Sacramento, and Seattle. The data in the Jacksonville District were not reliable because the district experienced difficulties entering data during its participation in a pilot project for a new permit database. We did not conduct a similar analysis for standard permits because the districts had not processed enough of these permits using section 214 funds to calculate a reliable estimate of processing times under the authority. We believe that several factors may have influenced the permit processing times and masked the effect, if any, that the use of the section 214 authority had on them. Specifically: Seattle District officials told us that the applications from section 214 permittees were considerably more complex than typical applications. For example, these officials said the section 214 applicants frequently sought permission for activities in or near Superfund sites, which required the Corps to consult with the Environmental Protection Agency before issuing the permits. According to Seattle officials, these consultations add several weeks or months to the typical permit review process. These officials said the extra time the Corps needed to process these applications because of their complexity exceeded the time savings that resulted from the section 214 authority. As a result, they said, the net processing times for these applicants increased, obscuring the benefit of the authority to section 214 applicants. Seattle District officials also told us that there are many threatened and endangered species within their district and that they must consult with the Fish and Wildlife Service (FWS) or the National Marine Fisheries Service (NMFS) for almost every permit the district issues to ensure that the proposed activity will not harm these species. According to these officials, the consultation process can add several months to the overall permit review process. In 2000, the district entered into agreements with FWS and NMFS to streamline the consultation process for endangered species. Seattle officials said that this streamlined consultation is the main reason the permit processing times for non-section 214 applications decreased—as compared with the median processing time for the 3 years prior to the section 214 program. Therefore, while the decrease for these applicants appears consistent with the goal of ensuring that the section 214 authority does not introduce delays for non-section 214 applicants, it is not necessarily proof that the district met this goal. The section 214 authority may have introduced delays, but these delays could have been more than offset by the streamlined consultation process that began close to when the section 214 authority became available. The Los Angeles District had processed only 11 permit applications using section 214 funds at the time of our review. 
These few permits may be outliers and may not accurately represent what processing times will be in the long term for permits processed using section 214 funds. The impact the section 214 authority has, if any, on processing times may become more apparent as the district processes a larger number of permit applications using section 214 funds. Officials from the Corps and from nonfederal public entities that entered into section 214 agreements with the Corps told us that they believe the use of the section 214 authority has significantly expedited processing of permits for these applicants. For example, Sacramento officials said the project managers dedicated to working on section 214 applications typically work on one-half to one-third as many permits, at any given time, as the other project managers. As a result, they have more time to review section 214 permit applications and determine more quickly whether they are complete. These officials said that, by contrast, it can take several weeks for other project managers to review permit applications for completeness. Officials from the Corps and nonfederal public entities that we spoke with also cited other benefits of the section 214 authority, including the following: Project prioritization. Nonfederal public entities that enter into section 214 agreements with the Corps may specify which permit applications they want the Corps to complete first and which projects can wait. Officials from participating nonfederal public entities told us that being able to set priorities for projects in this manner has allowed them to receive permits for their most important projects quickly. For example, officials from Elk Grove, Calif., said that, in 2004, city employees discovered that an old culvert was at risk of collapsing during a heavy rainstorm. City officials told the Corps' Sacramento District that repairing the culvert was a top priority for them and, as a result, were able to get the permit needed to complete the repairs before the next large storm. Enhanced communication. Officials from both the Corps and nonfederal public entities said that the section 214 authority has helped improve communication between them. For example, Corps officials in the Seattle District said that the section 214 funding has enabled project managers to meet with the applicants before they submit their applications. During these preapplication meetings, the Corps officials and the applicants discuss ways to design the project to avoid impacting important resources, which increases the likelihood of receiving a permit. Officials from participating entities said that these conversations have reduced the overall costs of completing their projects by enabling them to submit initial project designs that are more likely to receive approval, thereby avoiding costly revisions and project delays. Increased staffing. Corps district officials said the section 214 funds have provided a valuable way for them to augment their regulatory staff, particularly given the large permit workloads these districts face. As we discussed earlier, each district has used the section 214 funds received from the nonfederal public entities to add between one and four project managers to its regulatory staff. Corps officials have faced the following challenges when implementing the section 214 authority: Insufficient permit workloads.
Officials in each of the four districts said that, when they first started using the section 214 authority, they expected each participating entity to submit enough applications to keep one project manager busy full-time. However, this has not happened in any of the four districts. Insufficient section 214 permit workloads have caused particular problems in two districts. In the Seattle District, the permit workloads from some entities have been so small that the revenues generated from the agreements have not justified the costs of negotiating and establishing the agreements to accept section 214 funds. As a result, the Seattle District eventually decided not to renew agreements with some entities and is considering not entering into any new agreements unless each agreement can sustain at least half a full-time-equivalent staff member's worth of work. Similarly, in the Sacramento District, an insufficient section 214 permit workload has meant that section 214 project managers have had to work on some non-section 214 permit applications to maintain a full workload. According to officials in the Sacramento District, this arrangement has meant that non-section 214 applications experienced some processing delays because project managers stopped working on them when higher-priority section 214 applications came in. To avoid the need to make choices between section 214 and other applications, the Sacramento District is currently considering assigning all section 214 applications to a small pool of project managers who will work exclusively on section 214 permit applications. Delays in replacing project managers. Officials in the Sacramento District said that they were unable to hire, as quickly as they had anticipated, new project managers to replace the ones they had transferred to work primarily on section 214 applications. This lag in hiring project managers delayed permit processing for some non-section 214 applications because it meant that fewer staff hours could be devoted to processing these applications. According to these officials, one main reason for the hiring lag was that the district did not begin looking for new employees until after it had signed the section 214 agreements and transferred experienced project managers into the new positions. The district is considering adjusting its hiring policy to transfer experienced project managers into the new section 214 positions only after it has hired employees to fill the non-section 214 positions. Decrease in project manager expertise. The Corps districts that received section 214 funds typically used new staff to replace the more experienced project managers who were transferred to work primarily on section 214 permit applications. Sacramento District officials said that this practice has decreased the overall level of expertise devoted to processing non-section 214 permit applications, which has both delayed processing for some of these applications and overburdened the experienced project managers who have remained to process non-section 214 applications. To help increase the skill level of the new staff, the district now requires experienced project managers to mentor new employees. Officials from the Seattle and Los Angeles districts said that, while their section 214 application workload is not yet large enough to significantly deplete the expertise devoted to non-section 214 applications, this could become a problem if the number of applications and agreements continues to rise.
Two of the three districts that used section 214 funds to process standard permits followed the permit review process, but the third district did not follow all the required steps. Specifically, the Sacramento District did not comply with the Corps' process that requires the districts to sufficiently demonstrate why the projects they approve are in the public interest. The four districts also used section 214 funds to approve projects using nationwide permits; however, we could confirm only that the Jacksonville and Seattle districts had generally followed the review process for these types of projects. We could not make this determination for the Los Angeles and Sacramento districts because their files contained limited permit documentation. Detailed results of our file reviews are presented in appendix II. The Jacksonville and Seattle districts followed all six key steps in the permit review process for standard permit applications that they processed using section 214 funds. Specifically, these districts (1) ensured that the project proposed in the permit application would not harm threatened or endangered species; (2) analyzed whether alternative designs that would have fewer impacts to aquatic resources were feasible; (3) ensured the project would not violate state water quality standards; (4) evaluated likely impacts to historic properties; (5) evaluated likely impacts to a wide range of other factors, from recreation to energy needs; and (6) balanced the project's benefits against its detriments, when applicable, and concluded that the project would not be contrary to the public interest. In contrast, the Sacramento District followed five of the six steps but did not follow the last step for the standard permits it processed using section 214 funds. As a result, for the projects for which it approved standard permits, the Sacramento District did not show that the adverse effects of the projects were outweighed by their positive impacts and did not conclude that the projects were in the public interest. Officials in the Sacramento District recognize that they did not complete the sixth step of the review process, as required, and said that this happened because the section 214 project managers who processed these applications were relatively new to the district and were not fully aware of the requirement. These officials also said that a major reason why the project managers were unaware of the requirement is that, while the Corps guidance describes documentation requirements for standard permits in general terms, Corps headquarters has not provided explicit guidance that would clearly show project managers how to document their decisions. Corps headquarters recognizes that more explicit guidance would help ensure consistency across its districts and is in the process of developing a template for a standard decision document for all districts to use. However, since the template is not yet complete, we could not assess whether it will provide sufficient detail to prevent the types of lapses we observed in the Sacramento District from occurring again. For the projects that the four districts reviewed and approved using section 214 funds under the Corps' nationwide permits, we found that the Jacksonville and Seattle districts generally followed the steps that are key to the review process.
Specifically, for their nationwide permit decisions, the Jacksonville and Seattle districts (1) evaluated 100 percent and 79 percent, respectively, of the proposed projects to ensure they met the terms and conditions of the relevant nationwide permit(s) and (2) ensured that 90 percent and 96 percent, respectively, of the projects they evaluated would not harm endangered species. For the remaining permit applications, there was not enough documentation in the permit files for us to determine whether Jacksonville and Seattle district officials had complied with these two requirements. In contrast, we were unable to determine the extent to which the Los Angeles and Sacramento districts had evaluated projects for compliance with the terms and conditions of the nationwide permit(s) because only 3 percent of their files contained enough evidence. In addition, 31 percent of the permit files in the Sacramento District did not contain evidence that the district had considered the impacts of the proposed projects on endangered species. In the Los Angeles District, however, most files did contain evidence that officials had considered the impacts of the proposed projects on endangered species. We found that the districts vary in their level of documentation for projects approved using nationwide permits because, unlike for standard permits, Corps headquarters has not developed uniform documentation standards for the districts to follow when making these decisions. In the absence of such guidance, the Seattle and Jacksonville districts have developed local standards that are more stringent than those in Los Angeles or Sacramento. Corps headquarters officials recognize that consistent documentation is needed to ensure permit decisions are both transparent and legally defensible and have begun to develop Corps-wide standards. However, because the Corps has not completed these standards, we could not determine to what extent they will require districts to fully document the basis for their determinations that projects meet the terms and conditions of the nationwide permit(s) used to approve them, or whether these requirements will alleviate the concerns we identified. The districts were uneven in their adherence to the additional requirements established by the Corps to ensure that permit decisions made using section 214 funds were impartial and transparent. Two of the districts—Sacramento and Seattle—more often met both requirements, while the Jacksonville and Los Angeles districts rarely did. Corps officials cited several reasons for the variance in their adherence to the additional requirements. In addition to following the established permit review processes discussed in the prior section of this report, Corps districts must meet two other requirements designed to ensure the impartiality and transparency of decisions made using section 214 funds. First, a Corps official senior to the decision maker must review the final permit decision (higher level review), and second, the district must post its final decision to the district's Web site. However, as shown in table 7, our review of the applications approved by the four districts that used section 214 funds indicates significant variations in the extent to which each district complied with these two additional requirements.
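The percentages above come from a file-by-file review of whether each permit file contained evidence that a given step or requirement was met. As a simple illustration of how such rates are tallied, the sketch below computes evidence rates from a few hypothetical file-review records; the field names and values are invented for illustration and do not reflect the actual review results.

```python
# Minimal sketch of how per-file review results like those above can be
# tallied into compliance rates. The records are hypothetical; the actual
# results came from GAO's data collection instrument, not this code.

# Each record flags whether the permit file contained evidence of a step.
file_reviews = [
    {"terms_checked": True,  "species_checked": True,  "higher_review": False, "web_posted": True},
    {"terms_checked": True,  "species_checked": False, "higher_review": False, "web_posted": False},
    {"terms_checked": False, "species_checked": True,  "higher_review": True,  "web_posted": True},
]

def evidence_rate(records, step: str) -> float:
    """Share of reviewed files containing evidence that a step was completed."""
    return sum(r[step] for r in records) / len(records)

for step in ("terms_checked", "species_checked", "higher_review", "web_posted"):
    print(f"{step}: {evidence_rate(file_reviews, step):.0%}")
```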
According to district officials, the following factors contributed to the variation in the extent to which the districts adhered to the additional Corps requirements for ensuring that section 214 permit decisions are impartial. Different interpretations of the applicability of the requirements. Officials in the Los Angeles and Jacksonville districts told us that they had believed the additional requirements did not apply to projects approved using nationwide permits, which constitute the bulk of the permit applications processed in their districts using section 214 funds. According to these officials, they had believed the requirements applied only to new permit decisions and, since approvals under existing nationwide permits did not count as new permit decisions, the requirements did not apply to such approvals. Corps headquarters officials and legal counsel do not agree with the districts' interpretation and said that the additional requirements apply to all permit types, including approvals using nationwide permits. Los Angeles District officials told us that they have subsequently changed their position and plan to apply the requirements to nationwide permits in the future, but Jacksonville District officials have not changed their position. Varying awareness of the requirements. The project manager responsible for processing section 214 permit applications in the Los Angeles District told us that he was unaware of the higher level review requirement and, therefore, did not adhere to it. Similarly, the project manager responsible for section 214 applications in the Jacksonville District told us she was unaware of the Web-posting requirement. In contrast, project managers in the other two districts were aware of both requirements and, as a result, complied with them more often. Lack of specificity as to what higher level review entails. Corps headquarters guidance does not specify which documents the senior level officials must review and sign to meet the higher level review requirement. While Seattle District officials thought that it was sufficient for senior officials to review and sign the documents supporting standard permit decisions, Corps headquarters officials told us that it is the final permit document itself—not the supporting documents—that must be reviewed and signed. We noted during our review that, while many standard permits in the Seattle District did not receive higher level review in accordance with headquarters requirements, their decision documents frequently were reviewed by a Corps official senior to the official who would typically review those documents. According to Seattle District officials, it is more important for a reviewer to review the decision documents than the issued permit, since the decision documents provide the rationale for the project manager's decision to issue the permit, while the permit itself is largely pro forma. Lack of compliance with annual reporting requirement. Corps guidance calls for annual reports on the districts' implementation of the section 214 authority to be submitted to headquarters. However, according to the head of the Corps' Regulatory Branch, no reports have been submitted since the section 214 authority was first used in 2001. If this guidance had been followed, we believe that Corps headquarters might have been alerted to the fact that some districts were not fully meeting the additional requirements and could have taken the actions needed to resolve this lack of compliance.
When the Congress enacted the section 214 authority in 2000, it was to help expedite the permit review process for nonfederal public entities. Since that time, a handful of nonfederal public entities have taken advantage of the authority and believe that it has been beneficial to them. These nonfederal entities had entered into agreements with four Corps districts that had actually received and used section 214 funds to process permit applications at the time of our review. However, the experiences of these four districts indicate that implementation of the section 214 authority has been uneven. We identified a number of areas where improved oversight is needed to ensure that decisions made using the authority adhere to established permit processing regulations and guidance and are also impartial and transparent. Specifically, we found evidence to suggest that the district officials do not know what guidance they are to follow, do not know how to document the decisions that they make, and do not know which special requirements apply to the permit applications that they review under the section 214 authority. Because it appears that there is significant potential for many more Corps districts to begin accepting funds under the section 214 authority, and many are already poised to do so, we believe that it is imperative for Corps headquarters to address the concerns that have already been identified at the four districts that have used the section 214 authority to process permit applications. To ensure that the permits processed under the section 214 authority comply with federal regulations and guidance, we are recommending that the Secretary of the Army direct the Corps of Engineers to take the following four actions: clarify the guidance that the districts must follow when evaluating permit applications under the section 214 authority, clarify the documentation that district officials must include in project files to justify and support their decisions, provide training to district officials to ensure that they are aware of the requirements that apply to permits processed under the section 214 authority, and develop an effective oversight approach that will ensure that the districts are following all the appropriate requirements when evaluating projects under the section 214 authority. We provided a draft of this report to the Secretary of the Department of Defense for review and comment. The Department of Defense generally concurred with the report’s recommendations and described actions that it is implementing to address them. In its written comments, the department stated that by December 2007 the Corps plans to issue revised guidance for the districts to follow when using section 214 funds that clarifies, among other things, the types of permit decisions that require higher level review and what documents must be reviewed and signed by an official senior to the decision maker. The department also indicated that the Corps is developing a national template that standardizes the documentation required to support standard permit decisions and will develop similar documentation requirements for projects approved with nationwide permits. 
In addition, the department noted that project managers that evaluate permit applications using section 214 funds and their management will be required to attend annual briefings on Corps guidance for implementing section 214 and that the Corps will conduct annual reviews that will focus on the districts’ compliance with the guidance and documentation protocols. The Department of Defense’s written comments are presented in appendix III. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution for 30 days from the report date. At that time, we will send copies of this report to the Secretary of Defense; the Secretary of the Army; the Chief of Engineers and Commander, U.S. Army Corps of Engineers; and interested congressional committees. We will also make copies available to others upon request. In addition, the report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In 2000, the Congress, through section 214 of the Water Resources and Development Act, authorized the Secretary of the Army, after providing public notice, to accept and expend funds from nonfederal public entities to expedite the evaluation of permit applications that fall under the jurisdiction of the Department of the Army. The act also requires the Secretary to ensure that the funds accepted will not impact impartial decision making with respect to permit approvals. This responsibility has been delegated by the Department of the Army to the Corps of Engineers (Corps). In this context, we were asked to review the (1) extent to which Corps districts have entered into agreements with nonfederal public entities to receive section 214 funds since 2001 and how many permit applications the Corps has evaluated using these funds, (2) amount of section 214 funds the Corps has received and how it has used these funds, (3) extent to which permit processing times have changed since the Corps began using section 214 funds, (4) extent to which the Corps districts have followed the basic permit review processes when evaluating applications using section 214 funds, and (5) extent to which the districts have met the additional requirements for ensuring that permit decisions made using the section 214 funds are impartial and transparent. To determine the extent to which Corps districts entered into section 214 agreements with nonfederal public entities, we contacted each of the 38 Corps districts responsible for issuing regulatory permits under section 404 of the Clean Water Act to identify those districts that had entered into such agreements since the authority was available. We visited each of the four districts that had entered into such agreements and evaluated the permit applications that had been processed under these agreements as of August 2006. These districts were Jacksonville, Fla.; Los Angeles and Sacramento, Calif.; and Seattle, Wash. At each district, we reviewed the legal agreements between the Corps and nonfederal entities to identify the entities that had entered into such agreements and the date the agreements went into effect. 
To determine how many permit applications the districts evaluated using section 214 funds, we obtained and reviewed the Corps' files for all but one of the permit applications that the districts evaluated using section 214 funds. We did not review one of the applications because the Seattle District was unable to locate the file. Table 8 shows the number of permit files we reviewed at each of the four districts. We used a data collection instrument (DCI) to extract key pieces of information from each permit file, including the name of the applicant, the type of project seeking approval, and the type of permit the Corps used to authorize the proposed project. An independent analyst verified the accuracy of the data we entered for each permit file. We also interviewed Corps officials in each of the four districts, as well as representatives from at least one of the nonfederal entities participating in the section 214 program in each district, to gain their perspectives on the benefits and any challenges of implementing the section 214 authority. To determine whether the use of the section 214 authority may expand in the future, we surveyed the 34 districts that had not used section 214 funds to evaluate permit applications to determine their reasons for not doing so and their plans, if any, for using such funds in the future; 28 districts (82 percent) responded to our survey. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted can introduce unwanted variability into the survey results. To minimize nonsampling error, an independent survey specialist reviewed the survey for clarity and independence before we sent it out. We also pretested the survey with Corps officials in two districts. During these pretests, we asked each official to complete the survey as they would when they received it. We then solicited feedback to ensure that the questions were clear and unambiguous and that the survey was independent and unbiased. Based on pretest feedback, we made changes to the survey, as appropriate. To determine the amount of section 214 funding the Corps has received and how it has used these funds, we obtained and analyzed financial data covering fiscal years 2001 through 2006 from each of the four participating districts. The data, which came from the Corps of Engineers' Financial Management System (CEFMS), specify the amounts of section 214 funding the districts had received since the program began and the major categories of expenditures. District officials told us the number of full-time-equivalent staff the districts procured using these funds. A full-time-equivalent staff generally consists of one or more employed individuals who collectively complete 2,080 work hours in a given year. Therefore, either one full-time employee, or two half-time employees, equal one full-time-equivalent staff. We reviewed the financial data related to the receipt and expenditure of section 214 funds and determined they were sufficiently reliable for our purposes. We did not, however, audit the Corps' financial statements to verify that the Corps expended funds as recorded in the CEFMS reports. To determine the extent to which permit processing times have changed since the Corps began using section 214 funds, we obtained permit processing data from three of the four participating districts: Los Angeles, Sacramento, and Seattle.
The data in these districts, which came from the Corps' Regulatory Analysis and Management System database, were sufficiently reliable for our purposes. We did not analyze processing times in the Jacksonville District because officials in this district told us their processing time data were not reliable for permits issued in 2002 and 2003. The processing time data that we used from the three districts that had reliable data included permits that each district issued from the 3 years prior to the district's first use of section 214 funds through the time of our review. We calculated median permit processing times before and after the districts began using section 214 funds. We chose the median over the mean because the median is more resistant to the effects of outliers; for example, a few permits that took a relatively longer amount of time to process will impact the mean more than the median. We assigned a permit to the before or after category by comparing the date a Corps district issued a permit with the date it first began using the section 214 authority—permits that a district issued after it began using the authority were assigned to the after category. We also analyzed an alternative approach: assigning before or after based on the date that the Corps began processing the permit. However, we determined this method made the post-section 214 processing times appear artificially low because it excluded permits that were still ongoing at the time of our review. Therefore, we did not present the results from this alternative analysis in the report. Our results show the processing time for nationwide permits, which constituted 82 percent of the total number of permits the districts processed using section 214 funds. We did not analyze processing times for other types of permits (e.g., standard or letters of permission) since the Corps had not processed enough permits of these types for us to calculate accurate processing times under the section 214 authority. We defined processing times to be the number of days between when the Corps first received a permit application and when it issued a final permit. This definition is different from the Corps' because the Corps defines processing time as the number of days between when it receives a complete application and when it issues a permit. We chose a definition that would allow us to maximize our chances of observing the effects of the section 214 program. To determine the extent to which the Corps followed the existing permit review processes, we first identified key steps for processing permits. Specifically, we identified six key steps for standard permits and two key steps for nationwide permits. We selected these steps because the districts must complete each one before issuing a permit and, for the standard permits, the Corps identified the steps as important "safeguards" for ensuring objectivity in its permit decisions. We did not include some steps that the Corps identified as "safeguards" because (1) the districts do not have to complete these steps for every permit application or (2) the steps are outside of the districts' responsibility, e.g., the Environmental Protection Agency can, at its discretion, review and revoke the Corps' permit decisions. In each district, we reviewed all but two of the files for projects the Corps authorized using nationwide and standard permits.
We did not review one file because the Corps processed the permit application under emergency procedures, which are substantially different from the regular review procedures. We did not review the other file because the Seattle District was unable to locate the file. Table 9 shows the number of permit files reviewed for adherence to existing Corps review processes. Table 10 presents a complete list of the information we collected with our DCI. We did not review the files for other types of permits that the Corps may also use—letters of permission and regional general permits—because they undergo different review procedures and constituted less than 5 percent of permits evaluated using section 214 funds. During our file reviews, we used our DCI to record whether the permit file contained evidence that the district followed each of the key steps in the permit review process. We also reviewed the files for evidence that the Corps met the two additional requirements to ensure that decisions for permits processed with section 214 funds were made impartially and were transparent, that is, that the permit decision received higher level review and was posted to the Corps district’s Web site. An independent analyst verified the accuracy of the data entered for each permit file. We performed our work between April 2006 and April 2007 in accordance with generally accepted government auditing standards. This appendix presents the results of our file review at the four Corps districts—Jacksonville, Fla.; Los Angeles and Sacramento, Calif.; and Seattle, Wash.—that used section 214 funds to evaluate permit applications between December 2001 and August 2006. The results of our review for permit applications approved using standard permits are presented in table 11. The Los Angeles District is not included in table 11 because it had not used section 214 funds to evaluate any standard permit applications at the time of our review. Results of our review for permit applications approved using nationwide permits are presented in table 12. In addition to the individual named above, Sherry McDonald, Assistant Director, and Greg Peterson made key contributions to this report. Nancy Crothers, Melinda Cordero, Brian Chung, Jonathan Dent, Doreen Feldman, Diana Goody, Janet Frisch, Laura Kisner, Amanda Randolph, and Rebecca Shea also made important contributions to this report. We also wish to give special tribute to our dear friend and colleague, Curtis Groves, who died many years too soon after a long battle with multiple myeloma near the conclusion of our work.

When a nonfederal public entity such as a city or county wants to build a public works project that could degrade or damage federally regulated waters and wetlands, it must obtain a permit from the U.S. Army Corps of Engineers (Corps) before proceeding. To help expedite the permit process for these entities, the Congress enacted section 214 of the Water Resources Development Act of 2000, providing the Corps with temporary authority to receive funds from such entities and use the funds to process permits. To ensure the impartiality and transparency of section 214 permit decisions, the Corps requires its districts to adhere to all existing permit review processes, as well as some additional requirements.
GAO was asked to identify (1) how many districts have used the section 214 authority, (2) the amount of funds they have received, (3) how permit processing times have changed, (4) the extent to which districts have adhered to the existing review processes and the additional requirements. As of August 2006, 4 of the Corps' 38 districts had agreements with 11 nonfederal public entities to receive section 214 funds, which have been used to evaluate permit applications. These districts received, evaluated, and approved 187 applications using section 214 funds. The types of projects for which permits were requested included ecological restoration, water storage, transportation, and port construction. Most of the section 214 applicants were city or county departments, port authorities, or regional water authorities, but two applicants were private companies that were allowed to submit applications under section 214 agreements with the Corps. The legislation does not expressly prohibit private companies from submitting applications under section 214 agreements. The use of the section 214 authority may become more prevalent in the future because 7 additional districts are in the process of entering into such agreements, and 19 other districts told GAO that they would consider using the authority if the Congress makes it permanent. The Corps received more than $2 million in section 214 funds from nonfederal public entities between December 2001 and September 2006 and used these funds primarily to hire additional project managers to process permits. About 61 percent of the funds were used to cover personnel costs for the project managers who processed section 214 permits; the remainder covered overhead and other costs incurred to implement the authority. Since the Corps began using the section 214 authority, permit processing times have increased in some districts and decreased in others for both section 214 applicants and non-section 214 applicants. However, it is difficult to attribute the changes in processing time directly to the use of the section 214 authority because many other factors may have influenced processing times and may have masked the effects of the authority. For example, the complexity of 214 permit applications may have resulted in greater processing time for these applicants. Generally, Corps officials and nonfederal public entities who used the authority believe that it has expedited permit processing, saved them cost and time, and improved communication between the Corps and the section 214 applicants. The four districts varied in the extent to which they adhered to the existing permit review process and the additional requirements to ensure impartiality of section 214 permit decisions. For example, one district did not follow a key step in reviewing certain types of section 214 permits because officials did not know they were required to do so. In two other districts, lack of documentation in the permit files prevented GAO from determining whether they followed the existing review processes for another type of permit. With regard to the additional requirements imposed by the Corps for section 214 permits, some districts did not comply with these requirements because they were not aware of them, and others did not comply with them because they interpreted the requirements differently than Corps headquarters intended. |
Since the early 1970s, we and others have reported on redundancies and excess capacity in DOD depots. The excess capacity problem has been exacerbated in recent years by reductions in military force structure and related weapon system procurement; changes in military operational requirements due to the end of the Cold War; and the increased reliability, maintainability, and durability of military systems. We recently determined that excess capacity in the DOD depot maintenance system is about 40 percent for fiscal year 1996. Additionally, the private sector, which has seen its production workload for new systems and equipment decline and has significant excess production capacity, is seeking an increased share of the depot maintenance workload. The BRAC process of closing or realigning depots and transferring their workloads either to remaining depots or to the private sector has, in the past decade, been the most effective way of addressing DOD’s problem of excess capacity. During the 1995 BRAC process, one of DOD’s recommendations was to close the Louisville depot and transfer its workload to other Navy facilities—primarily the naval gun workload to the Norfolk Naval Shipyard, Virginia; the Phalanx workload to the Naval Surface Warfare Center, Crane, Indiana; and the engineering support functions to the Naval Surface Warfare Center, Port Hueneme, California. The BRAC recommendation, as approved, provided for “transfer(ing) workload, equipment, and facilities to the private sector or local jurisdiction as appropriate if the private sector can accommodate the workload on site; or relocate necessary functions along with necessary personnel, equipment, and support to other naval technical activities, primarily the Naval Shipyard, Norfolk, Virginia; Naval Surface Warfare Center, Hueneme, California; and the Naval Surface Warfare Center, Crane, Indiana.” The Louisville Detachment of the Naval Surface Warfare Center, Crane Division, is located on a 142-acre site within the city limits of Louisville, Kentucky. The Louisville depot is responsible for providing engineering and technical support as well as overhaul and remanufacturing capability for naval surface ship gun and missile systems, including the 5-inch Mark 45 and Mark 75 guns, missile launchers, gun computer and fire control systems, torpedo tubes, and the Navy’s anti-missile Phalanx Close-In-Weapon System. Louisville has nine major production buildings with approximately 1.4 million square feet of plant space. The facility provides a wide range of mechanical capability, including weapon system disassembly and assembly, gun manufacturing, machining, component fabrication, welding, metal plating, and surface finishing. The depot has 3.8 million direct labor hours of maximum potential capacity to perform 1.3 million hours of work, leaving the facility with 2.5 million hours of excess capacity and only 34 percent utilization. At the time of the BRAC decision, the depot employed approximately 1,600 civilian personnel. The Navy is planning to privatize the workload in place in the Louisville facility. On June 13, 1996, the Secretary of the Navy notified Congress that, under the Competition in Contracting Act, the Navy intended to award contracts restricting competition in the public interest to the two defense contractors selected by the local redevelopment authority. The Navy awarded contracts to Hughes Missile Systems Company and United Defense Limited Partnership on July 19, 1996.
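The utilization and excess capacity figures cited above follow directly from the capacity and workload numbers. A brief Python sketch of that arithmetic:

```python
# Worked arithmetic behind the Louisville capacity figures cited above.
max_capacity_hours = 3_800_000   # maximum potential capacity (direct labor hours)
programmed_workload = 1_300_000  # hours of work to be performed (direct labor hours)

excess_capacity = max_capacity_hours - programmed_workload
utilization = programmed_workload / max_capacity_hours

print(f"excess capacity: {excess_capacity:,} hours")   # 2,500,000 hours
print(f"utilization: {utilization:.0%}")               # about 34 percent
```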
Because the Navy plans to privatize the Louisville depot’s current workload in place, neither excess capacity nor associated maintenance costs will be reduced at other DOD depots or the private sector. The 1994 Defense Science Board Task Force on Depot Maintenance Management, which included representatives from the public and private sectors, stated that divestiture of excess infrastructure is a key element of reducing overall depot maintenance costs. Private industry representatives pointed out that through consolidations, mergers, and closures, the defense industry has attempted to address its significant excess capacity problem and DOD needs to do the same. Privatizing-in-place transfers excess capacity to the private sector but does not eliminate it. DOD pays for this excess capacity, whether it is in the public or the private sector. DOD has had little success eliminating underused industrial facilities except through the BRAC process. In making its recommendations to the BRAC Commission on the Louisville closure, the Navy proposed transferring 402,500 direct labor hours to the Norfolk Naval Shipyard and about 925,750 and 187,250 direct labor hours to the Crane and Port Hueneme locations, respectively, of the Naval Surface Warfare Center. Based on an evaluation of maximum potential capacity and programmed workload for fiscal year 1996, naval shipyards had 35-percent excess capacity, representing 18.5 million direct labor hours. The Norfolk Naval Shipyard has 34-percent excess capacity, representing about 5.4 million direct labor hours, and Crane has 69-percent excess capacity, representing about 1.7 million direct labor hours. The Navy has started to privatize these workloads in place at Louisville. Consequently, an opportunity to reduce excess capacity at the other Navy industrial activities is missed. In developing its privatization-in-place plans, the Navy did not consider consolidating the Louisville workloads with comparable workloads in contractor facilities. United Defense Limited Partnership and Hughes Missile Systems Company, which will operate the Louisville site, also have extensive excess capacity in their own facilities. Although these are manufacturing rather than repair facilities, they have significant excess capacity the contractors believe could be adapted for repair. The Navy believes that privatization of the Louisville depot will minimize the impact of the closure on the local community and will produce substantial savings. However, the Navy’s position on savings is not well supported. Navy officials cited several reasons why privatization would produce savings. First, the Commission on Roles and Missions concluded that privatizing depot maintenance activities could lower DOD depot maintenance costs by 20 percent. Second, although Navy officials said they would not complete their cost analysis until the day before they expect to award contracts for the Louisville workloads, their preliminary analysis indicates that privatization is less expensive. Third, the city of Louisville required contractors to commit to reducing labor rates below the current Navy rates. The Commission’s assumption that privatization can reduce costs by 20 percent is not well supported, as we reported in July 1996. The Commission’s assumption was based primarily on reported savings from public-private competitions for commercial activities under Office of Management and Budget Circular A-76. 
Unlike depot maintenance activities, which require large investments in capital equipment, technical data, and highly trained and skilled personnel, these commercial activities involved simple, routine, and repetitive tasks. Also, public activities won about half of these competitions. The A-76 competitions generally had many competitors—unlike depot maintenance where most contracts are awarded without competition. Further, reports by us and defense audit groups indicate that projected savings from the A-76 competitions were often not fully achieved. Lastly, there was no competition between Hughes and United Defense and other contractors for the Louisville workloads. In its June 13, 1996, letter to Congress, the Navy stated its awards are in the public interest since the Louisville redevelopment authority had already competitively selected the two contractors. However, our review does not support the Navy’s position that a competition occurred. In June 1995, prior to the completion of the BRAC process, the city of Louisville entered into an agreement with Hughes and United Defense to operate the Louisville depot maintenance facility in the event of a BRAC decision. On July 1, 1995, the BRAC Commission forwarded its closure and realignment recommendations to the President, who forwarded the report to Congress on July 13, 1995. Congress completed its review and accepted the Commission’s recommendations in September 1995. In a December 1, 1995, letter, the Navy asked the Louisville redevelopment authority to reaffirm that it was finalizing its agreements with Hughes and United Defense. In February 1996, another contractor submitted an unsolicited business concept offering to the city of Louisville for the management and operation of the Louisville facility. In a February 24, 1996, letter to the local redevelopment authority, the Navy expressed concern that the community might open its selection process to competition. The Navy letter stated that the “introduction of a competitor in Louisville will complicate the interface between the depot and the original equipment manufacturers.” On March 5, 1996, the Navy wrote the redevelopment authority urging it to make its decision no later than March 7, 1996. According to redevelopment authority officials, on March 7, 1996, they advised the three contractors that the redevelopment authority board had decided not to hold a competition and the workloads would be awarded to Hughes and United Defense. As of July 10, 1996, Navy officials said they had not fully developed a cost model for evaluating the two options planned for consideration and had not determined what cost elements would be evaluated. However, they expected to complete their final analysis prior to contract award, which was expected on July 15, 1996. Under the first option, which the Navy indicated in its June 13, 1996, congressional notification letter, the Navy planned to award contracts for the Louisville workloads, assuming it determines that privatization is the more cost-effective alternative. The second option was to transfer workloads to Navy facilities the 1995 BRAC process identified as candidates to receive the Louisville workload. Navy officials expected the first option to be more cost-effective based on a preliminary comparison of the estimated one-time transition costs and their assumption that a contractor will have lower recurring costs. However, the data used in this comparison were inaccurate and incomplete. 
In June 1995, the Navy estimated that transferring the workload to other naval facilities would cost about $302 million. In March 1996, the Navy estimated that it would cost $132 million, or $170 million less, to privatize-in-place and retain a small Navy engineering support activity at Louisville. Based on these preliminary estimates, the Navy concluded that privatization-in-place was more cost-effective. This analysis did not consider all recurring costs and savings. Some factors in the Navy’s June 1995 one-time cost estimates for transferring the workload to other Navy sites are overstated. For example: The average cost factor used for permanent change-of-station moves was higher for the transfer option than the privatization-in-place option. Under the March 1996 privatization option cost estimate, the Navy estimated it would cost an average of $26,400 to move each of the 409 employees projected to take government jobs, which was about $3,000 lower than the figure developed by the Navy for the BRAC 1995 process. In contrast, the June 1995 estimate for transferring the workload included an average cost of $48,145 to move 819 employees. The two estimates have a difference of $21,745 per person. We could find no reason why the Navy used higher costs for the depot transfer option. However, assuming that $26,400 is the accurate cost factor, the transfer option is overstated by about $17 million. The overstatement of the permanent change-of-station costs also overstated the estimate for DOD’s relocation income tax allowance program. The program, which compensates individuals for federal income taxes incurred on permanent change-of-station payments, is based on a percentage of the payments received. The Navy’s estimate overstated the program’s cost estimate by $2.4 million. The workload transfer cost estimate included $36 million to overhaul larger numbers of spares than normally required to satisfy demands. This was included because the Navy believed extra stock would be needed as a cushion during the transition period. However, normal customer sales would generate about $32 million; therefore, this figure should not have been included in the transfer option. The appropriate cost estimate was an additional $4 million that will not be recovered through customer sales. The workload transfer cost estimate included $37.6 million for military construction at the Norfolk Naval Shipyard. During its fiscal year 1997 budget review, the Navy later determined this estimate was overstated by $11.2 million. The workload transfer estimate included about $2.2 million for DOD’s homeowners assistance program, under which DOD offers to buy an employee’s house if it cannot be sold and provides compensation for some property value losses. According to Navy officials, since this program will not be available for Louisville depot employees, their initial cost estimate should not have included this cost for the transfer option. The one-time cost estimate of $302 million for transferring the Louisville workload is overstated by about $66 million, as summarized in table 1. We attempted to analyze other cost factors used in the depot transfer option and noted other costs were potentially overstated, such as those for the employee assistance program and equipment shipment and reinstallation. However, we have not yet developed a more realistic cost for these factors. Nonetheless, because the Navy used preliminary estimates developed for budget purposes, further evaluation is needed. 
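The individual adjustments described above account for the roughly $66 million overstatement summarized in table 1. The Python sketch below reproduces that arithmetic using the figures cited in the text; the rounding is ours.

```python
# Rough arithmetic behind the ~$66 million overstatement summarized in table 1.
# Dollar figures come from the report text; rounding here is ours.
employees_moved = 819
pcs_cost_used = 48_145       # average permanent change-of-station cost in the transfer estimate
pcs_cost_correct = 26_400    # cost factor the Navy used for the privatization option

pcs_overstatement = (pcs_cost_used - pcs_cost_correct) * employees_moved  # about $17.8 million
tax_allowance_overstatement = 2_400_000   # relocation income tax allowance tied to PCS costs
spares_overstatement = 36_000_000 - 4_000_000   # $36M included vs. about $4M not recovered via sales
construction_overstatement = 11_200_000  # Norfolk military construction estimate reduction
homeowners_assistance = 2_200_000        # program not available to Louisville employees

total = (pcs_overstatement + tax_allowance_overstatement + spares_overstatement
         + construction_overstatement + homeowners_assistance)
print(f"total overstatement: about ${total / 1e6:.0f} million")  # roughly $66 million
```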
In March 1996, the Navy estimated it would cost $132 million to privatize the Louisville depot in place, but this estimate appears to be understated. Based on our preliminary analysis, understated costs include (1) consolidation of equipment to reduce the facility’s size and more efficiently use capacity, (2) incentives paid to contractors for hiring displaced Navy employees, and (3) separation incentives. On July 15, 1996, when we provided a draft of this report to DOD for comment, the Navy was revising its estimates for the costs of privatization-in-place. Subsequently, in a July 19, 1996, letter to the House Committee on National Security and the Senate Committee on Armed Services, the Navy reported that its updated cost analysis showed that privatizing the Louisville workload rather than transferring it to other Navy depots should save $60 million—a reduction of about $110 million in savings from its earlier estimate. Our preliminary analysis of the initial cost estimates is presented below. As you requested, we are in the process of reviewing the Navy’s updated cost analysis and will report the results of that work later this year. Our previous BRAC work shows that consolidating depot maintenance workloads can significantly reduce recurring costs for all workloads in the remaining facilities. However, choosing the best option requires assessing both one-time transition costs and recurring costs of operation after the transition period. As of July 10, 1996, Navy officials had not yet determined the impact of transferring the Louisville workload on recurring maintenance costs for all workloads at other Navy activities. At that time, they noted that, although they may consider this factor in their final cost analysis, no decision had yet been made to do so. We estimated the potential savings if the Louisville workloads were consolidated at other Navy facilities by (1) using labor rate data from the three potential receiving locations and (2) revising it to reflect the overhead costs that would be reduced by spreading fixed overhead costs over a larger workload base. Using hour and rate information from contractor proposals, we also calculated the estimated annual costs for the privatization option. Based on these estimates, we projected that the Navy could achieve recurring annual savings of about $47.8 million through workload transfers to the Navy activities originally identified to receive these workloads. Our calculations are based primarily on lower rates for workloads currently at Norfolk and Crane, which would result from increased use of these facilities. This projection does not include the $31 million the Commission estimated would be saved by eliminating personnel and operating costs at Louisville. Combining these savings with the $47.8 million resulting from transferring workloads results in an annual savings of about $78.8 million. Therefore, even with one-time transfer costs of $236 million (see table 1), the Navy could recoup the transition costs within 3 years and begin to save $394 million over 5 years. Given this opportunity for savings, privatization-in-place does not appear to be the most cost-effective approach. The Navy expects Hughes and United Defense to lower hourly rates as they gain experience and add comparable commercial work to their Louisville operations.
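The payback arithmetic behind this conclusion can be shown in a few lines. The sketch below combines the $47.8 million and $31 million recurring savings estimates with the $236 million one-time transfer cost; the 5-year figure shown is five years of recurring savings, which matches the $394 million cited above.

```python
# Sketch of the payback arithmetic behind the transfer-option savings discussed above.
transfer_savings = 47_800_000    # annual recurring savings from workload transfers
louisville_savings = 31_000_000  # Commission estimate for eliminating Louisville personnel/operating costs
annual_savings = transfer_savings + louisville_savings   # about $78.8 million per year

one_time_transfer_costs = 236_000_000   # one-time transfer costs after removing overstatements (table 1)

payback_years = one_time_transfer_costs / annual_savings
five_year_savings = annual_savings * 5

print(f"annual savings: ${annual_savings / 1e6:.1f} million")
print(f"payback period: about {payback_years:.1f} years")                    # roughly 3 years
print(f"recurring savings over 5 years: about ${five_year_savings / 1e6:.0f} million")  # about $394 million
```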
Accomplishing this goal is uncertain because, although the contractors have already agreed with the Louisville reuse authority to reduce labor costs, they also agreed to retain a certain level of the existing workforce and to guarantee current pay and benefits. The Navy did not incorporate these agreements into its contract solicitations. It opted for cost-type contracts. Further, the contractors might also have trouble attracting commercial workload to the Louisville facility, particularly in light of the age and condition of the equipment and facilities. One contractor official told us the Louisville facility is in worse condition than any other owned or operated by his company. Thus, it is questionable whether the expected efficiency gains will be achieved. Title 10 U.S.C. 2464 provides that DOD activities should maintain a core logistics capability sufficient to provide the technical competence and resources necessary for effective and timely response to a mobilization or other national defense emergency. Navy data submitted during the 1995 BRAC process indicated that about 95 percent of the Louisville workload was mission essential, needed to support contingency requirements, and considered necessary to sustain core capabilities. In an April 1996 report to Congress on depot maintenance policy, DOD required the military services to conduct a risk assessment before privatizing mission-essential workloads. DOD officials stated that qualitative factors have been established for conducting a risk assessment and that privatization is determined to be an acceptable risk when an adequate number of private sector sources exist, and those sources are economical, possess the capability and capacity to do the work, and have demonstrated proven past performance. According to Navy officials, they did not perform a risk assessment for the Louisville depot maintenance workloads. As we previously reported, various statutory restrictions may affect how much DOD depot-level workloads can be converted to private-sector performance, including 10 U.S.C. 2464, 10 U.S.C. 2466, and 10 U.S.C. 2469. Title 10 U.S.C. 2464 provides that the Secretary of Defense must identify a “core” logistics capability and DOD must maintain it unless the Secretary waives DOD performance as not required for national defense. Titles 10 U.S.C. 2466 and 10 U.S.C. 2469 limit the extent to which depot-level workloads can be converted to private-sector performance. Title 10 U.S.C. 2466 specifies that not more than 40 percent of the funds allocated in a fiscal year for depot-level maintenance or repair can be spent on private sector performance—the so-called “60/40” rule. Title 10 U.S.C. 2469 prohibits DOD from transferring in-house maintenance and repair workloads valued at not less than $3 million to another DOD activity without using “merit-based selection procedures for competitions” among all DOD depots or to contractor performance without the use of “competitive procedures for competitions among private and public sector entities.” Although each statute affects the allocation of DOD’s depot-level workload, 10 U.S.C. 2469 is the primary impediment to privatization without a public-private competition. The current competition requirements of 10 U.S.C. 2469 were enacted in 1994 and apply to all changes to depot-level workload valued at not less than $3 million currently performed at DOD installations, including the Navy depot at Louisville. 
The statute does not provide any exemptions from its competition requirements and, unlike most of the other laws governing depot maintenance, does not contain a waiver provision. Further, there is nothing in the Defense Base Closure and Realignment Act of 1990—the authority for the BRAC recommendations—that, in our view, would permit the implementation of a recommendation involving privatization outside the competition requirements of 10 U.S.C. 2469. As noted earlier, the BRAC recommendation provided for transferring workload, equipment, and facilities “to the private sector or local jurisdiction as appropriate if the private sector can accommodate the workload on site; or relocate necessary functions along with necessary personnel, equipment and support to other technical activities, primarily the Naval Shipyard, Norfolk, Virginia; Naval Surface Warfare Center, Hueneme, California; and the Naval Surface Warfare Center, Crane, Indiana.” The Navy is privatizing Louisville’s depot-level workload in place by awarding two contracts to private firms selected by the local redevelopment authority. The Navy concluded that privatizing Louisville’s workload will be more cost-effective than transferring it to the naval facilities identified in the BRAC recommendation. In reviewing the Navy’s privatization-in-place plan, we asked Navy officials to explain how the plan complied with existing statutory restrictions. They said they were “seeking to execute” the first alternative of the BRAC recommendation and would not award a contract until they evaluated the relative cost of the two alternatives. They did not provide details to support their position that the privatization plan conformed to existing statutory restrictions, and we were not able to identify any element of the plan that addressed the 10 U.S.C. 2469 requirement for a public-private competition. We recommend that the Secretary of Defense direct the Secretary of the Navy, before exercising any contract options for the Louisville depot maintenance workloads, to ensure that military depots retain the capability needed to sustain core depot repair and maintenance and adequately document a risk assessment for mission-essential work being considered for privatization; at a minimum, revise the Navy’s cost analysis to reflect the annual cost savings that workload transfers would generate for the workloads currently performed at the receiving locations by spreading fixed costs over the increased workload; and use competitive procedures, where applicable, to ensure the cost-effectiveness of the Louisville privatization-in-place initiative. DOD provided oral comments on our draft report. DOD disagreed with our conclusion that privatization transfers excess capacity to the private sector rather than eliminates it. DOD officials stated that privatization-in-place allows private industry to rightsize the facility and workforce, eliminating the excess capacity that may have existed in the government-run facility. We agree that privatizing-in-place allows private industry to eliminate excess capacity at that facility. Our concern is that substantial excess capacity exists in both DOD’s depot system and in existing private sector industrial facilities. Privatization-in-place does not reduce this excess capacity. For example, closing the Louisville facility and transferring the workloads to other underutilized public or private facilities would result in a greater reduction in total system excess capacity than privatizing-in-place.
DOD also disagreed with our conclusion that the Navy’s privatization plan did not address the requirement, as specified in 10 U.S.C. 2469, that the Navy hold a public-private competition to determine the most cost-effective method of workload allocation. In its July 19, 1996, letter to the House Committee on National Security and the Senate Committee on Armed Services, the Navy asserted that its plan is consistent with 10 U.S.C. 2469 because (1) 10 U.S.C. 2469 does not generally apply to actions implementing a BRAC recommendation and (2) it already complied with this requirement in its cost analysis. We have found nothing in the Closure Act or in its legislative history that would support the Navy’s view that it may implement a BRAC recommendation involving privatization without complying with 10 U.S.C. 2469. While the Navy suggests that support for its position can be found in the legislative history of 10 U.S.C. 2469, the report language it cites deals only with the separate statutory requirement that DOD use merit-based selection procedures when transferring depot workloads between DOD facilities. Furthermore, we do not believe that the Navy’s cost analysis constituted a public-private competition under 10 U.S.C. 2469. As previously mentioned, 10 U.S.C. 2469 requires that “competitive procedures for competitions among public and private sector entities” be used. The statute does not prescribe the elements that make up a competition, and we believe that, in any given case, the extent of competition may be affected by the legitimate mission-related needs of DOD. However, in our view, a “competition” fundamentally entails a process that provides public depots with a reasonable opportunity to offer their services and facilities and uses established criteria to compare their proposed performance with that of private firms. In this case, we are not aware of any attempt by the Navy to provide existing depots with an opportunity to offer their services to perform the Louisville workload, and there were no established criteria for comparing contractor and depot performance. Our draft report included a recommendation for the Navy to complete a cost analysis considering the savings potential from consolidating the Louisville workload at other DOD depots and defense contractor facilities. Subsequently, DOD provided us a copy of its cost analysis, which we are currently reviewing in detail. While our analysis is still in process, it is clear that the Navy’s final analysis did not factor in the savings that could be achieved annually for the workload currently performed at potential receiving locations by spreading fixed costs over the increased workload. The Navy also awarded contracts that privatized-in-place the work at Louisville. Therefore, we revised our recommendations to address Navy actions prior to exercising any contract options, to include revising its cost analysis to reflect the workload transfer savings impact on existing workloads. DOD made other technical comments on this report, and we incorporated them where appropriate. Appendix I provides our scope and methodology. We are sending copies of this letter to the Secretaries of Defense and the Navy; the Director, Office of Management and Budget; and interested congressional committees. Copies will be made available to others upon request. If you would like to discuss this matter, please contact me at (202) 512-8412. Major contributors to this report are listed in appendix II.
We obtained documents and interviewed officials from the Offices of the Secretary of Defense and the Secretary of the Navy in Washington, D.C.; the Naval Sea Systems Command and Naval Surface Warfare Center headquarters, Arlington, Virginia; Naval Surface Warfare Center field locations at Louisville, Kentucky; Port Hueneme, California; and Crane, Indiana; and the Norfolk Naval Shipyard, Virginia. Whenever possible, we relied on information previously gathered as part of our overall review of the Department of Defense’s (DOD) depot maintenance operations. To evaluate the impact on excess capacity, we compared maximum potential capacity and programmed workload forecast data, as certified to the Joint Cross Service Group for Depot Maintenance prior to the 1995 Commission on Base Closure and Realignment. We determined current excess capacity percentages by comparing maximum potential capacity and workload forecasts for fiscal year 1996. To determine the impact of workload reallocation plans on recurring operating costs at remaining Navy facilities, we obtained direct labor hour rates for the Navy’s Norfolk, Port Hueneme, and Crane sites recalculated based on the total workload transfers of 1.3 million direct labor hours from the Louisville site. To determine the cost-effectiveness of the Navy’s planned privatization in Louisville, we held discussions with local redevelopment representatives, responsible Navy management and contracting officials, and contractor representatives. Although we reviewed available documentation from the Naval Surface Warfare Center, we could not fully evaluate this analysis because the Navy had not completed its cost model at the time our field work was completed. Therefore, we reviewed historical documents and based our analysis and conclusions on available data. Since our review of the Navy’s analysis is ongoing and was constrained by the preliminary nature of some cost estimates and the absence of some cost data, our analysis is based on assumptions that may change as better data becomes available. Subsequent to the completion of our field work, we were provided the Navy’s updated cost analysis. As agreed with the requesters, our review of that analysis will be reported on separately. To evaluate compliance with statutory requirements, we identified the applicable requirements and how they could affect the Navy’s plans to privatize depot-level maintenance workloads. We also obtained a letter from the Naval Sea Systems Command General Counsel explaining how the Navy intends to comply with applicable statutes. We conducted our review from May 1996 through July 1996 in accordance with generally accepted government auditing standards. Bobby R. Worrell, Senior Evaluator
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) plans to privatize-in-place the Navy's Louisville, Kentucky, depot maintenance workload, focusing on the: (1) impact on excess depot capacity and operating costs at remaining industrial facilities; (2) cost-effectiveness of this planned privatization-in-place option; and (3) statutory requirements affecting transfers of depot maintenance workloads to the private sector. GAO found that: (1) privatization-in-place is not cost-effective given the excess capacity in the DOD depot maintenance system; (2) the Navy's privatization plan for the Louisville depot will not reduce excess capacity at the remaining depots or in the private sector and may be more costly than transferring the work to other depots; (3) DOD pays for the excess capacity whether it is in the public or private sector; (4) privatizing the facility may not comply with statutory requirements for public-private competitions, since the Navy plans to use noncompetitive procurement procedures; (5) the Navy overstated the cost of transferring the Louisville workload to other depots by at least $66 million and generally assumed that privatization would save 20 percent, which is not likely to be realized; (6) the Navy's projection is based on conditions that are not relevant for most depot maintenance workloads and does not reflect the cost of excess capacity in the public sector; (7) the Navy did not assess the risk associated with contracting the depot's core workload, since the majority of the workload is mission essential; and (8) in July 1996, the Navy awarded contracts to Hughes Missile Systems Company and United Defense Limited Partnership for work in progress, but it did not verify that the privatization plan conformed with statutory requirements for public-private competition.
As the nation’s principal conservation agency, Interior has responsibility for managing most of our nationally owned public lands and natural resources. This includes fostering the wisest use of our land and water resources, protecting our fish and wildlife, and preserving the environmental and cultural values of our national parks and historic places. Interior employs about 70,000 full-time equivalent staff and delivers a wide range of services through its bureaus, services, and offices at over 2,000 field locations across the country. Three Interior bureaus—the Bureau of Land Management (BLM), the National Park Service, and the Fish and Wildlife Service (FWS)—and USDA’s Forest Service share responsibilities for managing public lands. While all have separate missions, they manage adjacent lands in many areas throughout the country and have some responsibilities that overlap. Over the last several years, Interior and the Forest Service have collocated some offices or shared space with other federal agencies, and have pursued other means of streamlining, sharing resources, and saving rental costs. To carry out its broad missions, Interior and its bureaus spend more than $62 million each year on telecommunications resources that are used to provide a wide array of voice, data, radio, and video services. These include a variety of telecommunications services acquired under the General Services Administration’s Federal Telecommunications System (FTS) 2000 contract as well as from local and long-distance telephone carriers and commercial vendors. Interior was unable to provide us with its total fiscal year 1996 telecommunications costs because commercial telecommunications costs and costs for some other services are paid directly by Interior bureaus; these costs are not aggregated or tracked at the Department level. However, for 1 year—fiscal year 1995—in response to a survey we conducted, Interior estimated that it spent about $62 million on telecommunications equipment and services. This included about $29 million for FTS 2000 services and about $33 million for commercial and other services. Estimated fiscal year 1995 costs for radio equipment and service are not included in these totals. The Forest Service, which employs more than 30,000 full-time staff, spent about $33 million for telecommunications in fiscal year 1996. The Forest Service and Interior expect to collectively spend up to several hundred million dollars over the next 8 years acquiring new radio equipment and services to convert to narrowband requirements by 2005. As we previously reported at USDA, consolidating and optimizing telecommunications offers organizations a way to reduce costs by combining resources and services where sharing opportunities exist and by eliminating unnecessary services. For example, the cost to access FTS 2000 services can sometimes be greatly reduced where there are multiple FTS 2000 service delivery points that can be combined to increase the volume of communications traffic among fewer points, thereby obtaining volume discounts. Additional savings can be achieved by selecting more efficient service and equipment alternatives. Because there can be additional equipment and transmission costs associated with implementing consolidation and optimization alternatives, such costs will offset some of the savings.
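The volume-discount mechanism described above can be illustrated with a simple model. The access charges, per-minute rates, and discount tiers in the sketch below are hypothetical and are not actual FTS 2000 tariff figures; the example only shows why routing the same traffic through fewer service delivery points can lower total cost.

```python
# Illustrative sketch of why consolidating service delivery points can cut costs.
# The access charge and volume-discount tiers below are hypothetical, not actual FTS 2000 rates.
MONTHLY_ACCESS_CHARGE = 500.0   # hypothetical fixed charge per service delivery point

def usage_rate(monthly_minutes):
    """Hypothetical per-minute rate that falls as traffic volume rises."""
    if monthly_minutes >= 50_000:
        return 0.05
    if monthly_minutes >= 20_000:
        return 0.07
    return 0.10

def monthly_cost(points):
    """Total cost for a set of service delivery points, each with its own traffic volume."""
    return sum(MONTHLY_ACCESS_CHARGE + minutes * usage_rate(minutes) for minutes in points)

separate = [15_000, 12_000, 25_000]   # three collocated offices buying service separately
combined = [sum(separate)]            # the same traffic routed through one shared point

print(f"separate points: ${monthly_cost(separate):,.2f} per month")
print(f"combined point:  ${monthly_cost(combined):,.2f} per month")
```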
The 1996 Clinger-Cohen Act and federal guidance highlight the need for federal organizations to acquire and use information technology in the most cost-effective way and identify and act on opportunities to reduce costs by sharing resources where possible. In so doing, federal organizations should identify areas of duplication and work together to best utilize telecommunications and other information technology that can reduce expenditures and redundant functions. Not taking steps that maximize use of telecommunications resources and achieve optimum service at the lowest possible cost can result in the needless waste of government dollars. Our 1995 report noted that USDA had hundreds of field office sites where multiple agencies, located within the same building or geographic area, obtained and used separate and often redundant telecommunications services. Because of this and because USDA had not acted on opportunities to consolidate and optimize telecommunications services, the Department wasted millions of dollars each year paying for redundant services it did not need. Interior’s acting chief information officer (CIO) is responsible for advising and assisting the Secretary and other senior managers to ensure that the Department’s information technology investments are acquired and managed consistent with federal law and the priorities of the Secretary. Under the direction and leadership of the acting CIO, Interior’s Office of Information Resources Management (OIRM) is responsible for ensuring that the Department’s telecommunications resources are cost effectively managed and for overseeing and guiding Interior bureaus in the acquisition, development, management, and use of such resources. In addition, heads of Interior bureaus are responsible for implementing a program that will ensure compliance with Interior policies and for designating telecommunications managers who plan, implement, and manage telecommunications activities within their respective organizations. The bureaus are also responsible for determining whether their telecommunications requirements can be satisfied through existing resources and for sharing telecommunications services and equipment with other bureaus and agencies to the maximum extent practical. As with Interior, the Forest Service delegates responsibility for telecommunications management and the sharing of these resources to its regional and field components. To address our objectives, we reviewed documentation reporting telecommunications usage and costs for Interior and USDA’s Forest Service and we interviewed Interior and USDA officials to discuss consolidation and sharing activities. To review consolidation and sharing activities, we selected four locations where Interior officials told us offices were collocated and actions were underway to consolidate and optimize telecommunications resources and services. Specifically, we visited Interior bureau offices in Lakewood and Durango, Colorado; Farmington, New Mexico; and Cheyenne, Wyoming. In addition, we discussed sharing projects underway or planned with Interior and Forest Service officials at offices in Lakewood and Durango. We also reviewed reports and billing information showing FTS 2000 and commercial carrier costs to confirm our results. Appendix I provides further details on our scope and methodology. We conducted our review from August 1996 through March 1997, in accordance with generally accepted government auditing standards. We provided a draft of this report to Interior and USDA for comment. 
Interior’s and USDA’s comments are discussed in the report and are included in full in appendixes II and III, respectively. To its credit, Interior has undertaken a number of cost-saving initiatives to eliminate some unused telephone lines and unnecessary data services. While significant savings have been achieved in some cases, such efforts have generally been isolated and ad hoc rather than departmentwide. Savings are being missed because Interior is not systematically identifying and acting on opportunities to consolidate and share telecommunications resources within and among its bureaus or its 2,000-plus field locations. At just four of these field locations, we found that bureaus and offices were paying thousands of dollars annually for telecommunications services that were redundant and unnecessary. Interior does not know to what extent similar telecommunications savings may exist at its other offices because it lacks the basic information necessary to make such determinations. Interior is not systematically identifying opportunities among collocated bureau offices to consolidate and optimize telecommunications resources. Interior’s multiple bureaus have numerous field office sites in the same building or geographic area, but they obtain and use telecommunications equipment and services independently. This can result in the use of redundant and/or more costly telecommunications services than necessary at these sites. Nevertheless, OIRM—which has responsibility for managing and overseeing Interior’s telecommunications activities—has not exercised effective leadership by establishing a departmentwide program for systematically identifying telecommunications inefficiencies that may exist and achieving savings among bureaus and offices across the Department. Instead, OIRM relies on each of Interior’s separate bureaus to identify and act on such opportunities. Yet, according to bureau telecommunications officials, this is rarely done and savings opportunities may be lost. Although we only visited a few of Interior’s field sites during our review, we found that bureau offices in Lakewood, Durango, and Farmington spent thousands of dollars over the last several years for unnecessary commercial long-distance telephone and redundant FTS 2000 data services. In one case, billing records show that two bureau offices in the one building in Lakewood were spending about $4,400 annually for unnecessary FTS 2000 services because these services had not been consolidated and shared. In addition, we found bureau offices in Farmington located close by one another yet still using separate data connections to the same cities; opportunities to share these services in order to reduce costs had not been investigated. While many bureau telecommunications managers and staff told us they would like to take advantage of savings by consolidating and sharing resources at locations, given their other duties, it is not a priority. Specifically, these officials said that they spend most of their time maintaining current operations and providing their bureau field offices with technical assistance. As a result, they assert that they rarely have time to look for such consolidation opportunities among bureau offices. Even if pursuing consolidation opportunities was a priority, the bureaus do not have the information necessary to identify where Interior offices are collocated and determine whether telecommunications savings opportunities exist at these locations. 
Specifically, at the time of our review, neither OIRM nor the bureaus had determined which of the Department’s 2,000-plus field sites are located within the same building or geographic area, and OIRM was unable to provide us with a current list of all Interior office sites. Following our exit briefing with Interior at the end of February 1997, the Department began to develop this information by extracting and analyzing data from several of its administrative management databases. These are positive steps that should help the Department begin to identify savings opportunities. Interior bureaus also lack information needed to adequately analyze cost-savings opportunities that may exist at collocated bureau sites. According to Interior policy, bureaus are required to maintain inventories of all of their telecommunications resources. However, at the time of our review, the bureaus did not maintain up-to-date and complete inventories of all their telecommunications resources and OIRM had not followed up to ensure that they do so. Without information such as this that describes types of telecommunications equipment and services at individual bureau offices, Interior cannot easily determine where it has opportunities to consolidate and optimize telecommunications resources among multiple bureau offices. In addition, neither OIRM nor the bureaus have used telecommunications tools such as USDA’s network analysis model to help identify potential savings opportunities across the Department. USDA developed and successfully used this model to identify millions of dollars in cost-effective options for reducing telecommunications costs at the Department and at other agencies. USDA gave its model to Interior over a year ago, but Interior never used it. Until OIRM and the bureaus develop the basic information and use the tools available to systematically identify cost-reduction opportunities at collocated bureau offices, Interior will not be able to determine where and to what extent similar savings opportunities may exist in Washington, D.C., and at the Department’s hundreds of offices across the country. Some bureau officials said that, through the normal course of their duties, they have sometimes become aware of opportunities to consolidate and share telecommunications resources. Even in such cases, however, savings opportunities may not be pursued because getting the separate Interior bureaus to agree to make changes in telecommunications arrangements is difficult and time-consuming. In one case, for example, three small bureau offices in Cheyenne, Wyoming, gave up trying to consolidate and share services because no one bureau was willing to spend the approximately $2,000 needed to purchase the required equipment, even though services would have been upgraded and overall bureau savings would have paid for this equipment in a few months. During our review, OIRM and the bureaus began identifying some cost-savings opportunities using available FTS 2000 reports and other information. For example, on October 21, 1996, OIRM and the bureaus initiated an agreement with American Telephone and Telegraph (AT&T) to take advantage of FTS 2000 intra-LATA (local access transport area) savings opportunities for local toll call telephone service. In November 1996, further positive steps were taken by OIRM and the bureaus to begin identifying opportunities where Interior bureaus could aggregate some of their FTS 2000 services to obtain volume discounts.
As of the end of our review in March 1997, these efforts were still underway and no savings had yet been reported. Another effort by the Department that was designed to improve data communications by establishing a backbone communications network (DOInet) is also being used to help identify opportunities to reduce costs by eliminating some redundant data services. By building on existing networks and establishing common network node locations at high traffic sites, Interior is establishing DOInet to provide improved interconnectivity and interoperability among its bureaus. Under this initiative, OIRM recently began working with the bureaus to identify and try to eliminate data communications circuits that duplicate DOInet capabilities at high traffic sites. Also, as part of this effort, OIRM began to review FTS 2000 billing records to identify some opportunities for eliminating redundant and unnecessary data circuits in the bureaus themselves. However, OIRM has not acted to ensure that savings on all opportunities identified as part of the DOInet initiative will be realized. Consequently, some savings opportunities have been missed and others could not be confirmed. For example, after determining from FTS 2000 billing data during the months of August through November 1995 that Interior bureaus were paying over $1.1 million annually for over 100 duplicate data circuits, OIRM recommended that bureaus either disconnect these data lines or explain why they are needed. But OIRM did not follow up on all its recommendations and, near the end of our review in February 1997, documentation showed that only about $200,000 of the $1.1 million in potential savings identified had been achieved. In March, OIRM officials said that further action was underway to eliminate more of these unnecessary circuits and that 19 additional duplicate circuits with an annual cost of over $100,000 had been eliminated. OIRM did not, however, provide the billing data necessary to confirm any of these reported cost-savings. Some Interior bureaus have taken positive steps to reduce telecommunications costs within their own organizations and have achieved significant savings by doing so. For example, according to FWS telecommunications officials, they have reduced FTS 2000 usage costs throughout the bureau. In one example, FWS reported saving about $66,000 annually by moving some commercial telephone service to FTS 2000 Virtual On-Net service to achieve lower cost-per-minute charges at many of its office locations. However, in many cases, bureau efforts to reduce telecommunications costs were done ad hoc, not systematically applied throughout the bureau or replicated among other bureaus. For example, a BLM office in Cheyenne reviewed telecommunications services and associated charges 2 years ago, finding that it had paid an extra $90,000 because the local carrier had incorrectly applied tariff rates to some of its services. The office received a total reimbursement for these erroneous charges. However, according to the BLM official who completed the review, this was a onetime initiative undertaken after the office upgraded its telecommunications services, and no additional reviews had been performed. In another case, a Bureau of Reclamation telecommunications official who initiated a review of telephone lines at the office in Lakewood, in September 1995, found that it was paying as much as $20 per line in monthly charges for lines that were no longer being used. 
In this case, bureau officials found that the office had 2,656 telephone lines for 1,060 staff and that at least 1,405 of these lines were unnecessary. In July 1996, bureau officials completed work reducing the number of lines to 1,251 and reported annual savings totaling more than $320,000. Again, however, despite the significant cost-savings achieved by the Bureau of Reclamation, we were unable to find any cases during our review where other bureaus had undertaken similar attempts to identify and eliminate unused telephone lines. March 1997 records from AT&T show that, after downsizing, Interior bureaus currently have almost twice as many telephone lines as staff—about 137,000 lines for about 70,000 people. Until similar reviews are done throughout the Department, Interior will not know to what extent other headquarters, bureau, and field offices may be paying for lines they do not use. Because efforts to reduce costs, such as the ones discussed above, are not systematically applied and replicated throughout the Department, some bureaus and offices may also be paying for other telecommunications services that are not used, are uneconomical, or are otherwise not cost-effective. For example, two bureau offices we visited were spending several thousand dollars annually paying for redundant FTS 2000 services they did not need or know they had. In one case, billing records showed that one bureau office in Durango spent about $4,000 more than necessary during the past year paying for redundant FTS 2000 services that should have been consolidated with other services at that office. Office officials told us they were not aware of the redundant services because bills are not reviewed to identify this kind of problem. In a similar case, one bureau office in Cheyenne paid several thousand dollars annually for unnecessary local telephone services and redundant FTS 2000 services that should also have been consolidated. Interior and USDA may likewise be missing opportunities to save millions of dollars by not sharing telecommunications resources among Interior bureaus and the Forest Service. While the two departments have a 2-year-old agreement to identify and act on sharing opportunities, they have taken little action on this agreement and, accordingly, only limited savings have been realized. Moreover, while Interior and USDA's Forest Service plan to spend several hundred million dollars to acquire separate radio systems over the next 8 years, the Departments have not jointly determined the extent to which they can reduce these costs by sharing radio equipment and services. Interior's bureaus (e.g., BLM, FWS, and the Park Service) and USDA's Forest Service recognize that opportunities to share telecommunications resources among their offices exist. While these organizations acquire and use separate telecommunications resources and services to fulfill their individual missions, they work in many of the same geographic areas, overseeing adjacent public lands and natural resources. Because of this, these agencies may be able to achieve savings by sharing resources and services where opportunities exist to do so. In recognition of such sharing opportunities, Interior and USDA established a memorandum of agreement in January 1995 to support interdepartmental cooperative efforts "to seek aggressively, opportunities for sharing telecommunications resources" and institute steps necessary to act on these opportunities.
While Interior and USDA were to work together to identify potential candidate sites for aggregating and sharing telecommunications resources, they never did. In fact, they have not yet identified Interior bureau and Forest Service sites in common areas where it may be possible to share telecommunications resources to reduce costs. Senior Interior and USDA managers could not provide a valid basis for not implementing this sharing agreement. Despite inaction on the agreement, we found isolated cases in which Interior and Forest Service offices are reducing their telecommunications costs by sharing some resources. For example, as part of a National Performance Review (NPR) pilot called Trading Post, BLM and the Forest Service said they are achieving thousands of dollars in annual savings by sharing voice communications and local telephone services in Durango. In another case, Interior bureaus and the Forest Service have begun sharing common network and telecommunications resources at several Alaska sites under an NPR initiative known as ARTnet. According to initial results, three Interior bureaus and the Forest Service have said they reduced their annual telecommunications costs by over 44 percent (from about $197,000 to $110,000). While such initiatives are positive, they so far involve only a few sites and are not being replicated across the country in other areas where Interior and USDA likely have similar kinds of sharing opportunities. Interior bureaus and USDA's Forest Service plan to collectively spend up to several hundred million dollars over the next 8 years to purchase new radio systems required under new federal narrowband standards. Under a directive from the National Telecommunications and Information Administration, all federal radio users are required to begin implementing new narrowband technologies to make additional radio channels available to federal agencies. These new narrowband capabilities are expected to be fully implemented governmentwide by January 1, 2005. Interior bureaus plan to spend about $270 million making this transition. While the Forest Service has not determined how much its actual transition to narrowband systems will cost, budget estimates show that it expects to spend tens of millions of dollars replacing radio equipment over the next several years. According to Interior documentation, its bureaus and the Forest Service run parallel radio systems in some areas, with opportunities to share portions of these systems. Further, Interior and Forest Service officials at headquarters and some field locations said they are interested in sharing radio communications; in some cases, Interior and Forest Service field locations have begun to share mountaintop maintenance, radio frequencies, and dispatch operations. Both agencies have also studied, to some degree, the implications of sharing radio communications resources. In fact, Interior determined that sharing radio resources as part of the effort to transition to narrowband standards could reportedly bring about a 25 percent overall cost reduction (including equipment and personnel). However, at the time of our review, no decisions about this had been reached, and Interior and the Forest Service are each proceeding with plans to acquire separate radio equipment and services that address their individual needs.
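Several of the dollar figures in this section lend themselves to a quick arithmetic check. The sketch below is illustrative only: the $20-per-line charge is the upper bound cited for the Lakewood office, and applying Interior's own 25 percent estimate to the $270 million narrowband figure yields a hypothetical upper bound, not a joint Interior and Forest Service projection.

```python
# Illustrative arithmetic only; all figures come from the report text above.

# Bureau of Reclamation line review (Lakewood): 2,656 lines reduced to 1,251.
lines_eliminated = 2_656 - 1_251                       # 1,405 lines
implied_annual_savings = lines_eliminated * 20 * 12    # at "as much as $20 per line" per month
print(f"Implied annual savings from eliminated lines: ${implied_annual_savings:,}")
# About $337,000, consistent with the "more than $320,000" the bureau reported.

# ARTnet (Alaska): reported annual telecommunications costs before and after sharing.
reduction = (197_000 - 110_000) / 197_000
print(f"ARTnet cost reduction: {reduction:.0%}")       # roughly 44 percent

# Narrowband radio transition: Interior's study suggested sharing could reduce
# overall costs (equipment and personnel) by about 25 percent; applied to the
# bureaus' $270 million estimate, this is a hypothetical upper bound on savings.
print(f"Potential shared-radio savings: ${270_000_000 * 0.25:,.0f}")
```

Actual savings would depend on local tariffs, on how many lines and circuits are genuinely unused, and on which radio components can in fact be shared.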
While Interior has taken some positive steps to reduce telecommunications costs, it has not done what is necessary to take advantage of departmentwide opportunities to eliminate unnecessary services and maximize savings, and it has no systematic approach for doing so. Until OIRM and the bureaus develop basic information and use the tools available to them to systematically identify cost-savings opportunities, Interior will not be able to determine where and to what extent sharing opportunities may exist throughout the Department and its hundreds of offices across the country. Similarly, until Interior and USDA follow their 1995 agreement to actively pursue opportunities for sharing telecommunications resources among bureaus and the Forest Service, millions of dollars in potential savings may go unrealized. In order to help bring about significant potential savings from consolidated and shared telecommunications resources, we recommend that the Secretary of the Interior direct—and hold accountable—the Department's acting CIO to immediately establish and fully implement, among Interior's bureaus, a departmentwide program for systematically identifying and acting on all opportunities to consolidate and optimize telecommunications resources, including voice, data, video, and radio equipment and services, where it is cost-effective to do so. At a minimum, the acting CIO should:
- Determine and maintain a current list of Department field locations that are collocated and the extent to which telecommunications resources and services are shared.
- Direct and ensure that all Interior bureaus and offices establish and maintain up-to-date and complete inventories of their telecommunications resources and services at collocated sites.
- Direct and ensure that all Interior bureaus and offices review and analyze telecommunications bills at regular intervals, using a cost-effective approach to ensure that all charges are appropriate and services needed.
- Identify potential savings opportunities at these sites using inventories and telecommunications tools, such as USDA's network analysis model.
- Monitor these activities and follow up as needed to ensure that all identified savings opportunities are acted upon.
In addition, we recommend that the Secretary of the Interior direct—and hold accountable—each of the Department's assistant secretaries to cooperate with the acting CIO and immediately establish and fully implement bureauwide programs for similarly identifying and acting on all opportunities to consolidate and optimize telecommunications resources within each bureau, using the steps discussed. We also recommend that the acting CIO report to the Secretary every 6 months on the progress of these efforts and the savings achieved. We further recommend that the Secretary of the Interior and the Secretary of Agriculture ensure that their respective acting CIOs are responsible and accountable for implementing the 1995 joint sharing agreement. At a minimum, the acting CIOs should:
- Determine where Interior and USDA field sites are collocated and the extent to which services are shared.
- Identify potential savings opportunities for all telecommunications equipment and services at these sites using the information specified above and telecommunications tools such as USDA's network analysis model.
- Stop further radio system purchases, except those necessary for meeting immediate technology needs that are critical to ongoing operations, until both departments jointly determine and document where radio equipment and services can be cost-effectively shared and savings achieved.
- Monitor these activities and follow up where needed to ensure that all identified savings opportunities are acted upon.
The Department of the Interior's Assistant Secretary for Policy, Management and Budget provided written comments on April 7, 1997, on a draft of this report. Written comments were also provided by USDA's acting CIO on April 8, 1997. These comments are summarized below and are reproduced in appendixes II and III, respectively. Interior's Assistant Secretary for Policy, Management and Budget stated that the Department will use our report to focus additional efforts on eliminating unnecessary telecommunications services and to implement sharing opportunities. Specifically, the Assistant Secretary stated that Interior will use the results of our review to develop the guidance and direction needed by bureau telecommunications managers to better manage their acquisition and sharing of telecommunications services and to develop a telecommunications management improvement strategy for the Department. The Assistant Secretary also stated that Interior's strategy will be implemented by identifying projects, staffing them with Departmental and bureau managers, prioritizing actions, and monitoring results. She also said that actions have already begun on several of these improvement projects. We are encouraged by Interior's stated commitment to better manage its acquisition and sharing of telecommunications services. It will now be important for the Department to develop the specific actions it plans to take on each of our recommendations as it moves ahead on efforts to better manage and share telecommunications resources. Given Interior's decentralized telecommunications management structure and its reliance on bureaus to identify and act on savings opportunities, it is especially important for the Secretary to implement our recommendations to direct—and hold accountable—the Department's acting CIO and assistant secretaries for establishing and fully implementing, both among and within Interior's bureaus, programs for systematically identifying and acting on all opportunities to consolidate and optimize telecommunications resources—including voice, data, video, and radio equipment and services—where it is cost-effective to do so. Interior's Assistant Secretary and USDA's acting CIO stated that their departments plan to work together to share telecommunications resources and achieve savings. While these statements are encouraging, neither department responded to our specific recommendations relating to implementing the 1995 joint sharing agreement. Given how little action has been taken on this agreement, we believe it is especially important that the Secretary of the Interior and the Secretary of Agriculture ensure that their respective acting CIOs are both held responsible and accountable for fully implementing the 1995 agreement as well as our other recommendations for increasing levels of telecommunications resource sharing between the departments.
Interior’s Assistant Secretary stated that Interior did not agree with our recommendation to stop further radio purchases, except those necessary for meeting immediate technology needs that are critical to ongoing operation, until both departments jointly determine and document where radio equipment and services can be cost effectively shared and savings achieved. The Assistant Secretary did state, though, that Interior supports the goal of implementing shared radio systems, and will implement procedures within Interior and with the Forest Service to ensure that all land mobile radio systems designs are reviewed for sharing and other savings potential prior to radio purchase. USDA’s acting CIO stated that additional work on sharing radio systems is needed by the Forest Service and Interior, but did not comment on our specific recommendation. We are also encouraged by Interior’s and USDA’s statements indicating their willingness to work toward increased levels of radio sharing. Nevertheless, we stand by our recommendation that Interior and USDA should stop further radio system purchases, except those necessary for meeting immediate technology needs that are critical to ongoing operations, until both Departments jointly determine and document where radio equipment and services can be cost effectively shared and savings achieved. Regarding the costs of Interior’s transition to narrowband radio systems, Interior’s Assistant Secretary also stated that the Department now plans to spend $270 million for the narrowband radio system transition; not the $200 million we were told during our review. We have amended the report to reflect Interior’s revised estimate of $270 million. Interior’s Assistant Secretary also provided several specific comments. Regarding use of USDA’s network analysis model tool to identify cost savings opportunities, the Assistant Secretary stated that the Department had used USDA’s model to eliminate $100,000 in addition to what is stated in the report and that the tool had also been used to identify $750,000 in redundant data circuits at the Bureau of Indian Affairs. Our information, however, continues to indicate otherwise. Specifically, in February 1997, and again on April 8, 1997, the official responsible for the network analysis model at USDA stated that while he had given Interior a copy of the model in September 1995, Interior had never used it. Even so, our report does recognize the more than $1.1 million in duplicate circuits that Interior said it identified by reviewing FTS 2000 billing records and this amount includes circuits at the Bureau of Indian Affairs. However, as the report also states, OIRM did not provide us with the billing records necessary to confirm that the Department had actually achieved any of these savings, despite several requests during our review for these records. The Assistant Secretary also commented that Interior believes AT&T’s records include some telephone lines that are not active or being billed to the Department. However, the Assistant Secretary agreed with the report’s premise that Interior may have unused telephone lines and, as a result, the Department will conduct a thorough review of AT&T’s records to verify the number of telephone lines it has and take corrective action where necessary. We agree that AT&T’s records may include inactive lines in some cases. 
However, as we discuss in our report, hundreds of thousands of dollars in savings have been achieved at one Bureau of Reclamation office where AT&T’s records were used to identify and eliminate unnecessary telephone lines. Given this and the fact that Interior does not know to what extent it may be paying for unnecessary or inactive telephone lines, we believe that this action by Interior, if fully carried out across the Department, could achieve additional savings by helping to identify and eliminate further unnecessary telephone lines and services. Finally, Interior’s Assistant Secretary named numerous examples that were not cited in our report in which radio service is being shared between Interior bureaus and USDA agencies. Our report recognizes that Interior and USDA have taken some steps to share radio services, but have done little to ensure that radio and other telecommunications resources are shared in all cases throughout the country where there are opportunities to do so. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from the date of this letter. At that time we will send copies to the Secretary of the Interior; the Secretary of Agriculture; the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs, the Senate and House Committees on Appropriations, and the House Committee on Government Reform and Oversight; the Director of the Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-6408 if you or your staff have any questions concerning this report. I can also be reached by e-mail at [email protected]. Major contributors to this report are listed in appendix IV. To address our objectives, we reviewed Interior policies on telecommunications management, various memoranda and reports discussing telecommunications management activities at the Department, vendor billing data showing Interior telecommunications usage and costs, and other materials outlining plans and efforts by OIRM and the bureaus to identify opportunities to consolidate and optimize telecommunications resources and services and implement cost-savings solutions. To identify Interior’s overall telecommunications costs, we obtained the Department’s estimated costs for fiscal year 1995, as it does not track costs for voice, data, video, and other services. We also reviewed documentation relating to interagency efforts by Interior and USDA’s Forest Service to combine and share resources and obtained current estimated radio replacement costs for Interior bureaus and the Forest Service. To determine whether Interior had consolidated and optimized telecommunications resources to eliminate unnecessary services and maximize savings, we interviewed OIRM officials responsible for Departmentwide telecommunications management activities as well as telecommunications managers and/or staff in Interior’s major bureaus. In addition, we reviewed internal correspondence and other documents describing actions taken to identify Departmentwide opportunities to consolidate and optimize telecommunications services. 
Because Interior did not have a current list of sites where its bureau offices are collocated with one another and with Forest Service offices, we attempted to develop this information by contacting USDA's National Information Technology Center in Fort Collins, Colorado, which assists the General Services Administration in managing the government's FTS 2000 billing database. Because all Interior bureaus and the Forest Service obtain services under the government's FTS 2000 contract, in September 1996, we asked the National Information Technology Center to develop information from the FTS 2000 billing database showing addresses for Interior and Forest Service offices, from which collocated sites could be identified. Initial lists were provided to us in November 1996, but programming problems that caused some data irregularities precluded us from using this information. To review consolidation and sharing activities, we selected four locations where OIRM and bureau officials told us bureau offices were collocated and where some actions had been taken to consolidate and optimize telecommunications resources and services. Specifically, we visited Interior bureau offices in Lakewood and Durango, Colorado; Farmington, New Mexico; and Cheyenne, Wyoming. To determine the extent to which telecommunications resources and services had been consolidated at these locations, we interviewed bureau officials and observed ongoing operations. At our site visits, we found cases in which Interior and the bureaus had additional opportunities to consolidate and optimize telecommunications services and had lost savings because no one had identified and acted on these opportunities. However, we were unable to identify precise dollar amounts for these lost savings because up-to-date, comprehensive information describing telecommunications services and costs was generally not available at these offices. Therefore, in the absence of this information, we attempted to estimate the lost savings by analyzing Interior telecommunications usage and cost data that we had also obtained from USDA's National Information Technology Center and commercial telephone company vendors. To determine whether Interior and the Forest Service were sharing telecommunications services where possible, we interviewed telecommunications managers involved in these activities and reviewed the status of plans intended to expand sharing. In addition, we discussed with Interior and Forest Service officials at offices in Lakewood and Durango the sharing projects underway or planned and the opportunities for sharing voice, data, and radio equipment and services. We also reviewed telecommunications usage and cost data obtained from USDA's National Information Technology Center and commercial telephone company vendors to determine the extent to which telecommunications resources had been consolidated and optimized. We performed our audit work from August 1996 through March 1997, in accordance with generally accepted government auditing standards. Our work was primarily done at Interior and USDA headquarters offices in Washington, D.C. We also worked at Interior offices for the National Park Service, the Bureau of Reclamation, and the Office of Surface Mining Reclamation and Enforcement in Washington, D.C.; the Bureau of Land Management, the Bureau of Reclamation, and the U.S. Fish and Wildlife Service in Lakewood; the Minerals Management Service in Herndon, Virginia; and the U.S. Geological Survey in Reston, Virginia.
Our work also included visits to selected Interior bureau offices in Farmington and Cheyenne; Interior and USDA Forest Service offices in Durango; and Forest Service offices in Lakewood. Major contributors to this report were Stephen A. Schwartz, Senior Assistant Director; William D. Hadesty, Technical Director; Mark D. Shaw, Assistant Director; Mirko J. Dolak, Technical Assistant Director; Patricia Macauley, Senior Information Systems Analyst; and Michael P. Fruitman, Communications Analyst.

Pursuant to a congressional request, GAO reviewed efforts by the Department of the Interior and the Forest Service to reduce costs by consolidating their telecommunications services, focusing on whether (1) Interior has consolidated and optimized telecommunications services to eliminate unnecessary services and maximize savings and (2) Interior and the Forest Service are sharing telecommunications services where they can.
GAO noted that: (1) to its credit, Interior has undertaken a number of telecommunications cost-savings initiatives that have produced significant financial savings and helped reduce the Department's more than $62-million annual telecommunications investment; (2) however, Interior is not systematically identifying and acting on other opportunities to consolidate and optimize telecommunications resources within and among its bureaus or its 2,000-plus field locations; (3) the cost-savings initiatives that have been undertaken have generally been done on an isolated and ad hoc basis, and have not been replicated throughout the Department; (4) GAO did not review consolidation and sharing opportunities at all of Interior's field locations; (5) however, at the four sites GAO visited, GAO found that telecommunications resources were often not consolidated or shared, and bureaus and offices were paying thousands of dollars annually for unnecessary services; (6) Interior does not know to what extent similar telecommunications savings may exist at its other offices because it lacks the basic information necessary to make such determinations; (7) Interior and the Department of Agriculture (USDA) may also be missing opportunities to save millions of dollars by not sharing telecommunications resources; (8) even though the Departments have a 2-year old agreement to identify and act on sharing opportunities, little has been done to implement this agreement and, accordingly, only limited savings have been realized; and (9) moreover, while Interior and the Forest Service currently plan to collectively spend up to several hundred million dollars to acquire separate radio systems over the next 8 years, the Departments have not jointly determined the extent to which they can reduce these costs by sharing radio equipment and services. |
In the Veterans Entrepreneurship and Small Business Development Act of 1999, as amended, Congress established various programmatic requirements for The Veterans Corporation to address perceived shortfalls in federally provided services for veterans. The Veterans Corporation is required to, among other things, (1) expand the provision of and improve access to technical assistance regarding entrepreneurship; (2) assist veterans with the formation and expansion of small businesses by working with and organizing public and private resources; (3) establish and maintain a network of information and assistance centers for use by veterans; (4) establish a Professional Certification Advisory Board (PCAB) to create uniform guidelines and standards for the professional certification of members of the armed services; and (5) assume the duties, responsibilities, and authority of the Advisory Committee on Veterans Business Affairs from the SBA by October 1, 2004. To fund The Veterans Corporation, Congress authorized $12 million in federal appropriations over 4 fiscal years—$4 million in the first year, $4 million in the second year, and $2 million in each of the following 2 years—with the expectation that The Veterans Corporation would become financially self-sufficient. The Veterans Corporation received its first appropriation in March 2001. The Veterans Corporation is a nonprofit corporation chartered in the District of Columbia and has authority to, among other things, manage the manner in which it conducts business, enter into contracts, hire and dismiss officers and employees, and solicit, disburse, and manage its funds and assets. The Act requires The Veterans Corporation to raise funds in order to match its federal appropriations. For the first fiscal year (fiscal year 2001), no matching requirement applied. For the second fiscal year (fiscal year 2002), The Veterans Corporation was required to raise $1 for every $2 of federal appropriations. For the remaining 2 fiscal years, The Veterans Corporation is required to raise matching funds on a dollar-for-dollar basis. A 12-member board of directors governs The Veterans Corporation. Nine voting members are presidential appointees, with not more than five members of the same political party. The three remaining members are nonvoting, representing the Administrator of the SBA, the Secretary of Defense, and the Secretary of Veterans Affairs. Voting members serve 6-year terms; however, the terms of the initial appointees are staggered: three for a term of 2 years and three for a term of 4 years. The chairperson is one of the nine voting members and is elected by these members for a 2-year term. The chairperson supervises and controls all affairs of The Veterans Corporation in accordance with policies and directives approved by the board of directors. The board is organized into four committees: (1) executive, (2) corporate governance, (3) audit, and (4) business development. The corporate governance committee is responsible for, among other things, overseeing the strategic and business plans. The Veterans Corporation's 14 staff members use these plans to help define their overall strategy and assess how well they are achieving their goals and objectives. Goals and objectives are then evaluated at the board meetings. The board met for the first time in September 2000; it currently meets approximately quarterly. The Veterans Corporation has several initiatives under way to provide small business education, training, and entrepreneurial services to veterans.
Officials have also identified some initial challenges that have slowed the progress of these efforts, including (1) the inability to collect data on the veteran population, (2) limited government participation in The Veterans Corporation's activities, (3) delays in appointing management, and (4) the unclear corporate legal status of The Veterans Corporation. The Veterans Corporation has broad performance measures in place to monitor its programs at this early stage and is planning to develop more refined measures to assess effectiveness as the programs mature. The Veterans Corporation has established a PCAB—a body mandated by Congress to help service members transition from the military to private-sector employment—but the issues surrounding private-sector recognition of military experience and training are large and complex, according to some officials. According to an official at The Veterans Corporation, its corporate strategy has been to organize, coordinate, enhance, and expand existing business programs and services for military veterans interested in entrepreneurship. Additionally, its strategy is to provide programs not otherwise available to veteran-owned businesses. Officials at The Veterans Corporation said that they have been careful not to duplicate existing services but rather to leverage existing services whenever possible. The Managing Director of Operations and Government Relations at The Veterans Corporation stated that the corporation's approach is to develop public and private resources, which may include coordinating with local business services where appropriate. In response to veteran needs for small business education and training, The Veterans Corporation has offered classroom instruction, seminars, and on-line educational resources. The Veterans Corporation has hosted three initial Veterans Entrepreneurial Training (VET) programs for veterans interested in starting a business or seeking to improve their current business; these programs produced 64 graduates. The initial locations included Riverside, California; Portland, Maine; and Arlington, Virginia. The VET program incorporates classroom instruction, mentoring, networking, and technology training. An official at The Veterans Corporation stated that program participants pay $350 of the program's $1,850 cost for 45 hours of classroom instruction and, as an added benefit, receive a voucher valued at $675 to purchase a Gateway computer upon successfully completing the program. The Veterans Corporation officials said that the program is a partnership with the Ewing Marion Kauffman Foundation's FastTrac Program, a successful entrepreneurship-training program. Statistics cited on The Veterans Corporation's Web site point to the Kauffman Foundation's overall success, indicating that 60,000 people have completed the FastTrac program since 1987. Additionally, of this number, 88 percent were still in business 2 years later, and 77 percent were still in business and turning a profit 5 years later; overall, 64 percent have seen their sales increase. An official at The Veterans Corporation said that the corporation is planning a total of 30 VET courses for fiscal year 2003. The official added that the program draws support from local Service Corps of Retired Executives chapters and Small Business Development Centers, whose members and staff serve as either mentors or classroom speakers. The Veterans Corporation has also piloted two 1-day Veterans Business Success Seminars on the skills needed to start a business.
The seminars were held in Boise, Idaho, and Cleveland, Ohio, and included discussions on business plans, marketing analysis, and financing. According to an official at The Veterans Corporation, 30 veterans participated in the first two seminars. The official explained that The Veterans Corporation is adopting a new strategy to more consistently meet the mandate to establish and maintain a network of information and assistance centers for use by veterans and the public. Under this strategy, The Veterans Corporation will utilize community-based organizations to provide veterans with support through a combination of workshops, seminars, and courses tailored to local needs. The official added that The Veterans Corporation is in discussions with the SBA on funding four test sites to be launched by May 2003. In April 2002, The Veterans Corporation's Web site became operational; it contains information on training, capital, and other business resources. A board member of The Veterans Corporation told us that The Veterans Corporation views this Web site as helping to fulfill the requirement under the Act to establish and maintain a network of information and assistance centers for use by veterans and the public. The board member explained that building brick-and-mortar development centers would be prohibitively expensive and that the board's initial goal was to focus on leveraging existing services rather than duplicating private-sector services. For example, the Web site has links to other small business resources, including Entreworld, an on-line small business resource library. Entreworld, which is sponsored by the Ewing Marion Kauffman Foundation and has operated since 1996, was one of the first Web sites to assist small businesses, according to the Entreworld Web site. Additionally, The Veterans Corporation's Web site has links to other on-line resources for veterans, such as those of the Department of Defense (DOD), SBA, and Department of Veterans Affairs (VA). The Veterans Corporation has launched or started to develop various initiatives to provide entrepreneurial services, such as access to capital through a micro loan program, business insurance, and on-line buying and selling of veteran-owned goods and services. While these services are intended to assist veterans with forming or expanding businesses, some of the services also provide revenue to The Veterans Corporation. (We discuss The Veterans Corporation's efforts to become self-sufficient later in this report.) Micro loan program. To help veterans gain access to capital, The Veterans Corporation established a regional micro loan program for start-up businesses. The Veterans Corporation is working with regional banks to provide the loans. Participating banks in the micro loan program may also use SBA loan guarantees to help veterans obtain access to capital. An official told us that as of January 2003, the first early-stage loan of $25,000 had been made to a start-up, veteran-owned business, and two other SBA lines of credit of up to $150,000 were close to being finalized. As of April 2003, The Veterans Corporation has an agreement with Newtek Small Business Finance, Inc., to offer SBA loans and other services. The Veterans Corporation will be able to provide nationwide service in conjunction with its existing lenders. Veterans Marketplace. The Veterans Marketplace is an on-line purchase program for products and services produced by veteran-owned small businesses. The Veterans Corporation is partnering with eScout, a company that operates a similar electronic procurement business.
The Veterans Marketplace targets procurements of $2,500 or less using an electronic purchase card system. Although the system is operational, The Veterans Corporation is in the process of building its customer lists of government and private companies. As of January 2003, there were 16 veteran-owned business sellers and 150 veteran-owned business buyers listed on The Veterans Marketplace. The Veterans Corporation plans to earn income from this effort through a revenue-sharing agreement with eScout that is based on the volume of transactions, new member agreements, on-line purchases, and auction events hosted. Business Insurance Program. In December 2002, The Veterans Corporation started offering insurance services through the Aon Financial Institution Alliance to veteran-owned businesses. The services include health insurance for employees, legal representation, and, for small businesses, computer protection assistance against viruses and hackers. The Veterans Corporation anticipates producing revenue from this effort by collecting commissions from the Aon Financial Institution Alliance. An official at The Veterans Corporation stated that as of April 2003, the first small-group health insurance policy had been sold, and more than 100 quotes had been requested or applications completed. Veterans Corporation Platinum BusinessCard. The Veterans Corporation began offering a veterans business credit card in January 2003. The card includes features such as a business credit line and cash back on purchases. According to an official at The Veterans Corporation, 165 credit cards had been approved and issued as of April 2003. Veterans Capital Fund. The Veterans Corporation is also seeking to establish a venture capital fund to invest in both veteran-owned and other businesses. The fund will be structured as a Small Business Investment Company (SBIC), which is licensed by SBA and offers opportunities to leverage private equity investments with government-guaranteed funds. Once the fund is operational, The Veterans Corporation will own 10 percent of the limited partnership and 17.5 percent of the general partnership. According to an official at Equisource, a management investment firm that is acting as a placement agent for The Veterans Corporation, the fund will seek to invest partly, but not only, in veteran-owned businesses. The official said that The Veterans Corporation would use profits realized from the fund to provide for veteran programs and services. Figure 1 shows the status of The Veterans Corporation's key initiatives. Additionally, appendix II contains a chronology of The Veterans Corporation's key activities, and appendix III lists The Veterans Corporation's activities that address the statutory requirements of the Act. Officials at The Veterans Corporation describe their outreach as targeting all veterans, including service-disabled veterans. Generally, they do not have separate efforts for the service-disabled population. The officials, however, referenced efforts to make certain programs and services available to the service-disabled population. For example, the VET program reserves 10 spaces in each class for the service-disabled. In the first three VET courses completed, 19 of 64 graduates, or 30 percent, registered as service-disabled veterans. In another effort just recently launched, service-disabled veterans who purchase insurance products through The Veterans Corporation will receive an additional discount.
Officials further said that in the future, they would like to offer distance learning in their entrepreneurial training program to provide greater access for physically disabled veterans. The Veterans Corporation officials said that progress on programs has been hampered by their inability to collect information from government sources on military personnel transitioning to civilian life and on existing veteran-owned businesses. One of the officials explained that the success of programs such as The Veterans Marketplace and the VET program is largely dependent on their ability to identify and reach transitioning service members and veteran-owned businesses. The Veterans Corporation officials said that if they were not successful in obtaining these data, they would have to rely on developing data from attendance lists from their training and education programs and other available sources. Officials stressed that this would slow the development of a client database. The Veterans Corporation has requested information from DOD and VA, respectively, on (1) military service members nearing retirement or separation and (2) veteran-owned and service-disabled veteran-owned businesses. Both DOD and VA officials said that privacy laws prohibit them from providing personal information such as names and addresses of the military and veteran population. A DOD official stated that DOD policy prohibits it from releasing private information on enlisted personnel to any public or private organization. The DOD official further cited a November 9, 2001, memorandum for DOD Freedom of Information Act Offices that supports the withholding of personally identifiable information for security reasons in response to the events of September 11, 2001. VA's Office of the General Counsel (OGC) issued a legal opinion on December 12, 2002, which states that the Act does not direct the Secretary of VA to construct a database for use by The Veterans Corporation. Furthermore, VA's OGC stated that there are no provisions within existing confidentiality laws that would permit the sharing of information as proposed. However, in response to our draft report, VA concluded that it could disclose a list of names and addresses of veterans and their small businesses to the public, including The Veterans Corporation. Further, VA officials stated that arrangements are under way to make this information available on VA's Web site. However, it remains to be seen whether the information that will be available on VA's Web site will meet The Veterans Corporation's needs. The Veterans Corporation has obtained access to some government databases as well as other publicly available information on veterans—for example, SBA's Procurement Marketing and Access Network (PRO-Net) database, which contains information on veteran-owned businesses. The Veterans Corporation has also gained access to DOD's Central Contractor Registration (CCR) database, which contains information on prime contractors and subcontractors of the federal government. CCR contains over 200,000 business listings, of which 30,000 were listed as veteran-owned. DOD has required that The Veterans Corporation sign a standard nondisclosure agreement. However, an official at The Veterans Corporation said that the agreement contains language stating that they "shall not use such data for commercial purposes"; the agreement is currently under legal review at The Veterans Corporation.
According to SBA officials, PRO-Net is currently merging with CCR, and current registrants from both databases are being asked to reregister into the combined database. The Veterans Corporation has also utilized some publicly available information on veterans, but the information is in aggregate form and does not enable them to identify individuals seeking entrepreneurial assistance. According to officials at The Veterans Corporation, they were not aware of any public sources of data with names and addresses that could be used to identify veterans who may be seeking entrepreneurial assistance. For instance, The Veterans Corporation officials said they used public data from the VA Web site for information on where veterans live, by state and county and for age and gender. This information was used to help determine locations for VET classes and Veterans Business Success Seminars. Additionally, The Veterans Corporation identified a private data source that lists about 190,000 veteran-owned businesses. An official said that the private data source does not collect E-mail addresses and questioned whether the records have current mailing addresses. Officials said that this effort has been put on hold because it was not viewed as worth the $90,000 acquisition cost. The Veterans Corporation is required to work with and organize public and private resources, including those of the federal government. An official at The Veterans Corporation indicated that collaboration with other federal agencies has been limited because of other priorities at these agencies and because agencies are not required to carry out these multiagency initiatives. As stated previously, due to privacy issues The Veterans Corporation has had difficulty in obtaining data from DOD on military personnel transitioning to civilian life and from VA on veteran-owned businesses. The Veterans Corporation official suggested that a federal directive, such as a presidential executive order or Office of Management and Budget guidance, would help federal agencies understand The Veterans Corporation’s mission and provide the agencies with instructions for assisting in these efforts. Government officials with whom we spoke provided some examples of early collaboration with The Veterans Corporation. For instance, an SBA official stated that they have been active participants at board meetings, helped develop initiatives such as The Veterans Capital Fund (see fig. 1), and provided technical assistance. According to the official, SBA envisions that there will be additional, mutually beneficial relationships with other programs. A VA official stated that collaboration between VA and The Veterans Corporation has included establishing links on the respective Web sites, and invitations to speak at VA conferences. A DOD official also mentioned that the DOD Web site has a link to The Veterans Corporation’s Veterans Entrepreneurial Training Program. Officials at The Veterans Corporation said that progress on their programs was initially hampered by delays in management appointments for positions such as the Chief Executive Officer (CEO) and board members. The officials explained that much time was spent searching for a permanent CEO. The CEO was appointed in October 2001. Until August 2001, the staff at The Veterans Corporation were temporary employees, operating as contractors. Subsequently, the entire management team was hired in fiscal year 2002. 
Additionally, the Act called for the initial board members to be appointed by the President of the United States no later than 60 days after the legislation was enacted on August 17, 1999. The initial presidential appointments, however, did not occur until a year after enactment. Eight of the nine voting members were appointed between August and December 2000, while the ninth member was appointed in November 2001. Although initial board members had diverse backgrounds such as banking, engineering, and social services, Veterans Corporation and board officials said they would like to have board members with specific qualifications such as connections to corporations for fund-raising or political clout, experience on other boards of successful businesses, or first-hand entrepreneurial experience. Further, The Veterans Corporation staff believes that once government funding ends, they may benefit from a board whose voting members are not wholly presidentially appointed. They explained that the discretion to recruit board members from the private sector would allow The Veterans Corporation to augment the board’s membership with the required business expertise necessary for The Veterans Corporation’s long-term success. The Act does not include any specific rules or guidance for how The Veterans Corporation is to make the transition from a largely government-funded to a private, self-sufficient corporation. As one step in this transition, The Veterans Corporation has proposed that the Act creating the corporation be revised to give The Veterans Corporation input into the selection of the board after government funding ends. Specifically, the proposal calls for a board structure similar to that of Fannie Mae, a government-sponsored enterprise that engages in secondary loan market activity, in which only one-third of the directors are presidentially appointed. Officials at The Veterans Corporation have indicated that differences in interpretation regarding the legal status of The Veterans Corporation as either a public agency or private corporation have, at times, complicated organizational and program development efforts. The Veterans Corporation has obtained various legal opinions on its corporate legal status with respect to personnel and procurement requirements with differing results. They referenced an opinion from the Office of Personnel Management on whether the provisions of Title 5 of the U.S.C. applied to The Veterans Corporation. In a letter dated November 13, 2001, the Office of Personnel Management concluded that The Veterans Corporation was a government- controlled corporation and is subject to most provisions of Title 5, including provisions related to premium pay, awards, leave, and health benefits, among other things. In contrast, a law firm performing pro bono legal assistance to The Veterans Corporation—Fried, Frank, Harris, Shriver & Jacobson—issued a memorandum dated December 5, 2001, that stated “considering all the relevant factors, we believe that a court would find the NVBDC is not a Government-controlled corporation under 5 U.S.C. §103 to which the 5 U.S.C. 
§5373 pay cap applies." In another instance, a second law firm that also represents The Veterans Corporation—Hale and Dorr LLP—issued a memorandum dated April 2, 2002, that stated that The Veterans Corporation "does not meet the definition of an executive agency [executive department, military department, wholly-owned government corporation, or independent establishments] triggering FAR mandates for procurement." It is too early to determine the effectiveness of The Veterans Corporation's programs for the veteran population because the programs are relatively new and, in some cases, just under way. Officials indicated that they have broad performance measures for programs, such as participants' satisfaction ratings for the VET program, and quantitative measures, such as the number of credit cards and insurance policies issued and the dollar volume of transactions for The Veterans Marketplace, which are used to determine whether they are meeting early program objectives. The Veterans Corporation's business plan has outlined some corporate objectives for fiscal year 2003, including delivering VET programs to at least 500 veterans and transitioning military personnel. Other objectives identified in the business plan include constructing a database that contains accurate information on at least 250,000 veteran business owners and expanding the micro loan program nationwide. According to The Veterans Corporation officials, corporate objectives will be reviewed quarterly. As programs mature, The Veterans Corporation intends to assess program effectiveness periodically. Officials indicated that they do not yet have refined and tested measures to assess the extent to which their programs affect veterans who seek to develop or expand their own businesses. The officials explained that at this early stage, there is a lack of historical information against which to measure progress. Additionally, they plan to continue developing performance measures that assess overall program effectiveness. As mandated by the Act, The Veterans Corporation formed a Professional Certification Advisory Board (PCAB) to (1) create uniform guidelines and standards for the professional certification of military personnel transitioning to civilian occupations and (2) remove potential licensure and certification barriers. Officials from another certification group told us that veterans traditionally have a hard time transitioning into private-sector employment because prospective employers have difficulty understanding military experience and training. Private-sector employers are increasingly requiring proof or certification of certain skills. Licensing and certification are the two primary types of credentialing for individuals seeking civilian positions that are equivalent to enlisted military occupations. Occupations within the military that require private-sector certification or licensing include, among other things, automotive mechanic, dental assistant, electrician, flight engineer, medical laboratory technician, plumber, police officer, and truck driver. Licenses are granted by federal, state, and local government agencies, while certification is the process by which a nongovernmental agency, association, or private-sector company recognizes certain qualifications. PCAB officials agreed that the task at hand is quite large, involving multiple government entities. The PCAB held its first meeting in October 2001. Subsequent initial meetings were spent identifying the scope of the issues and the key players.
The PCAB meets quarterly and has 26 members who serve voluntarily. The board established three committees: (1) the Barriers Identification Committee, which is tasked with reviewing studies and research to identify barriers that affect transitioning military personnel; (2) the Information Clearinghouse Committee, which is responsible for obtaining and disseminating certification, licensure, and small business development information; and (3) the Research and Legislative Action Committee, which will analyze barriers and develop recommendations. According to the PCAB chairman, the committees are developing their goals and have not yet produced deliverables. The chairman explained that the Research and Legislative Action Committee would use information from the other two committees to develop recommendations. One PCAB member acknowledged that while progress has been slow, he was uncertain whether the PCAB committees could work any faster. He stressed that the task at hand is quite large and that the pace of work is dependent on the collective efforts of 26 members who serve on a voluntary basis. For instance, one of the PCAB's committees established to identify certification and licensing obstacles is looking at which of the 105 identified military occupations have barriers, and it is reviewing the licensing procedures of 53 states and jurisdictions. Some PCAB members also represent other certification groups, such as the Council of Licensure, Enforcement, and Regulation and the Commission for Certification in Geriatric Pharmacy. A few board members told us that representation from other certification efforts helps to avoid duplication and complements the efforts of other groups. For instance, one board member who also oversees the Department of Labor's "Use Your Military Experience and Training" (UMET) Web site on certification and licensing information stated that there is no overlap of effort. In fact, he said that the PCAB is utilizing UMET as a resource to obtain information on certification issues. Another board member, who also chairs VA's Professional Certification and Licensing Advisory Committee (PCLAC), agreed that the groups did not duplicate each other's efforts and explained that VA offers financial assistance to service members to cover the cost of certification, up to $2,000. PCLAC advises VA on the certification requirements that entities must meet in order to qualify for payment. A Veterans Corporation board member with whom we spoke identified some concerns about communication between the PCAB and The Veterans Corporation's board of directors, commenting, for instance, that there has been limited interaction between the two bodies. Others, including an official at The Veterans Corporation and a veterans group with whom we spoke, questioned whether The Veterans Corporation was the appropriate organization to carry out the PCAB's mission. They stated that the PCAB might distract The Veterans Corporation's management and board of directors from their principal activities. An official at The Veterans Corporation explained that producing uniform standards and guidelines for certification is a large and complicated task and is inconsistent with the overall goals of The Veterans Corporation, which are to provide entrepreneurial services. In its first 2 years of operations, The Veterans Corporation received $8 million in federal appropriations and spent about $4.7 million of the federal funds, primarily on start-up costs.
In fiscal year 2001, The Veterans Corporation spent about $985,000 for salaries, professional services, and other start-up costs. In fiscal year 2002, The Veterans Corporation spent approximately $3.7 million in appropriations for that year on expenditures related to establishing its programs, as well as salaries, professional services, and other start-up costs. The Veterans Corporation has implemented various controls over its obligation and expenditure payment processes, including limits on the ability of management officials to make check disbursements without board of director approval. According to The Veterans Corporation’s external auditor, The Veterans Corporation had internal control issues in fiscal year 2001. However, the external auditor determined that these deficiencies did not constitute material weaknesses and that all but one of the deficiencies had been corrected in fiscal year 2002. The Veterans Corporation’s management officials stated that their approach was to spend conservatively on program and operating expenses in the start-up period so that unused federal appropriations could be spent in future periods. During fiscal year 2001, The Veterans Corporation’s sole sources of funding were federal appropriations and related interest earnings. Of the $4 million in appropriations received during fiscal year 2001, it spent less than $1 million on start-up costs such as salaries, professional services, and other administrative costs. In fiscal year 2002, The Veterans Corporation spent approximately $3.7 million of its federal funds to establish its Veterans Marketplace—an on-line service for selling goods and services of veteran-owned businesses—as well as for other program activities, salaries, professional services, and other start-up costs. In fiscal year 2002, The Veterans Corporation also began to receive other revenue, such as cash pledges, contributed services, and in-kind contributions from nonfederal sources. As of September 30, 2002, The Veterans Corporation had approximately $3.3 million in unexpended federal appropriations—about 40 percent of its $8 million in total appropriations. The Veterans Corporation’s federal appropriations are provided on a “no year” basis; therefore, unused appropriations can be carried forward and applied to expenses in future fiscal years. Federal appropriations have been a major source of revenue to The Veterans Corporation since its inception. Beginning in fiscal year 2002, The Veterans Corporation recognized cash contributions and pledges of approximately $1.3 million and contributed services and in-kind contributions of approximately $1.5 million as revenue from other sources. Contributed services included legal services, Web site design, and use of a proprietary Web site. As a result, the federal appropriations used in fiscal year 2002 made up approximately 57 percent of The Veterans Corporation’s total revenues. Appendix IV provides more detail on The Veterans Corporation’s revenue and expenses for fiscal years 2001 and 2002. As shown in table 1, the Corporation incurred various start-up costs for its programs in 2001 and 2002. The Veterans Corporation’s expenses increased significantly in 2002, primarily because of its hiring of permanent employees and the fees related to establishing The Veterans Marketplace.
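The percentages cited above can be checked with simple arithmetic. The brief sketch below is an illustrative reconciliation using the rounded figures from the report; interest earnings and other small revenue items are omitted, so the results only approximate the reported 40 percent and 57 percent.

```python
# Back-of-the-envelope check of two percentages cited in the report, using
# rounded figures (in millions of dollars). Interest earnings and other small
# revenue items are omitted, so these are approximations only.
total_appropriations = 8.0    # fiscal years 2001 and 2002 combined
unexpended_sept_2002 = 3.3    # unexpended as of September 30, 2002
fy2002_federal_used = 3.7     # federal appropriations used in fiscal year 2002
fy2002_contributions = 1.3    # cash contributions and pledges recognized in 2002
fy2002_in_kind = 1.5          # contributed services and in-kind contributions

print(round(100 * unexpended_sept_2002 / total_appropriations))   # ~41 percent (reported as about 40)

fy2002_total_revenue = fy2002_federal_used + fy2002_contributions + fy2002_in_kind
print(round(100 * fy2002_federal_used / fy2002_total_revenue))    # ~57 percent of total revenues
```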
Since its inception, The Veterans Corporation has spent about $4.7 million of the $8 million total received to date in federal appropriations. In fiscal year 2001, The Veterans Corporation spent approximately $985,000 for salaries, professional services, and other start-up costs. In fiscal year 2002, The Veterans Corporation used federal funds to pay for expenses related to an on-line service for selling goods and services of veteran-owned businesses, as well as its other program activities, salaries, professional services, and other start-up costs. For further analysis of salaries, bonus, and payments to staff for fiscal years 2001 and 2002, see appendix V. Figure 2 shows The Veterans Corporation’s expenses for both fiscal years 2001 and 2002 by function (program, administrative, and fund-raising). Financial reporting under U.S. generally accepted accounting principles requires expenses by type and function. The majority of The Veterans Corporation’s federally funded functional expenses pertain to program activities—59 percent for fiscal year 2001 and 64 percent for fiscal year 2002. Fund-raising costs were less than 20 percent for both fiscal years: 3 percent for fiscal year 2001 and 13 percent for fiscal year 2002. Administrative costs were 39 percent for fiscal year 2001, which primarily represented legal fees and recruitment costs, and were 23 percent for fiscal year 2002, which primarily represented salaries and board expenses. As The Veterans Corporation’s operations expand, we expect that the amount of program activities relative to total expenses will grow and the ratio of administrative and fund-raising to total expenditures will decrease. The board of directors is required to prescribe the manner in which the obligations of The Veterans Corporation may be incurred and how its expenses are allowed and paid. To fulfill this responsibility, the board approved a financial policy in December 2000, before it received its first appropriations; officials of The Veterans Corporation were unable to locate the text of the policy. However, minutes from the March 2001 board meeting show that the board established initial disbursement authority for executive-level staff in March 2001, the same month in which they were hired. The board authorized the acting CEO and the acting associate director to sign checks, drafts, or orders (1) in amounts no greater than $50,000 without further action of the board; (2) in amounts greater than $50,000 but less than $100,000 with the additional signature of one member of the executive committee; and (3) in amounts greater than $100,000 with the additional signature of one member of the executive committee and to notify all board members in writing of the disbursement, at least 7 days prior to issuance for checks, drafts, or orders. Minutes of an executive committee meeting in May 2001 show that the executive committee reduced the limit on expense authority from $50,000 to $10,000. All amounts in excess of $10,000 would require the signature of one executive committee member and also require notification to the chair of the executive committee. In January 2002, the board again amended the expense authority based upon a proposal of the Chief Financial Officer (CFO). Since January 2002, the board has retained authority to approve expenditures in excess of $25,000 and has delegated disbursement authority to executive-level staff. 
For example, the board authorizes the CEO to disburse up to $25,000 per transaction; single transactions in excess of $25,000 and contracts with a total value greater than $25,000 require the approval of either the executive committee or the full board of directors. In addition, the board resolved that checks written in amounts of $5,000 or less require one authorized signature; those in excess of $5,000 require two authorized signatures. Both the CEO and the Managing Director of Operations are authorized to sign checks. According to The Veterans Corporation’s external auditor, The Veterans Corporation had internal control issues that could have adversely affected its ability to administer a major federal program in accordance with applicable laws, regulations, contracts, and grants. However, the external auditor determined that these conditions did not cause The Veterans Corporation to misrepresent its financial condition or operating results for fiscal year 2001. Specifically, the external auditor found in its fiscal year 2001 audit that The Veterans Corporation did not (1) reconcile bank accounts on a timely basis or segregate cash duties; (2) maintain adequate internal controls surrounding payroll processing; (3) provide supporting documentation marked with an indication of review, approval, and payment for all cash disbursements; or (4) maintain a filing system for accounting records. The external auditor classified these internal control matters as reportable conditions and did not identify any instances of material weaknesses, which would indicate a potentially greater detrimental effect on an entity’s internal controls. These reportable conditions were detailed in a letter to management. The partner of The Veterans Corporation’s external auditor, who oversaw the audit, stated that such accounting deficiencies are not unusual for start-up small businesses. According to The Veterans Corporation’s external auditor, the reported deficiencies had been addressed in fiscal year 2002, with one exception—reconciliation of bank accounts on a timely basis. To address the requirement to become a self-sustaining entity, The Veterans Corporation has developed a plan to become self-sufficient based on four major sources of revenue—an electronic marketplace, a credit card program, an insurance program, and fund-raising. According to an official at The Veterans Corporation, the revenue assumptions were developed based on discussions and input from its partners, such as eScout, Advanta, and Aon Financial Institution Alliance. Revenue assumptions contained in the self-sufficiency plan cover fiscal years 2003 and 2004. At the time of our review, three of the four efforts—the electronic marketplace and the credit card and insurance services—were just starting to produce revenue. According to the CFO, fund-raising goals are targeted toward supporting education and training efforts. In fiscal year 2002, The Veterans Corporation raised approximately $2.8 million to satisfy federal matching requirements. Additionally, the plan calls for quarterly reviews to assess targeted projections. Officials said that if projections are not met, unsuccessful programs may be discontinued and alternative revenue sources will be developed. The Act requires that The Veterans Corporation raise private funds and become a self-sustaining corporation. The Veterans Corporation has implemented a plan to achieve financial self-sufficiency by September 30, 2004, that is based on four major sources of revenue: Veterans Marketplace.
According to the plan, The Veterans Marketplace is expected to generate the greatest share of revenue—approximately 43 percent—to The Veterans Corporation in fiscal year 2004, the final fiscal year of federal funding. The revenue sharing agreement between The Veterans Corporation and eScout, which operates the on-line marketplace, allows for The Veterans Corporation to collect 49 percent of revenues received from on-line purchases and other transactional services purchased by members of The Veterans Marketplace, as well as 20 percent of the fees paid by members who access products. Veterans Platinum BusinessCard. About 19 percent of fiscal year 2004 revenue will come from the credit card program for each new activated account as well as a share (0.2 percent) of eligible purchases made with the card. Veterans Affinity Insurance Program. Approximately 19 percent of revenue will come from sales of business insurance and other products to veteran-owned businesses. According to its agreement with Aon Financial Institution Alliance, The Veterans Corporation receives commissions or fees, which are structured differently for each insurance product. Fund-raising. The Veterans Corporation has implemented a multiyear, multimillion-dollar, fund-raising campaign primarily to support The Veterans Entrepreneurial Training program. The self-sufficiency plan includes only a part of their fund-raising goals (15 percent of funds raised that are retained for overhead costs) plus any interest income. In fiscal year 2004, this is expected to account for 19 percent of revenue. Although The Veterans Corporation has other initiatives under way that are expected to generate revenue, such as The Veterans Capital Fund or micro loan program, they were not considered to be primary revenue sources for meeting self-sufficiency. The CFO at The Veterans Corporation said that the revenue assumptions were based on input from partners that operate similar programs. For instance, revenue assumptions for The Veterans Marketplace were based on a discussion with eScout personnel on (1) building similar private exchanges and (2) customer and revenue projections. The process was similar for the credit card and insurance programs, and included discussions with Advanta and Aon Financial Institution Alliance, respectively. The official indicated that both Advanta and Aon were reluctant to offer revenue projections, but they provided enough information to enable The Veterans Corporation to project revenue. The self-sufficiency plan is based on revenue assumptions over fiscal years 2003 and 2004. It is too early to determine if The Veterans Corporation will become financially self-sufficient by September 30, 2004. At the time of our review, three of its efforts were just beginning to produce revenue. For instance, The Veterans Marketplace, while operational since June 2002, was in the process of building a customer list. The other two efforts, the credit card and insurance services, were just launched in January 2003 and December 2002, respectively. Further, according to the plan, total revenue from these activities is not expected to exceed expenses until the fourth quarter of fiscal year 2004. Because The Veterans Corporation’s federal appropriations are provided on a “no year” basis, unused appropriations can be carried over into future fiscal years and, thus, are available to cover future years’ expenses. 
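To make the revenue-sharing terms described above concrete, the sketch below applies them to hypothetical volumes. The dollar inputs and function names are illustrative assumptions, not figures from The Veterans Corporation's plan, and the per-account credit card fee is treated as an input because the report does not state its amount.

```python
# Illustrative only: applies the revenue-sharing percentages described in the
# report (49% of Marketplace transaction revenue, 20% of member access fees,
# and 0.2% of eligible card purchases) to hypothetical volumes. The dollar
# amounts below are made-up inputs, not figures from the self-sufficiency plan.

def marketplace_revenue(transaction_revenue, member_access_fees):
    """Corporation's share of revenue from the eScout-operated Veterans Marketplace."""
    return 0.49 * transaction_revenue + 0.20 * member_access_fees

def credit_card_revenue(new_accounts, fee_per_account, eligible_purchases):
    """Per-account fee (amount not stated in the report, so treated as an input)
    plus a 0.2 percent share of eligible purchases made with the card."""
    return new_accounts * fee_per_account + 0.002 * eligible_purchases

if __name__ == "__main__":
    # Hypothetical volumes for a single year.
    print(marketplace_revenue(transaction_revenue=1_000_000, member_access_fees=200_000))   # 530000.0
    print(credit_card_revenue(new_accounts=5_000, fee_per_account=10, eligible_purchases=2_000_000))  # 54000.0
```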
Because appropriations carry over, an official at The Veterans Corporation stated that the corporation expects to have a surplus of funds at the end of the fourth year of government support that, if necessary, would cover its operating costs in the following year. The Veterans Corporation has a fund-raising goal of $2.5 million in fiscal year 2003 and $3 million in fiscal year 2004 to support education and training efforts, primarily the VET program. In fiscal year 2002, The Veterans Corporation raised about $2.8 million, exceeding its goal of $2 million. To help raise funds, the corporation contracted with Changing Our World, a fund-raising organization, and is establishing a fund-raising advisory board of approximately 12 to 15 individuals. A Veterans Corporation official explained that it initially planned to rely on fund-raising to support operations until other revenue sources were in place, but the corporation refocused in light of current economic conditions and limited success in raising funds for operations. The corporation’s revised fund-raising strategy focuses on financing VET program costs. The official further explained that money raised would be used for direct program expenses and not for The Veterans Corporation’s administrative expenses. VET course administration and materials cost The Veterans Corporation about $1,850 per student, of which enrollees pay $350. As identified in its business plan, the VET corporate objective for fiscal year 2003 is to deliver the program to 500 participants. While The Veterans Corporation only had two sources of income for fiscal year 2001, which were federally appropriated funds and the interest earned on them, sources of income for fiscal year 2002 included federal appropriations and interest income plus cash donations, pledges for future cash donations, contributed services, in-kind donations, contract revenue from the federal government, and other sources. It is important to note, however, that approximately $1.2 million of The Veterans Corporation’s fiscal year 2002 revenues were pledges for future payments to The Veterans Corporation. Figure 3 shows The Veterans Corporation’s income for fiscal year 2002, exclusive of federally appropriated funds and interest earned on those funds; the largest category shown is contributed services and in-kind contributions (approximately $1.5 million). (Percentages in the figure may not total to 100 because of rounding.) Most of the other funds raised in fiscal year 2002 were in the form of contributed services, such as legal services and the ability to provide the EntreWorld on-line library through The Veterans Corporation’s Web site at no cost to The Veterans Corporation, as well as pledges for future payments of cash. Ten pledges were made, two of which are collectible over a period of 10 years. The Veterans Corporation raised approximately $66,000 in cash, $5,100 in contract revenue from the federal government, and $5,900 in other funds in fiscal year 2002. The Veterans Corporation intends to evaluate the self-sufficiency plan on a quarterly basis to assess whether its strategies are sufficient to meet targeted projections. The CFO of The Veterans Corporation said that management would review the progress of the plan, including decisions to discontinue unsuccessful programs. In the event that projections are not met for 2003, a Veterans Corporation official stated that the corporation would then consider alternative revenue sources to allow it to meet its self-sufficiency goal.
In addition, officials at The Veterans Corporation told us that they continuously look for potential business opportunities to complement their efforts and have had some early discussions on other possible ventures. We received written comments on a draft of this report from The Veterans Corporation. We also obtained technical comments from SBA and VA that have been incorporated into this report where appropriate. The Veterans Corporation commented that their programs have broad measures, quantitative and/or qualitative, that are used to assess early program objectives. In addition, corporation representatives pointed out that they have not yet refined and tested measures to assess whether their programs ultimately have a positive effect on veterans who own or want to start their own businesses. We discussed this issue with The Veterans Corporation and obtained additional documentation supporting these broad measures and noted this in the report. Representatives of The Veterans Corporation expressed their concern with the inability to obtain information about transitioning service members and Veterans from federal agencies. In response to our draft report, VA concluded that they could disclose a list of names and addresses of veterans and their small businesses to the public, including The Veterans Corporation. Further, VA officials stated that arrangements are under way to make this information available on their Web site. However, it remains to be seen whether the information that will be available on VA’s Web site will meet The Veterans Corporation’s needs. The Veterans Corporation reiterated that the Professional Certification Advisory Board would be more appropriately led by an entity other than The Veterans Corporation and that it has not been provided adequate funding or appropriate authority to achieve the goal of creating uniform standards for professional certification. However, The Veterans Corporation stated their commitment to carrying out the Professional Certification Advisory Board’s mission as mandated in the Act. In reference to The Veterans Corporation’s reported accounting deficiency for fiscal year 2002, it submitted a copy of management’s response, which outlines the steps that it plans to take in response to this issue. We will send copies of this report to interested congressional committees and the President and CEO of The Veterans Corporation. We will make copies available to others on request. In addition, this report will also be available at no charge on our homepage at http://www.gao.gov If you or your staff have any questions on this report, please contact me at (202) 512-8678, [email protected] or Harry Medina at (415) 904-2000, [email protected]. Key contributors are listed in appendix VII. To describe The Veterans Corporation’s efforts in providing small business assistance to veterans, we collected and analyzed program information such as planning documents, contracts, legal opinions, and program literature. Additionally, we interviewed staff and board officials from The Veterans Corporation, as well as partnering organizations including officials from eScout, Changing Our World, Equisource, and Southern Financial Bank. We also interviewed officials from federal agencies, including the Small Business Administration, Department of Defense, Department of Veterans Affairs, and Department of Labor, and officials from a veteran service organization, the Vietnam Veterans of America, as well as a consultant—Halsey, Rains, and Associates. 
To meet our objective to describe The Veterans Corporation’s use of and controls over federal funds, we (1) obtained and analyzed The Veterans Corporation’s fiscal year 2001 and 2002 financial statements and audit reports and the management letter for 2001 (we did not evaluate the quality of the other auditor’s work on the financial statements or conduct our own tests of the financial statement balances); (2) reviewed The Veterans Corporation’s contract with the external auditor for the 2002 financial statement audit to understand the nature of the audit services to be provided and the extent of the auditor’s proposed work on internal control; (3) obtained and reviewed minutes of meetings of the board of directors and the board’s executive committee to determine the board’s policies as they related to the disbursement and use of federal funds; (4) communicated with The Veterans Corporation’s external auditor to, among other things, determine the extent of financial management deficiencies in The Veterans Corporation; and (5) interviewed the Chief Financial Officer (CFO) of The Veterans Corporation. To determine what efforts The Veterans Corporation made to become financially self-sufficient, we reviewed their self-sufficiency plan and discussed it with The Veterans Corporation’s CFO. We did not independently assess the financial assumptions presented in the plan. Key events in The Veterans Corporation’s establishment and early operations include the following: the report of the Small Business Administration Veterans Affairs Task Force for Entrepreneurship, “Leading the Way: What Veterans Need From the SBA,” is presented to Congress; the Veterans Entrepreneurship and Small Business Development Act (Public Law 106-50) is enacted; the National Veterans Business Development Corporation is incorporated in the District of Columbia; the President appoints eight board members; the first Board of Directors meeting is held in September 2000; The Veterans Corporation receives initial federal funding ($4 million); The Veterans Corporation receives a second installment of federal funding ($4 million); Charles R. Henry is hired as CEO and president; and the Professional Certification Advisory Board (PCAB) holds its first meeting. The Veterans Corporation’s key responsibilities include the following: assist veterans, including service-disabled veterans, with the formation and expansion of small businesses; organize public and private resources, including those of federal agencies; establish and maintain a network of information and assistance centers for use by veterans and the public; establish the Professional Certification Advisory Board; assume the duties, responsibility, and authority of the Advisory Committee on Veterans Affairs on October 1, 2004; institute and implement a fund-raising and self-sufficiency plan; raise matching funds to fulfill conditions for receipt of federal funds; transmit an annual report to the President and to Congress; and provide for Board of Directors oversight of the Corporation’s obligations and expenses. As noted in table 2, The Veterans Corporation received federal appropriations of $4 million in each of fiscal years 2001 and 2002 and used approximately $1 million and $3.7 million in fiscal years 2001 and 2002, respectively. At the end of fiscal years 2001 and 2002, The Veterans Corporation had approximately $3 million and $3.3 million, respectively, in unexpended appropriations. As shown in table 3, federal appropriations were the major source of revenue to The Veterans Corporation in fiscal years 2001 and 2002. Beginning in fiscal year 2002, The Veterans Corporation began to realize revenue from cash contributions and pledges, as well as contributed services and in-kind contributions.
The Veterans Corporation reported approximately $1.3 million in cash contributions and pledges in 2002 as revenue. The majority of the revenue, $1.2 million, pertained to unconditional pledges that The Veterans Corporation recognized as temporarily restricted revenue when the corporation was notified of the pledges. The Veterans Corporation recorded the pledges it expects to receive in future years as contributions receivable at their present value in accordance with U.S. generally accepted accounting principles for not-for-profit organizations. See table 4 for a schedule of The Veterans Corporation’s contributions receivable as of September 30, 2002. Table 5 presents The Veterans Corporation’s federally funded expenses by functional area for fiscal years 2001 and 2002. Expenses related to program activities represent the majority of the Corporation’s expenses, and we expect them to grow as a share of total expenditures while the percentages for fund-raising and administrative expenses decrease over time. Table 6 shows The Veterans Corporation’s aggregate compensation amounts for executive management and all other staff for fiscal years 2001 and 2002. Executive management comprised six employees and all other staff consisted of 13 employees; however, not all staff were employed concurrently. For fiscal year 2001, the data are disaggregated by salary and payments to contract workers for the provision of services. Prior to August 2001, the board of The Veterans Corporation did not hire permanent employees. Instead, it executed contracts with individuals to provide services. These payments are represented as payments to contract workers, as shown in table 6 below. For fiscal year 2002, the salary data are disaggregated by wage and bonus payments. In addition to the persons named above, Janet Fong, Jeanette M. Franzel, Marc W. Molino, Charles E. Norfleet, Julie T. Phillips, Barbara M. Roesmann, Kathryn M. Supinski, and Paul G. Thompson made key contributions to this report.
| The Veterans Entrepreneurship and Small Business Development Act of 1999 (Act) created the National Veterans Business Development Corporation (The Veterans Corporation) to address perceived gaps in providing small business and entrepreneurship assistance to veterans. The Act requires GAO to review The Veterans Corporation. GAO described The Veterans Corporation's (1) efforts to provide small business assistance to veterans, including service-disabled veterans; (2) use of and controls over federal funds in providing these services; and (3) efforts to become financially self-sufficient. The Veterans Corporation is providing veterans with entrepreneurial training, on-line educational resources, micro loans, business insurance, and an on-line marketplace. The Veterans Corporation identified initial challenges that slowed program progress, including getting information on transitioning military personnel and veteran-owned businesses, and delays in making management appointments. Because the programs are new, it is too early to determine their effectiveness. During its first 2 years of operation, The Veterans Corporation spent about $5 million of the $8 million in total federal appropriations: about $1 million in fiscal year 2001 and about $4 million in fiscal year 2002, with the largest part of the increase due to salaries and program costs. An external audit for fiscal year 2001 identified internal control issues, such as the lack of adequate supporting documentation for disbursements and untimely reconciliation of bank accounts. According to the external auditor, all but one of the deficiencies was addressed in 2002. The Veterans Corporation has developed a financial self-sufficiency plan based on four major revenue sources--an on-line marketplace, a credit card program, an insurance service program, and fund-raising. At the time of GAO's review, most of these efforts were just beginning to produce revenue. According to the plan, The Veterans Corporation is not expected to achieve self-sufficiency until the fourth quarter of fiscal year 2004. If outcomes do not meet projections, Veterans Corporation officials stated that they would explore alternatives. |
The Missile Defense Agency’s mission is to develop an integrated and layered BMDS to defend the United States, its deployed forces, allies, and friends. In order to meet this mission, MDA is developing a highly complex system of systems—land, sea and space based sensors, interceptors and battle management. Since its initiation in 2002, MDA has been given a significant amount of flexibility in executing the development and fielding of the BMDS. To enable MDA to field and enhance a missile defense system quickly, the Secretary of Defense in 2002 delayed the entry of the BMDS program into the Department of Defense’s traditional acquisition process until a mature capability was ready to be handed over to a military service for production and operation. Therefore, the program concurrently develops, tests and fields assets. This approach helped MDA rapidly deploy an initial capability. On the other hand, because MDA can field assets before all testing is completed, it has fielded some assets whose capability is uncertain. Because MDA develops and fields assets continuously, it combines developmental testing with operational testing. In general, developmental testing is aimed at determining whether the system design will satisfy the desired capabilities; operational testing determines whether the system is effective, survivable, and suitable in the hands of the user. MDA conducts testing both on the ground and in flight. The most complex of these is an end-to-end flight test that involves a test of all phases of an engagement including detecting, tracking and destroying a target with an interceptor missile. An end-to-end intercept involves more than one MDA element. For example, a recent intercept test involved a target flown out of Kodiak, Alaska, tracked by the AN/TPY-2 radar located in Alaska, and the Beale upgraded early warning radar located in California, the Sea-based X-band radar and an Aegis radar located at different points in the Pacific. All of the radars communicated with fire control centers in Alaska to guide an interceptor launched from California to hit the target over the Pacific Ocean. Due to the complexity, scale, safety constraints, and cost involved, MDA is unable to conduct a sufficient number of flight tests to fully understand the performance of the system. Therefore, MDA utilizes models and simulations, anchored by flight tests, to understand both the developmental and operational performance of the system. To ensure confidence in the accuracy of modeling and simulation the program goes through a process called accreditation. The models are validated individually using flight and other test data and accredited for their intended use. Models and simulations are used prior to a flight test to predict performance, the flight test is then run to gather data and verify the models, and then data is analyzed after the flight and reconstructed using the models and simulations to confirm their accuracy. MDA intends to group these models into system-level representations according to user needs. One such grouping is the annual performance assessment, a system-level end-to-end simulation that assesses the performance of the BMDS configuration as it exists in the field. The performance assessment integrates element-specific models into a coherent representation of the BMDS. Fundamentally, performance assessments anchored by flight tests are a comprehensive means to fully understand the performance capabilities and limitations of the BMDS. 
In addition to testing, modeling and simulation, and performance assessments, MDA also has a formal process for determining when a newly fielded asset or group of assets can be declared operational—that is, cleared for use by the warfighter in operational situations. MDA uses a variety of information as the basis for assessing whether a new capability is ready for declaration. For example, MDA defines in advance the tests, models, and simulations on which it will base a specific decision about whether an asset or capability can be declared ready for fielding. Each capability so designated represents an upgraded capacity to support the overall BMDS mission, as well as the level of MDA's confidence in the system's performance. To assess testing-related progress in fiscal year 2008, we examined the accomplishments of ten BMDS elements that MDA is developing and fielding. Our work included examining documents such as Program Execution Reviews, test plans and reports, and production plans. We also interviewed officials within each element program office and within MDA functional directorates. In addition, we discussed each element's test program and its results with DOD's Office of the Director, Operational Test and Evaluation. We also interviewed officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. MDA continues to experience difficulties achieving its goals for testing. During fiscal year 2008, while several tests showed progress in individual elements and some system-level capabilities, all BMDS elements experienced test delays or shortfalls. Most were unable to accomplish all objectives, and performance challenges continued for many. Table 1 summarizes test results and target performance for the BMDS elements during the year. Because of delays in flight tests and a key ground test, MDA was unable to achieve any of the six knowledge points the MDA Director had scheduled for fiscal year 2008. In May 2007, the MDA Director established key system-level and element-level knowledge points, each based on an event that was to provide critical information—or knowledge—for a decision requiring his approval. For example, two knowledge points that MDA had to defer because of testing problems were confirmation of a new target's performance and assessment of the SM-3 Block 1A missile's ability to engage and intercept a long-range target. GMD in particular continues to experience testing problems and delays. Based on its September 2006 plan, MDA had expected to conduct 7 GMD interceptor flight tests from the start of fiscal year 2007 through the first quarter of fiscal year 2009. MDA, however, was able to conduct only two, as shown in figure 1. GMD was unable to conduct either of its planned intercept attempts during fiscal year 2008: FTG-04 and FTG-05. MDA first delayed and then later cancelled the FTG-04 test in May 2008 due to a problem with a telemetry component in the interceptor's Exoatmospheric Kill Vehicle. The cancellation of FTG-04 removed an important opportunity to obtain end-game performance data needed to develop GMD models and to verify the capability of the fielded Capability Enhancement I (CE-I) EKV. Moreover, MDA planned to test the CE-I EKV against a dynamic target scene with countermeasures in both the FTG-04 and FTG-05 flight tests. However, since FTG-04 was cancelled and the target failed to release the countermeasure in FTG-05, the fielded CE-I's ability against countermeasures still has not been verified.
According to MDA no more CE-I EKV flight tests have been approved. The test delays led MDA to restructure its flight test plan for fiscal year 2009, increasing the number of tests, compressing the amount of time to analyze and prepare for subsequent tests, and increasing the scope of individual tests. For example, MDA plans to conduct 14 of 18 flight tests in the third and fourth quarter of fiscal year 2009. Past testing performance raises questions about whether this is realistic. In fiscal year 2008, MDA had planned to conduct 18 flight tests, but it only accomplished 10, and delayed several flight tests into 2009. In the next GMD end-to-end flight test—FTG-06 in fourth quarter fiscal year 2009 to first quarter fiscal year 2010 —MDA is accepting a higher level of risk than it previously expected in conducting this first test of an enhanced configuration of the Kill Vehicle called the Capability Enhancement II (CE-II) because it will include several objectives that had planned to be previously tested, but have not been. For example, the FTG-06 flight test will be the first GMD test assessing both a CE-II EKV and a complex target scene. Adding to the risk, it will be only the second test using a newly developed FTF LV-2 target. Moreover, MDA in January 2008 had merged FTG-06 and FTG-07, thereby eliminating an additional opportunity to gather important information from an intercept. FTG-07 will instead be an intercept test of the two-stage interceptor intended for the European site. Problems with the reliability and availability of targets (which are themselves ballistic missiles) have increasingly affected BMDS development and testing since 2006. As MDA recently acknowledged, target availability became, in some cases, a pacing item for the overall test program. As was noted in Table 1, problems with targets have reduced testing of GMD, Sensors, and THAAD during 2008. Repeated target problems and test cancellations have particularly reduced opportunities to demonstrate the ability of sensors to discriminate the real target from countermeasures. In the mid-course of flight, a more sophisticated threat missile could use countermeasures in an attempt to deceive BMDS radars and interceptor sensors as to which is the actual reentry vehicle. In order to improve the effectiveness of the BMDS against evolving threats, MDA elements are developing advanced discrimination software in their component’s sensors to distinguish the threat reentry vehicle from countermeasures and debris. The cancellation of FTG-04 and subsequent target problems during FTX-03 and FTG-05 prevented opportunities to gather data to test how well discrimination software performs in an operational environment. The current fielded configuration of the GMD kill vehicle has not been tested against countermeasures. To address the growing need for more sophisticated and reliable targets for the future BMDS test program, MDA has been developing a new set of targets called the Flexible Target Family (FTF), which was intended to provide new short, medium, and long-range targets with ground, air, and sea launch capabilities. It was viewed as a family in the sense that the different target sizes and the variants within those sizes would use common components. MDA embarked on this major development without estimating the cost to develop the family of target missiles. MDA proceeded to develop and even to produce some FTF targets without a sound business case and, consequently, their acquisition has not gone as planned. 
The funds required for the FTF were spent sooner than expected and were insufficient for the development. Development of all FTF sizes and variants has been discontinued except for the 72-inch diameter ground-launched target, referred to as the LV-2. With guidance from the Missile Defense Executive Board, MDA is currently conducting a comprehensive review of the targets program to determine the best acquisition strategy for future BMDS targets. It is expected to be completed in mid-2009. Whether or not MDA decides to restart the acquisition of the 52-inch diameter targets, or other FTF variants, depends on the results of this review. The process of qualifying FTF target components for the LV-2 was more difficult than expected. While many of the LV-2’s components are found on existing systems, their form, fit, function, and the environment they must fly in are different. Consequently, many critical components initially failed shock and vibration testing and other qualification tests and had to be redesigned. MDA has acknowledged that the component qualification effort ran in parallel with design completion and initial manufacturing. So far, the resultant delays in the LV-2 target have had two consequences. First, a planned test flight of the LV-2 itself for the Space Tracking and Space Surveillance program was delayed and instead its first flight will be as an actual target for an Aegis BMD intercept. Second, because the LV-2 was not ready, that Aegis intercept test was deferred from fiscal year 2008 to third quarter fiscal year 2009. In addition to delaying progress on individual elements, testing problems have had other consequences for BMDS. Specifically, the reduced productivity of testing has delayed understanding the overall performance of BMDS, production and fielding have in some cases gotten ahead of testing, and declarations of capabilities ready for fielding have been made based on fewer tests and less modeling and simulation than planned. The overall performance of the BMDS cannot yet be assessed because MDA lacks a fully accredited end-to-end model and simulation capability and, according to the BMDS Operational Test Agency, it will not have that capability until 2011 at the earliest. The lack of sufficient flight test data has inhibited the validation of the models and simulations needed for the ground tests and the simulation. MDA’s modeling and simulation program enables it to assess the capabilities and limitations of how BMDS performs under a wider variety of conditions than can be accomplished through the limited number of flight tests conducted. Flight tests alone are insufficient because they only demonstrate a single collection data point of element and system performance. Flight tests are, however, an essential tool used to both validate performance of the BMDS and to anchor the models and simulations to ensure they accurately reflect real performance. Computer models of individual elements replicate how those elements function. These models are then aggregated into various combinations that simulate the BMDS engagement of enemy ballistic missiles. Developing an end-to-end system-level model and simulation has been difficult. MDA’s first effort to bring together different element models and simulations to produce a fully accredited, end-to-end model and simulation was for the first annual performance assessment of the fielded BMDS configuration in 2007. 
Performance Assessment 2007 was unsuccessful primarily because of inadequate data, particularly flight test data, for verification and validation to support accreditation. Instead, Performance Assessment 2007 used several models and simulations that represented different aspects of the BMD system and were not fully integrated. Consequently, acting on a joint recommendation between MDA and the Operational Test Agency, MDA officials cancelled the 2008 performance assessment in April 2008 because of developmental risks associated with modeling and simulations, focusing instead on testing and models for Performance Assessment 2009. According to the BMDS Operational Test Agency’s January 2009 Modeling and Simulation accreditation report, confidence in MDA’s Modeling and Simulation efforts remains low although progress was made during the year. Out of 40 models, the BMDS Operational Test Agency recommended in January 2009 full accreditation for only 6 models, partial accreditation for 9 models, and no accreditation for 25 models. MDA is now exercising stronger central leadership to provide guidance and resources as they coordinate the development of verified and validated models and simulations. MDA intends to verify and validate models and simulations by December 2009 for Performance Assessment 2009. However, BMDS Operational Test Agency officials stated that there is a high risk that the performance assessment 2009 analysis will be delayed because of remaining challenges and MDA’s delayed progress in accreditation. MDA does not expect to have a single end-to-end simulation for use in performance assessments until 2010. Testing problems have contributed to a concurrent development, manufacturing and fielding strategy in which assets are produced and fielded before they are fully demonstrated through testing and modeling. For example, although a test of the ability of the SM-3 Block 1A missile to engage and intercept a long range ballistic target was delayed until the third quarter of fiscal year 2009, MDA purchased 20 of the missiles in fiscal year 2008 ahead of schedule. While the GMD program has only been able to conduct two intercepts since 2006 for assessing the fielded configuration, the production of interceptors has continued. From the beginning of fiscal year 2007 through the first quarter of fiscal year 2009, MDA planned to conduct 7 flight tests and field 16 new ground-based interceptors. The plan included a test that would utilize two ground-based interceptors against a single target, known as a salvo test. By January 2009, GMD had conducted only 2 flight tests and dropped the salvo test; yet it fielded 13 ground-based interceptors. Moreover, the GMD program had planned to conduct an intercept test to assess the enhanced version of the EKV called the Capability Enhancement II (CE-II) in the first quarter of fiscal year 2008, months before emplacing any interceptors with this configuration. However, developmental problems with the new configuration’s inertial measurement unit and the target delayed the first flight test with the CE-II configuration—FTG-06—until at least fourth quarter fiscal year 2009. Despite these delays, emplacements will proceed; MDA expects to have emplaced five CE-II interceptors before this flight test. More importantly, GMD projects that the contractor will have manufactured and delivered 10 CE-II EKVs before that first flight test demonstrates the CE-II capability. This amounts to over half of the CE-II EKV deliveries that are currently under contract. 
When MDA determines that a capability can be considered for operational use it does so through a formal declaration. MDA bases its declarations on, among other things, a combination of models and simulations—such as end-to-end performance assessments (from missile launch to attempted intercept)—and ground tests all anchored to flight test data. In fiscal year 2008, MDA declared it had fielded 7 of 17 BMDS capabilities planned for 2008 (postponing 10). In doing so MDA largely reduced the basis for the declarations due in part to test problems and delays. Specifically, MDA had intended to use a GMD flight test that was cancelled, a key ground test that was delayed and a performance assessment that was cancelled. MDA had to shift the basis of the 7 declarations to previous flight and ground tests. MDA has undertaken a three-phase review of the entire BMDS modeling, simulation, and test program. According to MDA, the three phases involve identifying critical variables that have not been proven to date, determining what test scenarios are needed to collect the data, and developing an affordable and prioritized schedule of flight and ground tests. MDA intends to complete all three phases of the review by May 2009. At this point, our knowledge of the review is limited, as we have only had an introductory briefing on it. Nonetheless, the review appears to offer a sound approach for closing the gaps that exist between testing, modeling, and simulation. Further, the involvement of test and evaluation organizations is encouraging. While sound, the success of this approach hinges on providing sufficient resources, ensuring robustness, and anticipating contingencies. In addition to linking the critical modeling and simulation variables with test events, the review will have to address the factors that have limited the productivity of the current test approach, such as the availability and performance of targets. MDA’s current approach to testing could be characterized as a just-in-time approach to having the test assets, such as targets, ready. This left little margin to solve issues that arise leading up to the tests. Accordingly, the third phase of MDA’s new approach—properly resourcing the tests with sufficient time, funding and reliable targets—will be key. MDA has indicated that its revision will result in a more robust test plan, providing more margin to conduct the tests through, for example, having spare interceptors and targets available. Other contingencies that a new approach to modeling, simulation, and testing should anticipate include unexpected or incomplete test results, and problems in accrediting the models that are needed for aggregated simulations, such as performance assessments. An important consideration in this regard is for modeling, simulation, and testing events to be re-synchronized so that they properly inform decisions on producing, fielding, and declaring assets operational. Contingency plans could then be formed for adjusting the pace of these decisions should shortfalls occur in modeling, simulation, or testing. MDA has indicated that this new approach to testing will take time to implement, with partial implementation in fiscal year 2010 and full implementation not occurring until fiscal year 2011. Therefore, MDA must manage the transition to the new testing approach. 
In particular, the ambitious fiscal year 2009 flight test plan may need to be reassessed with the goal of establishing a robust series of tests that can withstand some delays without causing wholesale changes to the test plan during the transition. In the mean time, MDA will have to be prudent in making decisions to produce and field additional assets. Our annual report on missile defense is in draft and with DOD for comment. It will be issued in final by March 13, 2009. In that report, we are recommending additional steps to further improve the transparency, accountability, and oversight of the missile defense program. Our recommendations include actions to improve cost reporting as well as testing and evaluation. DOD is in the process of preparing a formal response to the report and its recommendations. Madame Chairman, this concludes my statement. I would be pleased to respond to any questions you or members of the subcommittee may have. For questions about this statement, please contact me at (202) 512-4841 or [email protected]. Individuals making key contributions to this statement include David B. Best, Assistant Director; Steven B. Stern; LaTonya D. Miller; Thomas Mahalek; Ivy Hübler; Meredith Allen Kimmett; Kenneth E. Patton; and Alyssa Weir. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Missile Defense Agency (MDA) has spent about $56 billion and will spend about $50 billion more through 2013 to develop a Ballistic Missile Defense System (BMDS). This testimony is based on two reviews GAO was directed to conduct in 2008. In addition to our annual review assessing the annual cost, testing, schedule, and performance progress MDA made in developing BMDS, we have also reported on MDA's targets program. In this testimony we discuss (1) the productivity of MDA's recent test program, (2) the consequences of the testing shortfalls, and (3) key factors that should be considered as MDA revises its approach to testing. GAO assessed contractor cost, schedule, and performance; tests completed; and the assets fielded during 2008. GAO also reviewed pertinent sections of the U.S. Code, acquisition policy, and the activities of a new missile defense board. The scale, complexity, cost and safety associated with testing the missile defense system constitute a unique challenge for MDA, test agencies and other oversight organizations. This challenge is heightened by the fact that missile defense assets are developed, produced, and fielded concurrently. Overall, during fiscal year 2008, testing has been less productive than planned. While MDA completed several key tests that demonstrated enhanced performance of BMDS, all elements of the system had test delays and shortfalls, in part due to problems with the availability and performance of target missiles. GMD in particular was unable to conduct either of its two planned intercept attempts in fiscal year 2008. While it did subsequently conduct one in December 2008, it was not able to achieve all primary objectives because the target failed to release its countermeasures. As a result, aspects of the fielded ground-launched kill vehicles may not be demonstrated since no more flight tests have been approved. 
Target missiles continue as a persistent problem in fiscal year 2008 as poor target performance caused several tests to either fail in part or in whole. Testing shortfalls have had several consequences. First, they have delayed the validation of models and simulations, which are needed to assess the system's overall performance. As a result, the performance of the fielded BMDS as a whole cannot yet be determined. Second, the production and fielding of assets has continued and in some cases has gotten ahead of testing. For example, enhanced Exoatmospheric Kill Vehicles will now be produced and delivered before they are flight tested. Third, MDA has relied on a reduced basis--fewer test, model, and simulation results--to declare capabilities as operational in the field. MDA has undertaken a three-phase review of the entire BMDS test program that involves identifying critical variables that have not been proven to date, determining what test scenarios are needed to collect the data, and developing an affordable, prioritized schedule of flight and ground tests. This review, as long as it continues to involve test and evaluation organizations, appears to offer a sound approach for closing the gaps that exist between testing, modeling, and simulation. Critical to being able to implement the approach will be addressing the factors that have limited the productivity of the current test approach, such as the availability and performance of targets. An additional consideration in a new testing approach must be to ensure that assets are sufficiently tested before they are produced and fielded. An important consideration in this regard is for modeling, simulation, and testing events to be re-synchronized so that they properly inform decisions on producing, fielding, and declaring assets operational. Contingency plans could then be formed for adjusting the pace of these decisions should shortfalls occur in modeling, simulation, or testing. Because MDA has indicated implementation will take time, managing the transition may need to include reassessing the ambitious fiscal year 2009 test plan. In the mean time, MDA will have to be prudent in making decisions to produce and field assets. |
Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The rate at which states are reimbursed for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP), which may range from 50 percent to no more than 83 percent. To obtain federal matching funds for Medicaid, states file a quarterly financial report with the Centers for Medicare and Medicaid Services (CMS) and draw down funds through an existing payment management system used by the Department of Health and Human Services (HHS). The Recovery Act initially provided eligible states with an estimated $87 billion through increased FMAP rates for 27 months from October 1, 2008, to December 31, 2010. On August 10, 2010, federal legislation was enacted amending the Recovery Act and providing for an extension of increased FMAP funding through June 30, 2011, but at a lower level. On February 25, 2009, CMS made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for fiscal year 2009 through the third quarter of fiscal year 2011, the increased FMAP is calculated on a quarterly basis and is comprised of three components: (1) a “hold harmless” provision, which maintains states’ regular FMAP rates at the highest rate of any fiscal year from 2008 through 2011; (2) a general across-the-board increase of 6.2 percentage points in states’ regular FMAPs through the first quarter of fiscal year 2011, which will then be phased down until July 1, 2011; and (3) a further increase in the FMAPs for those states that have a qualifying increase in unemployment rates. For states to qualify for the increased FMAP, they must pay the state’s share of Medicaid costs and comply with a number of requirements, including the following: States generally may not apply eligibility standards, methodologies, or procedures that are more restrictive than those that were in effect under their state Medicaid programs on July 1, 2008; states must comply with prompt payment requirements; states cannot deposit or credit amounts attributable (either directly or indirectly) to certain elements of the increased FMAP in any reserve or rainy-day fund of the state; and states with political subdivisions—such as cities and counties—that contribute to the nonfederal share of Medicaid spending cannot require the subdivisions to pay a greater percentage of the nonfederal share than would have been required on September 30, 2008. In addition, CMS requires states to separately track and report on increased FMAP funds. To help states comply with these requirements, CMS provided the funds to states through a separate account in an existing payment management system. CMS also provided guidance in the form of State Medicaid Director letters and written responses to frequently asked questions, and the agency continues to work with states on an individual basis to resolve any compliance issues that may arise. Despite these restrictions, states are able to make certain adjustments to their Medicaid programs without risking their eligibility for increased FMAP funds. 
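A minimal sketch of how the three increased FMAP components described above combine is shown below. The state rates are hypothetical, and the unemployment-related component is treated as a given input because the statutory calculation of that tier is more detailed than this summary describes.

```python
# Illustrative only: combines the three increased-FMAP components described in
# the report. Regular FMAP rates and the unemployment-based add-on are
# hypothetical inputs; the actual statutory calculation of the unemployment
# component is more detailed than shown here.

def increased_fmap(regular_fmaps_fy2008_2011, unemployment_addon_points):
    """Return an illustrative increased FMAP, in percentage points.

    regular_fmaps_fy2008_2011: dict of fiscal year -> regular FMAP rate
    unemployment_addon_points: further increase for a qualifying rise in unemployment
    """
    hold_harmless_rate = max(regular_fmaps_fy2008_2011.values())  # highest regular rate, FY2008-2011
    across_the_board = 6.2                                        # general increase through Q1 FY2011
    return hold_harmless_rate + across_the_board + unemployment_addon_points

if __name__ == "__main__":
    example_rates = {2008: 61.6, 2009: 60.5, 2010: 59.8, 2011: 60.1}  # hypothetical state
    print(round(increased_fmap(example_rates, unemployment_addon_points=4.5), 1))  # 61.6 + 6.2 + 4.5 = 72.3
```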
For example, the Recovery Act does not prohibit states from reducing or eliminating optional services, such as dental services, or reducing provider payment rates. States also continue to have flexibility in how they finance the nonfederal share of Medicaid payments, and may implement new financing arrangements or alter existing ones—such as provider taxes, intergovernmental transfers, and certified public expenditures—to generate additional revenues to help finance the nonfederal share of their Medicaid programs.

The FMAP rates in the 16 states and the District increased substantially immediately following enactment of the Recovery Act, and most states’ rates continued to increase, albeit at a slower pace, through the fourth quarter of federal fiscal year 2010. During the fourth quarter of federal fiscal year 2010, the increased FMAP averaged about 11 percentage points higher than the regular 2010 FMAP rates, with increases ranging from about 9 percentage points in Iowa to nearly 13 percentage points in Florida. For all states and the District, the largest proportion of the increased FMAP was the component attributable to the across-the-board increase of 6.2 percentage points, followed by qualifying increases in unemployment rates in each of the states. The “hold harmless” component further contributed to the increased FMAP in five sample states, although to a lesser extent. (See table 2.)

As of July 31, 2010, the 16 states and the District had drawn down $43.9 billion in increased FMAP funds, which is 75 percent of the total $58.9 billion in increased FMAP that we estimated would be allocated to these states and the District through December 31, 2010. (See table 3.) If current spending patterns continue, we estimate that the 16 states and the District will draw down $56.2 billion by December 31, 2010—about 95 percent of the initial estimated allocation. The national drawdown mirrors the experiences of our sample states, with the 50 states and the District having drawn down 74 percent of their estimated total allocation of nearly $87 billion through the end of 2010.

While the increased FMAP funds are for Medicaid services only, the receipt of these funds may free up funds that states would otherwise have had to use for their Medicaid programs. Similar to their reported uses in fiscal year 2009 and the first half of fiscal year 2010, the 16 states and the District most commonly reported using or planning to use these freed-up funds to cover increased Medicaid caseloads, maintain program eligibility levels, and finance general budget needs. As with our last survey, most states reported that increased FMAP funding continues to be a major factor in their ability to cover enrollment growth; enrollment has continued to increase since our last Recovery Act report. Between February 2010 and June 2010, overall enrollment across the 16 states and the District grew by an average of nearly 2 percent, with a cumulative increase of 18 percent since October 2007—a rate of increase that is significantly higher than in years prior to the recession. The increase in Medicaid enrollment continues to be attributable primarily to children, a population that is sensitive to economic downturns. However, the highest rate of increase during this period occurred among the nondisabled, nonaged adult population—35 percent, compared to an increase of nearly 19 percent for children.
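The drawdown percentages cited above follow from simple ratios of the reported dollar amounts. The sketch below, which uses only the figures stated in this section, checks those ratios; it is not GAO's estimation method for the year-end projection.

```python
# Ratios behind the drawdown figures cited above (dollar amounts in billions).
drawn_july_2010 = 43.9        # drawn down by the 16 states and D.C. as of July 31, 2010
projected_dec_2010 = 56.2     # GAO projection through December 31, 2010
estimated_allocation = 58.9   # estimated allocation through December 31, 2010

print(f"{drawn_july_2010 / estimated_allocation:.0%}")     # about 75% drawn down to date
print(f"{projected_dec_2010 / estimated_allocation:.0%}")  # about 95% projected by year end
```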
In addition, 10 states and the District reported using freed-up funds to maintain benefits and services or to maintain payment rates for practitioners or institutional providers. Six states reported using these funds to meet prompt payment requirements, and five states and the District reported using the funds to help finance their State Children’s Health Insurance Program or other local public health insurance programs. While most states continue to report using freed-up funds for multiple purposes, North Carolina and Ohio again reported that they use these funds exclusively to finance general budget needs.

Despite increases in program enrollment since October 2007, state responses were mixed when asked about changes in the time it takes to process new Medicaid applications. While six states reported an increase in the time it takes to process new applications—most commonly attributing this change to an increase in the volume of new applications and staff cutbacks—nine states and the District reported no change or a decrease in the processing time. Most states and the District reported processing applications, on average, within federally required time frames.

When asked about the long-term outlook for their Medicaid programs, the District and all but three of the 16 states reported a concern about sustaining their Medicaid programs once increased FMAP funding is no longer available. When asked about the factors driving their concerns, most states and the District reported (1) the increased share of the state’s Medicaid payments in 2011; (2) the current projection of the state’s economy and tax revenues; and (3) the current projected growth in the state’s Medicaid enrollment for 2011. Mississippi, Ohio, and Texas did not report concerns about their Medicaid programs’ sustainability once increased FMAP funds are no longer available.

Due to these concerns, most states reported taking actions to adjust their Medicaid programs, including reducing or freezing provider payment rates, implementing new or increasing existing provider taxes, or reducing certain optional benefits. Specifically, 12 states reported reducing or freezing provider payment rates. When given a list of 13 types of providers, these states reported implementing 55 payment rate reductions and 46 payment rate freezes, for a total of 101 different rate actions taken since February 2009; on average, these states reduced or froze payment rates for 8 types of providers. States frequently reduced or froze payment rates to nursing facilities, clinics, and home health providers, among others. (See table 4.) In addition, 10 states and the District reported implementing 28 new or increased provider taxes. In contrast to states’ changes to provider payment rates, however, states’ taxation efforts were concentrated among a handful of provider types. Specifically, 21 of the 28 taxes were imposed on inpatient hospitals, nursing facilities, and outpatient hospitals—providers for which most states reported paying on a cost basis. (See figure 2.) In some cases, states reported implementing payment rate reductions and new taxes on the same providers. For example, at least half of the states that implemented new or increased taxes for inpatient hospitals, nursing facilities, or outpatient hospitals also reduced or froze payments to those same providers.
In addition to changes to payment rates and provider taxes, eight states reported making reductions to optional benefits and services, most commonly reducing or eliminating dental services for adults. Several states provided estimates of savings or increased revenue generated by actions they undertook. For example, California estimated savings of nearly $600 million from payment rate freezes for long-term care providers and other rate reductions, and the discontinuation of dental and certain other optional services; Michigan estimated savings of $152 million from an 8 percent reduction in payment rates for all providers; Pennsylvania projected that a new hospital provider tax will generate $498 million in new revenue for the state; and New York estimated that increases in various provider taxes will generate an additional $184 million annually.

States were less certain when asked about future program changes that may be necessary to sustain their Medicaid programs after Recovery Act funding ends, and their uncertainty was likely due to questions surrounding a potential extension of the increased FMAP, as well as Patient Protection and Affordable Care Act (PPACA) provisions. At the time of our survey, the legislation amending the Recovery Act to extend the increased FMAP had been proposed but not yet enacted, and PPACA had just recently been enacted. Despite states’ uncertainties, however, 12 states and the District reported on the survey that their 2011 budgets had assumed a full extension of the increased FMAP, and many of these states had not developed a contingency plan in the event that such legislation was not enacted. Nationally, 30 states assumed an extension of increased FMAP in their 2011 budgets. Under the recent amendments to the Recovery Act, states’ increased FMAP rates will decrease by at least 3 percentage points beginning on January 1, 2011, and continue to be phased down to their regular FMAP rates by July 1, 2011. For states that had assumed a full extension of the increased FMAP, the available federal funds will be less than anticipated. The effect of these decreases in states’ FMAP rates will vary depending on each state’s unique economic circumstances and the size of its Medicaid population.

PPACA also includes several provisions that could affect states’ Medicaid programs, and 12 states and the District reported that PPACA will be a major factor in their ability to make future changes to their programs. For example, the maintenance-of-eligibility requirement under PPACA precludes states from receiving federal Medicaid funding if they apply eligibility standards, methods, or procedures under their plan or waiver that are more restrictive than those in effect on the date of PPACA’s enactment until the date the Secretary of HHS determines that a health insurance exchange established by the state is fully operational, which must be no later than January 1, 2014. PPACA also requires states to expand Medicaid eligibility by 2014 to cover all persons under age 65 who are not already eligible under mandatory eligibility groups and with incomes up to 133 percent of the federal poverty level, but states have the option to expand eligibility immediately and to receive federal funds for these individuals. While the District has already been approved by CMS to expand eligibility to cover this group prior to 2014, and two other states—California and Colorado—reported that they are planning to do so, it remains to be seen how all the states will respond to this option.
Our review of states’ use of Recovery Act funds covers three programs administered by the U.S. Department of Education (Education)—the State Fiscal Stabilization Fund (SFSF); Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA), as amended; and the Individuals with Disabilities Education Act (IDEA), Part B, as amended. As part of this review, we surveyed a nationally representative sample of local educational agencies (LEA)—generally, school districts—about their uses of Recovery Act funds for each of these programs. We also met with program officials at the U.S. Department of Education to discuss ongoing monitoring and technical assistance efforts for Recovery Act funds provided through ESEA Title I, IDEA, and SFSF. At the state level, we spoke with state ESEA Title I officials in five states and the District of Columbia, which had relatively low drawdown rates of ESEA Title I Recovery Act funds. We also interviewed state officials in five states and the District of Columbia about their application for and implementation of the School Improvement Grant program. Finally, we interviewed officials in eight LEAs located in four states to understand how they were using their Recovery Act funds.

Even with Recovery Act Funds, an Estimated One-Third of LEAs Experienced Funding Cuts in School Year 2009-2010 and More Anticipated Cuts in 2010-2011

Education funding in the United States primarily comes from state and local governments. Prior to the influx of Recovery Act funding for education from the federal government, LEAs, on average, derived about 48 percent of their fiscal year 2008 funding from state funds, 44 percent from local funds, and 8 percent from federal funds. These percentages, however, likely shifted due to increased federal funding through the Recovery Act and reductions in some state budgets for education. While the federal role in financing public education has historically been a limited one, the federal funds appropriated under the Recovery Act provide a significant, but temporary, increase in federal support for education to states and localities, in part, to help them address budget shortfalls. According to the Congressional Research Service, the Recovery Act provided approximately $100 billion for discretionary education programs in fiscal year 2009, which, when combined with regular appropriations for discretionary education programs, represents about a 235 percent increase in federal funding compared to fiscal year 2008.

Over the last 2 years—a time period when many states have dealt with decreasing revenues as a result of the sustained economic downturn—a number of states in our review experienced K-12 education cuts. (See table 5 below for expenditure changes for states in fiscal years 2008 and 2009.) Nationwide, 34 states reported that they cut K-12 education funding in fiscal year 2010, including 12 of the 16 states in our review, according to the Fiscal Survey of States. In some states, such as Arizona and Georgia, these fiscal year 2010 cuts were in addition to expenditure cuts in fiscal years 2009 or 2008. However, other states such as Colorado, Florida, Massachusetts, and Michigan experienced cuts to education expenditures in fiscal year 2009 but did not report cutting K-12 funding in 2010. Looking forward to fiscal year 2011, cuts for K-12 education had been proposed in 10 of the 16 states in our review, according to data presented in the June 2010 Fiscal Survey of States report.
Given that nearly half of LEA funding, on average, is provided by the states, the impact of state-level cuts to education could significantly affect LEA budgets. The funding condition of LEAs across the country is mixed, and districts expected it to worsen in the 2010-2011 school year, even with Recovery Act funding; however, the new Education Jobs Fund created in August 2010 will provide some additional funding. As shown in figure 3, an estimated one-third of LEAs faced funding decreases in the 2009-2010 school year, and according to our survey conducted in March and April 2010, more than one-half of LEAs—56 percent—expected to face funding decreases in the upcoming 2010-2011 school year. LEA officials we spoke with in California and Massachusetts expect funding declines in 2010-2011 that come on top of cuts made in prior years; they indicated that state-level cuts to education have been the primary reason for their large funding declines and continue to create an uncertain landscape for school funding in coming years.

Public Law 111-226, enacted on August 10, 2010, provides $10 billion for the new Education Jobs Fund to retain and create education jobs nationwide. The Fund will generally support education jobs in the 2010-2011 school year and be distributed to states by a formula based on population figures. States can distribute their funding to LEAs based on their own primary funding formulas or LEAs’ relative share of federal ESEA Title I funds.

However, while many LEAs reported worsening funding situations, the overall funding levels of many other LEAs increased or remained the same in the 2009-2010 school year, although before the new Education Jobs Fund was created, a smaller proportion reported expecting funding increases in 2010-2011. Specifically, around half of LEAs reported that their overall funding level in the 2009-2010 school year had increased compared to the previous year, and an estimated 12 percent reported that their funding had remained the same. We contacted several school officials who had reported on the survey that their district’s funding had increased for the 2009-2010 school year, for the 2010-2011 school year, or during both years. These officials offered a variety of explanations for such funding increases, including increased enrollment numbers due to having added a grade level, having won competitive grant awards, a rebound in state tuition revenue, and having received state aid for a previously approved capital project, including additions to a middle school and high school.

Moreover, before the creation of the Education Jobs Fund, of those LEAs experiencing funding decreases, the percentage of LEAs that anticipated funding cuts greater than 5 percent was notably higher for the 2010-2011 school year than for the 2009-2010 school year. Specifically, as shown in figure 4, an estimated 31 percent of LEAs expected funding reductions greater than 5 percent in the 2010-2011 school year, compared to 18 percent of LEAs that experienced cuts of this magnitude during the 2009-2010 school year. The percentage of LEAs that anticipated funding cuts of 10 percent or higher also increases somewhat, from 8 percent in the 2009-2010 school year to 14 percent in the 2010-2011 school year. Officials in some of the school districts we visited in June and July 2010—before the creation of the Education Jobs Fund—noted that their funding situation would likely be more dire in the 2011-2012 school year, when Recovery Act funds are no longer available.
For example, Boston Public Schools in Massachusetts, which reported experiencing funding decreases this year and the past 2 years, is already preparing for a decreased budget in fiscal year 2012 and beginning to plan how to address budget shortfalls. Additionally, fewer LEAs anticipate funding increases of more than 5 percent for the 2010-2011 school year than for the 2009-2010 school year. Specifically, as shown in figure 4, only 5 percent of LEAs reported anticipating overall funding levels to increase by more than 5 percent for the 2010-2011 school year compared to 17 percent in the 2009-2010 school year.

We also found statistically significant differences between the fiscal situations of urban and rural LEAs and between LEAs of different sizes. For example, significantly more urban LEAs than rural LEAs experienced total funding increases of over 5 percent in the 2009-2010 school year. Specifically, an estimated 32 percent of urban LEAs experienced total funding increases of over 5 percent in the 2009-2010 school year compared to an estimated 8 percent of rural LEAs. We did not find a significant difference between urban and rural LEAs for the 2010-2011 school year, however. While urban LEAs generally fared better than rural LEAs, we found that a larger percentage of the largest LEAs reported expecting a budget decrease for the 2010-2011 school year when compared to all other LEAs. Specifically, we found that 36 percent of the largest LEAs expected funding to decrease by between 1 and 5 percent in the 2010-2011 school year compared to 26 percent of all other LEAs.

To Address Expected Funding Decreases, in Spring 2010 Many LEAs Reported Being Very Likely to Cut Teachers, Related Staff, and Other Items

Of the 56 percent of LEAs expecting funding decreases, many reported being likely (somewhat or very) to take personnel actions such as cutting positions or freezing pay. However, this information was reported before the $10 billion Education Jobs Fund was created. Our survey results also show that some LEAs reported being likely to furlough teachers. Specifically, an estimated 76 percent of LEAs that expected funding decreases reported they were likely to cut noninstructional positions and an estimated 70 percent reported they were likely to cut instructional positions. (See fig. 5.) For example, when we met with officials in California’s Mountain View-Whisman School District in June 2010, before the Education Jobs Fund had been created, they expected to cut 20 percent of their K-3 teaching staff in the upcoming school year in part due to projected revenue decreases of between 6 and 10 percent. Given these planned reductions in instructional staff, an estimated 70 percent of LEAs reported being likely to increase class size in the coming school year. For example, LEA officials in Mountain View-Whisman School District, Elk Grove Unified School District in California, and Revere Public Schools in Massachusetts said they were increasing class sizes to deal with budget shortfalls. In addition to cutting positions, an estimated 61 percent of LEAs expecting funding decreases are likely to reduce professional development or teacher training. Approximately 55 percent of LEAs expecting funding decreases reported being likely to freeze pay, and around one-third reported being likely to furlough teachers.
For example, officials at San Bernardino City Unified School District and Elk Grove Unified School District in California told us they had decided to furlough some employee groups for at least 9 days in the 2010-2011 school year. Similarly, many LEAs expecting funding decreases also reported being likely (somewhat or very) to take nonpersonnel actions, such as reducing instructional supplies and eliminating summer programs. Specifically, an estimated 87 percent of LEAs expecting funding cuts are likely to reduce instructional supplies or equipment, 73 percent are likely to defer maintenance, 71 percent are likely to reduce energy consumption, and 50 percent are likely to reduce custodial services. (See figure 6.) For example, LEA officials in Elk Grove Unified School District said they were very likely to reduce the purchase of instructional supplies—or have already reduced them—and noted that this may result in teachers and parents voluntarily purchasing additional supplies for classrooms. In addition, LEA officials in Kingston Community Schools, Plymouth Educational Center in Michigan, Elk Grove Unified School District, and Boston Public Schools told us they had been deferring maintenance and would continue to defer it, though they would not defer any maintenance that would compromise the safety of children. Examples of deferred maintenance projects included painting rooms, replacing a roof, promptly fixing air conditioners, and resurfacing parking lots. LEA officials in Boston Public Schools and Elk Grove Unified School District said they were very likely to reduce energy consumption through such efforts as lowering the temperature in schools in winter months, offering incentives to schools with lower energy consumption, and using more energy-efficient light bulbs. Officials in Boston Public Schools, San Bernardino City Unified School District, and Elk Grove Unified School District said they had reduced custodial services in their schools and some would likely further reduce them.

Smaller proportions of schools reported being likely to reduce transportation, shorten the school year, or close or consolidate schools. A Boston Public Schools official told us the district planned to reduce transportation costs by creating smaller transportation zones, and also hopes to close and consolidate up to 20 schools to reduce costs. In addition, we found that significantly higher percentages of the largest LEAs reported being likely to reduce transportation services. Specifically, 50 percent of large LEAs reported being likely to reduce transportation services, compared to 35 percent of all other LEAs.

Recovery Act Funds Allowed Most LEAs to Retain or Create Teaching Positions and Related Jobs, though Some Still Lost Jobs in School Year 2009-2010

Recovery Act funds for education allowed over three-quarters of LEAs to retain or create teaching positions and related jobs during the 2009-2010 school year, though some LEAs still reported losing jobs even with the additional federal funding. The use of the Recovery Act funding for these purposes is consistent with one of the primary goals of the Recovery Act, which is to save and create jobs in order to help economic recovery. An estimated 87 percent of LEAs across the country reported that Recovery Act funding allowed them to retain or create jobs. Specifically, a higher percentage of LEAs reported retaining staff positions—77 percent—than creating new staff positions—39 percent—for the 2009-2010 school year.
In addition, a significantly higher percentage of large LEAs reported that Recovery Act funding allowed them to retain school staff, with nearly all—98 percent of the largest LEAs in the country—reporting using Recovery Act funding for retention. While most LEAs were able to retain or create jobs with Recovery Act funding, some of these LEAs—nearly 1 in 4—still reported losing jobs overall in their LEA in the 2009-2010 school year. (See fig. 7.)

Retaining jobs was the top use of Recovery Act funds for three education programs: LEAs used large portions of their Recovery Act IDEA Part B; ESEA Title I, Part A; and SFSF education stabilization funds toward staff retention in the 2009-2010 school year. According to our survey, nearly 70 percent of LEAs spent more than half to all of their Recovery Act SFSF education stabilization funds to retain jobs for the 2009-2010 school year. (See fig. 8.) Although a smaller percentage of LEAs reported using half to all of their IDEA Part B and ESEA Title I, Part A Recovery Act funding—25 percent and 27 percent, respectively—for job retention, retaining staff was still the top use cited by LEAs for IDEA Part B and ESEA Title I, Part A Recovery Act funding. For example, LEA officials in Kingston Community School District told us they had used all of their Recovery Act SFSF education stabilization funds and ESEA Title I, Part A funds, and most of their Recovery Act IDEA Part B funds, to retain staff.

A number of factors may explain why such a large percentage of LEAs spent a significant amount of their Recovery Act funding for job retention. For example, a large portion of school expenditures are employee-related costs, with salaries and benefits accounting for more than 80 percent of local school expenditures, according to Education’s most recent data. Also, given the fiscal uncertainty and substantial budget shortfalls facing states, federal funds authorized by the Recovery Act have provided LEAs with additional flexibility to pay for the retention of education staff. Overall, the impact of Recovery Act education funds on job retention may be significant because K-12 public school systems employ about 6.2 million staff, based on Education’s estimates, and make up about 4 percent of the nation’s workforce. In fact, through the reporting period ending June 30, 2010, nearly two-thirds of full-time equivalent positions reported on Recovery.gov have resulted from Recovery Act education programs.

Based on our visits to states and LEAs, we were told that Recovery Act SFSF funds, in particular, have provided additional resources and flexibility allowing LEAs to retain staff. For example, one state education official noted that LEAs have more flexibility in spending SFSF funds for general education expenses because ESEA Title I, Part A and IDEA Part B programs target special populations—disadvantaged youth and students with disabilities, respectively. This official said that because funding levels for general education programs in his state have decreased while federal funding levels for ESEA Title I, Part A and IDEA Part B programs have increased, LEAs have used SFSF funds to shore up funding for general education and, in particular, preserve jobs.

Instructional positions were more often retained and created than noninstructional positions: Substantially more LEAs retained or created positions for instructional staff compared to noninstructional staff positions for the 2009-2010 school year.
Instructional staff typically includes classroom teachers and paraprofessionals, and noninstructional staff can include office support, janitorial staff, and school security staff. Specifically, an estimated 74 percent of LEAs nationally retained jobs for instructional staff, compared to 48 percent that retained them for noninstructional staff. Furthermore, 33 percent of LEAs reported creating new instructional staff positions with Recovery Act funding compared to the 22 percent that created them for noninstructional staff. (See fig. 9.) According to a number of LEA officials we interviewed, LEAs often spent Recovery Act funding in ways that would benefit students directly in the classroom, thereby focusing on creating and retaining positions for instructional staff before creating and retaining jobs for noninstructional staff, such as administrative and auxiliary staff. For example, officials from the Plymouth Educational Center said that in order to minimize the impact on students, they have made or would consider making cuts to administration, security guards, and paraprofessionals, and instituting further pay cuts, before letting go of teachers.

Fewer LEAs Used Large Portions of Their Recovery Act Funding to Hire Staff Than to Retain Staff, although Fund Use for Hiring Varied by Program

Although our survey results indicate that LEAs overall spent a significant amount of their Recovery Act funding from all three programs to retain jobs, LEAs also reported using Recovery Act funding to hire new staff. As indicated in figure 10, the percentage of LEAs that reported using Recovery Act funding to hire new staff varied across the three programs. For example, 4 percent and 6 percent of LEAs reported spending half or more of their Recovery Act IDEA Part B and SFSF funding, respectively, to hire new staff, while 15 percent of LEAs reported the same use for their Recovery Act ESEA Title I, Part A funds. Overall, nearly three-quarters of LEAs did not use any of their Recovery Act SFSF funding to hire new staff, concentrating instead on using that funding for staff retention.

Nearly One in Four LEAs Reported Losing Jobs, Even with Recovery Act Funding, Due to Decreasing Budgets and Other Factors

Even with the additional Recovery Act funding provided to LEAs in school year 2009-2010, nearly one-quarter of LEAs reported losing jobs, primarily due to decreasing overall budgets. Without Recovery Act funds, it is likely that the magnitude of job losses in these LEAs would have been higher, given that nearly all of the LEAs experiencing job loss overall also reported retaining jobs. Specifically, an estimated 92 percent of LEAs where LEA officials indicated the number of teachers had decreased also said that Recovery Act funds had allowed them to retain jobs during the school year. Also, almost 30 percent of LEAs used Recovery Act funds to create new jobs during the 2009-2010 school year, even as their overall number of jobs decreased. For example, according to a Boston Public Schools official, the number of staff in the district had decreased in the 2009-2010 school year, but the district also used Recovery Act funds for both retention and job creation. Specifically, the district hired 16 new English as a Second Language teachers and specialists with ESEA Title I, Part A Recovery Act funds even as it let go of teachers during school closures. Decreasing overall budgets at the LEA level was the main reason that LEAs reported losing jobs in school year 2009-2010.
Specifically, 67 percent of LEAs that lost jobs reported that their budget was a factor to a great or very great degree. (See fig. 11.) For example, officials from Elk Grove Unified School District in California told us they laid off about 500 staff at the end of the 2009-2010 school year due to budgetary pressures, after exhausting their reserves and spending Recovery Act funds. In addition to budgetary factors, LEAs lost jobs because of staff attrition and declining enrollment, although to a much lesser extent.

In addition to retaining and hiring staff, LEAs spent Recovery Act funds on items that could help build long-term capacity without creating recurring costs for LEAs. Overall, LEAs reported several one-time expenditures, such as purchasing computer technology, providing professional development for instructional staff, and purchasing instructional materials, as among the highest uses of funds after job retention and creation. (See fig. 12.) LEA officials reported making one-time purchases with Recovery Act funds to enhance district capacity. For example, at Plymouth Educational Center in Michigan, officials told us that Recovery Act funds were used to enhance computer technology for both students and teachers. Further, several LEA officials told us they had used IDEA Part B Recovery Act funds to provide professional development and purchase assistive technologies that would help build the district’s capacity to serve more students with disabilities. These officials told us that they will be able to educate students with disabilities far more affordably within the district than by paying external providers—a benefit they anticipate will continue even after the Recovery Act funds are spent. For example, in rural Michigan, officials told us that IDEA Part B funding has allowed the Kingston Community Schools to build capacity by partnering, along with other schools from the surrounding area, with the University of Kansas to provide coaching and training to teachers who can then provide services to more students with disabilities. In addition, LEA officials in Boston, Massachusetts, said they had used these funds to obtain equipment and provide professional development so they could serve more students with autism within the district.

Although more than half of all LEAs reported being able to provide students with the same level of service in 2009-2010 as in 2008-2009, a number of LEAs reported they had not been able to maintain the same level of service over that period. Specifically, an estimated 63 percent of LEAs nationally reported that Recovery Act SFSF funds allowed them to maintain the same level of service to students in their LEA in school year 2009-2010 as compared to the previous school year. However, 40 percent of the largest LEAs reported not being able to maintain the same level of service compared to 16 percent of all other LEAs. (See fig. 13.) LEAs reported a range of areas in which there was a great or very great reduction in the level of services, including instructional materials and resources, staff development, and summer school programs. For example, LEA officials from San Bernardino City Unified School District told us they had applied cuts with the intent of having the least impact on children in the classroom, and that these cuts included delay of new textbook adoption, administrative reductions, and reduced maintenance.
Further, Boston Public Schools and Revere Public Schools pointed to cuts in programming such as art and music as examples of how their service levels had decreased. A number of LEAs reported that Recovery Act SFSF funds allowed them to raise their level of service in 2009-2010, with a lower percentage of the largest LEAs reporting raising service levels compared to all other LEAs. Based on our survey results for the 2009-2010 school year, 20 percent of all LEAs indicated that the additional Recovery Act SFSF funding made it possible to raise the level of services provided to students compared to what the LEA was able to provide in the prior 2008-2009 school year. A significantly lower percentage of the largest LEAs in the country—5 percent—specified that the SFSF funding raised service levels in their schools.

Some LEAs report making modest progress in education reform, but relatively few report they are making significant progress in advancing the four core education reform areas states are required to address as a condition of receiving SFSF funding. For example, an estimated 28 percent of LEAs reported making modest progress and just 13 percent of LEAs reported making significant progress in increasing teacher effectiveness—the highest percentage among the four areas. (See fig. 14.) However, some of these goals, such as improving standards and assessments, are more likely to be pursued at the state level than at the local level, while others, such as supporting struggling schools, may not apply to all districts.

In order to receive SFSF funding, states had to submit an application to Education that required each state to provide several assurances, including that it would implement strategies to advance four core areas of education reform, as described by Education: (1) increase teacher effectiveness and address inequities in the distribution of highly qualified teachers; (2) establish a pre-K-through-college data system to track student progress and foster improvement; (3) make progress toward rigorous college- and career-ready standards and high-quality assessments that are valid and reliable for all students, including students with limited English proficiency and students with disabilities; and (4) provide targeted, intensive support and effective interventions to turn around schools identified for corrective action or restructuring. Furthermore, in order to receive the remainder of their SFSF allocations (Phase II), states had to agree to collect and publicly report on more than 30 indicators and descriptors related to the four core areas of education reform described above. While states will be responsible for assuring advancement of these reform areas, LEAs were generally given broad discretion in how to spend the SFSF funds. It is not clear how LEA progress in advancing these four reforms will affect states’ progress toward meeting their assurances. Education officials noted that they were not surprised that fewer LEAs reported expanding reform efforts in 2009-2010 given their budget situation. Figure 14 depicts the extent to which LEAs reported making modest or significant progress in each of the four reform areas.

Almost all LEAs we surveyed stated that ESEA Title I, Part A and IDEA Part B Recovery Act funds allowed their LEAs to either expand or maintain education reform efforts in 2009-2010, but a small and increasing percentage of LEAs expect to reduce reform efforts in 2010-2011, compared with the percentage that reduced such efforts in 2009-2010.
In addition to retaining and creating jobs, Education officials reported that they intended Recovery Act funds to spur education reform in LEAs and improve student achievement. Education provided guidance to states and LEAs on ways to use the Recovery Act funds to stimulate reform, as well as to retain jobs. Because LEAs are required to obligate 85 percent of their Title I, Part A Recovery Act funding by September 30, 2010, unless approved for a waiver, Title I, Part A education reform efforts in districts without waivers could decrease because fewer funds would be available in the upcoming school year.

ESEA Title I, Part A Recovery Act Funding Enhanced Education Reform Efforts at Nearly Half of All LEAs and Helped Enhance or Maintain Reform at Nearly All LEAs

Most LEAs report that Recovery Act funding for ESEA Title I, Part A allowed them to either expand or maintain education reforms for disadvantaged students in both 2009-2010 and 2010-2011, but the percentage of districts that expect to expand reform is lower for 2010-2011 than for 2009-2010. According to our survey results, an estimated 48 percent of LEAs indicated that the additional ESEA Title I, Part A Recovery Act funding they received allowed their LEA to expand education reform efforts in 2009-2010. For example, officials from one Michigan LEA told us they used the ESEA Title I, Part A Recovery Act funding to enhance a tutoring program for all at-risk students in math and language arts. Officials at another LEA told us the ESEA Title I, Part A Recovery Act funding allowed them to enhance their after-school tutoring program targeted at English language learners. Moreover, an additional 48 percent of LEAs stated that the funding allowed them to maintain reform efforts for ESEA Title I, Part A programs. However, the percentage of districts anticipating that ESEA Title I, Part A funding will allow them to expand reform efforts is lower for the 2010-2011 school year than for the 2009-2010 school year. (See fig. 15.) While an estimated 3 percent of LEAs stated that even with the additional Recovery Act funding provided under ESEA Title I, Part A, education reform efforts decreased in the 2009-2010 school year, this percentage increased to 11 percent when we asked LEAs to look ahead to the 2010-2011 school year. Title I, Part A reform efforts could potentially decrease in the coming school year, in part because LEAs are required to obligate 85 percent of ESEA Title I, Part A Recovery Act funds by September 30, 2010, unless they receive a waiver.

IDEA Part B Recovery Act Funding Allowed Most LEAs to Either Expand or Maintain Reform Efforts for Special Education Students

Most LEAs report that Recovery Act funding for IDEA Part B allowed them to either expand or maintain education reform efforts for special education students in both 2009-2010 and 2010-2011, but the percentage of districts that expect to expand reform is lower for 2010-2011 than for the previous year. Specifically, we estimate that 43 percent of LEAs nationally expanded reform efforts for special education students in 2009-2010 because of the additional IDEA Part B Recovery Act funding. (See fig. 16.) For example, an official in Boston, Massachusetts, told us that the Boston Public Schools has used some of its IDEA Part B Recovery Act funding to train teachers and purchase equipment to enhance classroom services for autistic students.
In addition, 55 percent of LEAs noted that the Recovery Act funding allowed them to maintain ongoing education reform efforts targeted for special education students in the same year. For example, in Michigan, one LEA official we interviewed stated that the LEA had used Recovery Act funding to maintain intervention services for special education students. Looking ahead to the 2010-2011 school year, however, a lower percentage of districts—28 percent—expect to expand reform.

Given the Increase in IDEA Recovery Act Funding in 2009-2010, about 36 Percent of LEAs Exercised Flexibility to Decrease Local Spending on Special Education, and Primarily Used Funds to Retain Staff

In the 2009-2010 school year, among the 86 percent of LEAs that reported receiving Recovery Act IDEA Part B funds, an estimated 36 percent reported taking advantage of the maintenance-of-effort (MOE) flexibility under IDEA that allows them to reduce their local, or state and local, spending on students with disabilities. IDEA requires LEAs to budget at least the same total or per capita amount of local funds for the education of children with disabilities as the LEA spent in the most recent prior year for which information is available. As provided for in IDEA, in any fiscal year in which an LEA’s federal IDEA Part B allocation exceeds the amount the LEA received in the previous year, an eligible LEA may reduce local spending on students with disabilities by up to 50 percent of the amount of the increase, as long as the LEA uses those freed-up funds for activities authorized under ESEA, which supports activities for general education. Because Recovery Act funds for IDEA Part B count as part of an LEA’s overall federal IDEA allocation, in fiscal year 2009, the total increase in IDEA Part B funding for LEAs was far larger than the increases in previous years, which provided a greater incentive for many LEAs to take advantage of the MOE flexibility in the 2009-2010 school year. Of the 36 percent of LEAs exercising the flexibility, an estimated 41 percent reported spending more than half of the “freed-up” local funds on retaining staff. Other uses of the freed-up funds included providing professional development for instructional staff, purchasing computer technology, and hiring new staff.

We also found an example of an LEA that planned to take advantage of the MOE flexibility even though it was not eligible to do so. Based on our review of budget documents and local officials’ statements, the Syracuse City School District (SCSD) had reduced its 2009-2010 spending by about $2.3 million. We determined, and local officials subsequently agreed, that SCSD was not eligible for the MOE reduction because it was not meeting performance indicators related to graduation and drop-out rates among disabled students and it had a significantly high percentage of students with disabilities being suspended for more than 10 days, among other indicators. When we notified LEA officials of the district’s ineligibility during our visit in March 2010, they attributed the situation to miscommunication among staff in the special education and finance offices and a misunderstanding of the eligibility rules for reducing MOE. LEA officials informed us that they would follow up on this issue and take steps to ensure they met MOE requirements. SCSD subsequently provided documentation showing that it was indeed meeting MOE requirements.
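The MOE flexibility described above turns on a simple calculation. The sketch below is an illustrative simplification, not Education's official worksheet; it computes only the maximum allowable reduction from the year-over-year increase in an LEA's IDEA Part B allocation and does not test the eligibility conditions, such as the performance indicators that made SCSD ineligible.

```python
def max_local_moe_reduction(prior_year_idea_allocation, current_year_idea_allocation):
    """Maximum allowable reduction in local (or state and local) spending on
    students with disabilities for an eligible LEA: up to 50 percent of the
    increase in the LEA's federal IDEA Part B allocation over the prior year.

    Simplified illustration only; it does not test eligibility conditions or
    the requirement that freed-up funds be used for activities authorized
    under ESEA.
    """
    increase = current_year_idea_allocation - prior_year_idea_allocation
    return max(increase, 0) * 0.5


# A hypothetical LEA whose IDEA Part B allocation rose from $1.0 million to
# $2.2 million (including Recovery Act funds) could reduce local special
# education spending by up to $600,000.
print(max_local_moe_reduction(1_000_000, 2_200_000))
```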
While the decision by LEAs to decrease their local spending can free up funds to address other needs in the current school year, it could also have implications for future local spending on special education. Because LEAs are required to maintain their previous year’s level of local spending on special education and related services to continue to receive IDEA Part B funds, LEAs taking advantage of the spending flexibility will only be required to maintain these expenditures at the reduced level in subsequent years. If LEAs that use the flexibility to decrease their local spending do not voluntarily increase their spending in future years, and federal IDEA Part B allocations decrease—specifically by returning to levels comparable to those before the Recovery Act—the total federal, state, and local spending for the education of students with disabilities will decrease compared to overall spending before the Recovery Act. However, while LEAs may maintain the lower level of spending, because of the IDEA requirement that children with disabilities receive a “free appropriate public education” (FAPE), districts may not be able to maintain services for students with disabilities at the lower levels of spending. For example, in Elk Grove Unified School District (California), which reduced local spending in 2009-2010, local officials reported that they plan to include in their budget for 2010-2011 an amount equal to or greater than their 2008-2009 spending, which they said was needed to ensure that services to students with disabilities are maintained. In contrast, a charter school in Michigan reported that it may not be able to restore funding to previous years’ levels, given decreases in state funding, but would make sure it provided services for students with disabilities.

As of August 27, 2010, states covered by our review had drawn down 72 percent ($18.2 billion) of the awarded SFSF education stabilization funds; 46 percent ($3.0 billion) of Recovery Act funds for ESEA Title I, Part A; and 45 percent ($3.4 billion) of Recovery Act funds for IDEA Part B. Some states had drawn down a much larger portion of their funds than other states. (See table 6.) For example, Arizona, Georgia, Illinois, and New Jersey had drawn down all of their SFSF education stabilization funds as of August 27, 2010, while Florida, Mississippi, Pennsylvania, and Texas had drawn down less than 55 percent of these funds. As noted in a previous report, drawdowns typically lag behind actual expenditures. For example, state officials in New Jersey stated that drawdown figures lag expenditures because funds are only drawn down once districts submit for reimbursement. However, because LEAs are required to obligate 85 percent of ESEA Title I Recovery Act funds by September 30, 2010, a low drawdown rate could indicate either that a large percentage of districts have sought and obtained, or will seek and obtain, waivers from this requirement or that districts are at risk of not meeting this requirement.

To help mitigate the effects of the funding cliff—when Recovery Act funding is no longer available—Education officials are encouraging districts to use carryover waivers to spread ESEA Title I, Part A funds over 2 years. Specifically, in a webinar hosted on June 15, 2010, Education officials explained how districts could minimize the impact of the funding cliff by strategically using carryover waivers.
Also, officials in states we contacted appeared to be following Education’s suggested strategy to encourage the use of carryover waivers. We spoke to state officials in five states and the District of Columbia with relatively low drawdown rates, and some of these officials told us they were encouraging districts to spread the funds over the 2-year period rather than try to obligate 85 percent of the funds by September 30, 2010. For example, Massachusetts state officials told us they have encouraged all districts receiving Recovery Act Title I funds to apply for a carryover waiver to allow them the flexibility to use Recovery Act funds throughout the 2-year period. Similarly, officials in New York told us they had requested a blanket waiver for all districts in the state, which was approved by Education.

Education has completed 16 of the 18 on-site monitoring visits it scheduled for the 2009-2010 monitoring cycle (including 11 states and the District of Columbia that are in our review), according to department officials. The most frequent monitoring findings related to the Recovery Act involved districts failing to follow fiscal and set-aside requirements, such as the requirements to document time and effort of employees paid with Title I, Part A funds and to properly calculate how much funding was required to be set aside for specific purposes, according to Education officials. Regarding fiscal requirements, the most frequent findings included districts’ failure to (1) determine whether services provided in schools receiving ESEA Title I, Part A funding were comparable to those services provided to students in other district schools not receiving ESEA Title I, Part A funding, (2) determine whether federal funding had been used to “supplant” local or state funds by paying for services that had previously been provided using local or state funds, or (3) document that employees funded through multiple funding sources were dedicating the appropriate proportion of their time and effort to serving disadvantaged students. Regarding set-aside calculations, Education officials said that they found that some districts had not included Recovery Act funding in their calculations as required. Education officials provided examples of corrective actions state educational agencies and LEAs with fiscal or set-aside calculation findings could take to resolve these issues. For example, calculations for comparability or set-asides could be corrected to comply with requirements.

For the 2010-2011 monitoring cycle, Education officials plan to conduct on-site visits in 11 states, including 2 in our review, and the Bureau of Indian Education. During each of these 12 monitoring visits, Education officials will assess state and local implementation of the School Improvement Grant program in addition to the implementation of regular and Recovery Act ESEA Title I, Part A requirements. Department officials said that during the upcoming monitoring cycle, they will continue to shift their monitoring focus away from strict audits toward providing technical assistance. Department officials also told us that they will develop state-specific technical assistance for the states reviewed during the 2009-2010 monitoring cycle to help them resolve identified challenges. Education officials told us they continue to engage state and local officials using a variety of technical assistance efforts.
Such efforts include issuing written guidance, hosting webinars, and giving presentations at state ESEA Title I conferences to explain and discuss federal guidance. Education officials also noted that they communicate regularly with state and local officials by telephone and e-mail and issue responses to frequently asked questions to share their answers more broadly. Department officials also noted that they have offered state-specific technical assistance to state and local officials in several states, particularly in states with new ESEA Title I leaders. Some of these technical assistance efforts have been initiated as a direct result of the Recovery Act, according to Education officials, who also said that the increased technical assistance efforts have created a strain on their resources and capacity.

Education Continues to Address Recovery Act Issues within Its Ongoing IDEA Monitoring Efforts

Regarding IDEA, in the fall of 2009, Education officials reported that they pursued their regular targeted monitoring visits and technical assistance, which cover 16 states or territories; in response to the Recovery Act, Education’s Office of Special Education Programs (OSEP) is also performing a desk review of all states. According to Education officials, the department uses annual performance report information and focused monitoring priorities to determine in which states it will conduct monitoring visits. In the course of its monitoring visits, the department verifies the effectiveness of state systems for general supervision, data collection, and fiscal management, as well as reviews state progress toward the goals in its state performance plan. In conducting site visits, OSEP reviews state records, makes visits to selected LEAs for on-site examination of student records, and assesses state special education systems. Following these visits, Education issues a report on findings and, when noncompliance is found, requires states to demonstrate correction of the noncompliance.

For fall 2010, Education is pursuing some additional monitoring and providing additional support to states in implementing the Recovery Act. Specifically, in addition to its annual monitoring visits, OSEP is planning to visit up to 10 additional states this year. These additional visits will be less intensive than the regular monitoring visits and will focus more on the Recovery Act than the annual monitoring visits. Also in response to the Recovery Act, the department has assigned four Recovery Act Facilitators, who work with four teams that will provide support and guidance to states regarding their Recovery Act monitoring efforts and the reporting of accurate data for recipient reporting under the Recovery Act.

While they did not have any Recovery Act-specific findings in their most recent monitoring visits, OSEP officials did report some areas on which they will be focusing in their upcoming monitoring. OSEP officials reported that one of the issues they have been focusing on for several years is ensuring timely obligation and expenditure of funds. After finding 10 years ago that states had failed to obligate a total of $32.8 million in IDEA funds before the end of the 27-month time frame required under the law, the department began to track state-level drawdowns, and it now works to remind states that have balances above a certain threshold when the deadlines for obligating funds are approaching.
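The kind of reminder process described above can be pictured with a simple check. The sketch below is hypothetical: the dollar threshold, the 120-day warning window, and the state balances are invented for illustration and do not reflect OSEP's actual tracking system.

```python
from datetime import date

# Hypothetical sketch: flag states whose unobligated IDEA balances exceed a
# threshold once an obligation deadline falls within a warning window.
THRESHOLD = 5_000_000
WARNING_WINDOW_DAYS = 120

def states_to_remind(unobligated_balances, deadline, today):
    """Return states with balances above the threshold when the deadline is
    within the warning window; otherwise return an empty list."""
    if (deadline - today).days > WARNING_WINDOW_DAYS:
        return []
    return [state for state, balance in unobligated_balances.items()
            if balance > THRESHOLD]

balances = {"State A": 12_400_000, "State B": 800_000, "State C": 6_100_000}
print(states_to_remind(balances, deadline=date(2011, 9, 30), today=date(2011, 7, 1)))
# ['State A', 'State C']
```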
OSEP officials reported that in the years since they began tracking drawdowns, expired unobligated funds have declined to about $5.6 million. Also, OSEP officials reported that some states were calculating their state-level MOE spending without including spending on special education from sources outside of the state educational agency. For example, if other state departments are providing counseling or rehabilitation services, that spending must be included. Finally, OSEP officials reported that while state educational agencies generally require LEAs to provide a budget for their intended uses of IDEA funds, and require LEAs to attest that they are complying with MOE requirements, states do not always perform monitoring later to ensure that LEAs can document that they spent the funds according to their budgets. In Iowa, for example, we found IDEA equipment purchases of more than $5,000 for a single piece of equipment that were not submitted to the state for approval, as state officials reported was required. In other examples, we found that the Des Moines Public School District purchased equipment for about $25,000, and the Marshalltown Community School District in Iowa purchased $8,400 in communications equipment and software, without seeking review and approval from the state prior to purchase, as state officials said was required. As we completed our reviews, the LEAs were making changes in their procedures to ensure state approval of IDEA equipment purchases greater than $5,000.

Given State-Level Budget Situations, Education Has Approved Waivers Allowing States to Decrease Their State Spending on Special Education

Because of declines in state-level budgets, Education has approved waiver applications from states to decrease their state-level spending on special education. Under IDEA, the Secretary of Education may waive state-level MOE requirements for equitable purposes due to “exceptional or uncontrollable circumstances such as a natural disaster or a precipitous and unforeseen decline in the financial resources of the State.” Education approved a state-level waiver for one state in our review—Iowa—for 2009. Education officials said that the waiver will apply for only 1 year, and in 2010, Iowa must return its spending on special education to the 2009 level unless the state applies for and receives another waiver. Education officials said that the department is considering each application individually based on its own merits, and is reminding states in its approval letters that they must provide services to students with disabilities that would still meet the requirement under the law that the state provide a free appropriate public education, despite any cuts. In a June 2010 memorandum, Education said that it was considering the impact of other sources of funding for special education, including those from the Recovery Act, when making waiver decisions. Education officials also told us that they want to ensure that cuts to special education services are equitable when compared to other budget cuts, and therefore they consider the percentage decrease in spending on special education in relation to that of other items in the states’ budgets, both education-related and other items.
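OSEP's finding about state-level MOE calculations comes down to which spending sources are counted. The sketch below is a hypothetical illustration of that point; the agency names and dollar figures are invented, and the actual state-level MOE comparison involves requirements not shown here.

```python
# Hypothetical illustration: state-level MOE spending on special education
# should include spending from all state sources, not just the state
# educational agency (SEA). Category names and dollar figures are invented.
current_year_spending = {
    "state educational agency": 410_000_000,
    "health department (counseling services)": 12_000_000,
    "vocational rehabilitation agency": 8_500_000,
}
prior_year_level = 425_000_000

sea_only = current_year_spending["state educational agency"]
all_sources = sum(current_year_spending.values())

print(f"SEA only:    ${sea_only:,}  (appears to fall short of ${prior_year_level:,})")
print(f"All sources: ${all_sources:,}  (meets the prior-year level)")
```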
Education's guidance also notes that states that receive a waiver may be subject to additional monitoring, and Education officials told us that each of the waiver-approved states will be among the 16 states chosen for the full monitoring visits described above and will be subject to additional monitoring to make sure that a free appropriate public education was provided.

Education Has Begun to Monitor SFSF Grantees and Address Initial Challenges Associated with Monitoring Noneducation State and Local Agencies

Education has begun to monitor SFSF grantees and, as of August 30, 2010, had conducted on-site monitoring of one state in our review—New York—and of Washington, D.C., as well as desk reviews of two states in our review—Georgia and North Carolina. Education has not yet completed its monitoring reports to states, but department officials told us that the findings were minor and that the department would work with states to address any findings. For example, Education officials told us that some of the minor findings included not providing timely certification documents, not ensuring that all jobs were reported on required recipient reports, or not adhering to monitoring schedules of subrecipients. Education has 10 more on-site monitoring visits planned between September and November 2010 and 10 planned for 2011. Education officials reported some challenges they experienced during their initial monitoring visits because of the differing types of subrecipients and the amount of documentation to review. Education's Office of Elementary and Secondary Education (OESE) is charged with administering and monitoring SFSF funds. While OESE is experienced with monitoring LEAs, SFSF educational stabilization funds may also flow to institutions of higher education, which OESE has little or no experience overseeing. Further, SFSF government services funds provide funding to a broad range of state and local agencies that Education does not normally monitor. For example, subrecipients of SFSF government services funds consist of a variety of noneducational entities, including state police forces, fire departments, corrections departments, and health care facilities and hospitals. Since this is the first SFSF monitoring effort, Education officials told us that it will take time for Education's staff to become familiar with these subrecipients and the types of documentation they provide. In addition, Education officials reported that the amount of information necessary to monitor SFSF funds was voluminous and required more time than was expected, but they are continuing to work to improve the SFSF monitoring process. In September 2009, we reported that some states faced challenges in developing monitoring plans for SFSF funds, and we recommended that Education take action such as collecting and reviewing documentation of state monitoring plans to ensure that states understand and fulfill their responsibility to monitor subrecipients of SFSF funds. Education acted on our recommendation and required states to submit SFSF monitoring plans to Education by March 12, 2010. Education officials told us they are reviewing the plans to ensure that states planned to adequately monitor SFSF subrecipients.

Given State-Level Budget Situations, Education Has Approved SFSF Waivers Allowing States to Decrease Their State Spending on Education

The Secretary of Education has granted an SFSF MOE waiver to one state in our review—New Jersey—allowing the state to reduce 2009 state support for education below 2006 levels.
The department grants these waivers once a state certifies that state education spending did not decrease as a percentage of total state revenues. As we reported in May, the states we reviewed told us they met SFSF MOE levels in fiscal year 2009 or obtained waivers. Because of declines in state-level budgets, two states in our review—Florida and New Jersey—requested a waiver from Education to decrease their 2009 state-level spending on education. After these states' 2009 state education funding figures were finalized, Education officials told us they reviewed waiver applications to ensure that state education funding in 2009 met the requirements for an SFSF waiver. Education officials reported that New Jersey's and Rhode Island's waivers have been approved and that they are currently reviewing South Carolina's and Florida's waivers.

Education Announced Race to the Top Grants and SFSF Phase II Awards

Education has announced that the District of Columbia and 11 states, including Florida, Georgia, Massachusetts, New York, North Carolina, and Ohio, will receive Race to the Top grants. This program is a competitive grant fund created by the Recovery Act as part of SFSF that provides $4.35 billion in funding for statewide reform efforts and for the development of common academic assessments. In addition, Education officials reported that almost all of the SFSF Phase II funds have been awarded. As a result, most states now have access to their entire allotment of SFSF funds, and all SFSF funds must be obligated by September 30, 2011.

Education Released New Clarifying Guidance on Recipient Reporting

As in previous reporting periods, FTE positions funded by Education grants accounted for a large proportion of all reported FTEs. Specifically, Education recipients reported around 450,000 FTEs, which represent 60 percent of the nearly 750,000 FTEs reported for the period ending June 30, 2010. To improve the consistency of FTE data collected and reported, in March and May 2010 GAO made several recommendations to Education, including that Education re-emphasize the responsibility of subrecipients to include hours worked by vendors in their quarterly FTE calculations and that Education provide clarifying guidance to recipients on how best to calculate FTEs for education employees during quarters when school is not in session. Education implemented our recommendations by issuing clarifying guidance on August 26, 2010, that specifies how education subrecipients are to calculate FTEs for recipient reporting in Education-specific situations, such as how to calculate FTEs for teachers who are considered full-time employees but are not working during the summer months. Setbacks in issuing final written guidance and resource constraints at Education have slowed the application process for School Improvement Grants (SIG)—competitive awards to help turn around the lowest-performing schools—according to department officials. According to Education officials, one reason that the state application process took longer than expected was that the department had to revise the final requirements it initially released in December 2009; some language in the Consolidated Appropriations Act, 2010, necessitated changes to these requirements. Education officials released revised guidance in late January 2010 and again in June 2010 with a few additional revisions.
While the changes to the guidance and other delays created a challenge for some states, Education assisted states in moving their applications forward by responding to questions in a timely manner. In addition, Education extended the application deadline set in the initial guidance document to allow time for the department to offer technical assistance and for states to revise their applications given the changed requirements. In addition to the delay caused by issuing revised guidance, department officials said that staffing constraints had limited the department’s ability to review state applications, which ranged from 200 to 400 pages in length, and to help state officials revise these applications. They noted that, in some cases, states had to revise the application, sometimes more than once, in order to comply with SIG requirements. Because certain compliance issues related to more than one part of the application (depending on how states put their applications together), Education staff had to reread each application in full after each resubmission to ensure compliance. Overseeing the substantial influx of additional ESEA Title I funds provided through the Recovery Act, including SIG funds, substantially increased staff workload, particularly given that staffing levels did not increase, said a senior Education official. While one staff member works full-time to coordinate the SIG application process at the department, the 17 other staff who were assigned to work on SIG application reviews assumed these responsibilities in addition to their other monitoring, technical assistance, and programmatic duties, according to a senior Education official. State officials in some states and the District of Columbia told us that they had encountered various challenges in applying for and implementing the School Improvement Grants and that timeframes have been tight. These states were at different stages in the process of selecting LEAs to receive SIG funds, but were taking various steps to address the tight timeframes and work through challenges, and expected that districts would be ready to use the grant funds in the 2010-2011 school year. For example, officials in New York told us in late August that they had nearly completed their review of districts’ SIG applications. They also noted encountering challenges in New York City, where school districts are not allowed to replace principals or close schools—steps required by certain school turnaround models—and having to work through two specific collective bargaining issues. In contrast, New Jersey officials said they had completed their review of district applications and selected 12 schools, representing 7 school districts to receive SIG funds, with some districts receiving grants for multiple schools. To ease tight time frames, Michigan officials told us that while awaiting approval from Education, they had created an iterative application process for districts, whereby districts were required to submit an initial statement of intent in June, followed by a more detailed initial application in mid-July. State education officials told us they reviewed these initial drafts and gave local officials feedback before the final applications were due. As of late July, Education had approved SIG applications for 48 states and the District of Columbia, including all 16 of the states and the District of Columbia in our review. 
Nationwide, the Federal Highway Administration (FHWA) obligated $25.6 billion in Recovery Act funds for over 12,300 highway projects and reimbursed $11.1 billion as of August 2, 2010. The Federal Transit Administration (FTA) obligated $8.76 billion of Recovery Act funds for about 1,055 grants and reimbursed $3.6 billion as of August 5, 2010. Figure 17 shows FHWA's and FTA's reimbursements during the Recovery Act. Nationally, 44 percent of funds obligated for highway projects had been reimbursed as of August 2, 2010. Reimbursement rates varied widely among the 16 states and the District—between 23 percent and 77 percent. Illinois, Iowa, and Mississippi had the highest reimbursement rates—each at 65 percent or more. Officials in all 3 states told us that in selecting projects they emphasized projects that could be completed quickly, and each undertook more pavement resurfacing projects—which can be quickly initiated—than any of the other states we reviewed. Five states had reimbursement rates below 30 percent. Of particular note, California, which received almost 1 out of every 10 Recovery Act highway dollars apportioned nationwide, had the second-lowest reimbursement rate among the 16 states and the District at 26 percent ($633 million). Officials from California noted that the state had undertaken a number of large projects that had the potential to offer long-term benefits but for which construction could not be initiated quickly. For example, California used about $197 million in Recovery Act funds to partially finance the Caldecott Tunnel improvement project (total estimated cost of $420 million). California awarded a contract in November 2009 and began construction of a new tunnel on a congested stretch of highway between Oakland and Orinda in February 2010, nearly 1 year after the Recovery Act was enacted. California officials also attributed the state's lower reimbursement rates to having a majority of its projects administered by local governments, which are often reimbursed more slowly than state-administered projects. According to California officials, as of June 30, 2010, about 62 percent, or $1.5 billion, of California's $2.5 billion was obligated for local government projects. California officials stated that locally administered highway projects take longer to reach the reimbursement phase because of the additional steps required to approve local highway projects and because localities with relatively small projects tend to seek reimbursement in one lump sum at the end of a project to minimize time and administrative costs. The effect of projects sponsored by local agencies on reimbursements is not limited to California. Among all the 16 states and the District, reimbursement of funds suballocated for metropolitan, regional, and local use lagged behind that for state projects. Suballocated funds can be administered through local transportation agencies, such as city or county agencies, that can lack familiarity with federal requirements. As we have previously reported, local agencies have had challenges selecting projects that will meet these requirements, and suballocated funds have generally taken longer to obligate than nonsuballocated funds. Data show this pattern extending to reimbursements as well. New Jersey and Arizona had the lowest reimbursement rates on suballocated projects, 10 and 18 percent, respectively.
Table 7 shows the total reimbursement rates in the 16 states and the District, as well as the reimbursement rates for state and suballocated projects. Recovery Act highway obligations were used primarily for pavement improvement projects, such as resurfacing, reconstruction, and rehabilitation of existing roadways. Recovery Act public transportation funds were used primarily for upgrading transit facilities and improving bus fleets (see fig. 18).

States Asked FHWA to Deobligate Funds after the 1-Year Deadline, but Some Suballocated Areas Faced Challenges in Identifying Additional Projects for Funding

As we have previously reported, an economic stimulus package should ensure that projects are undertaken quickly to provide a timely stimulus to the economy. The Recovery Act included obligation deadlines to facilitate the timely use of funds, including early March 2010 (1-year) deadlines to obligate Recovery Act highway and transit funds. In our May 2010 report, we reported that the states met these deadlines. Since the March 2010 deadline for obligating Recovery Act highway funds, states have asked FHWA to deobligate some funds and have subsequently asked FHWA to obligate these funds for new projects. To use states' full apportionments, those funds must be obligated again by September 30, 2010, after which all unobligated highway funds will no longer be available to the states. As of August 2, 2010, about $397 million, or 2.6 percent, of total Recovery Act highway funds remained to be obligated in the 16 states and the District. Nationally, about $565 million remained. These amounts have increased steadily since the March 2010 deadline—for example, in the 1-month period between June 30, 2010, and August 2, 2010, the amount available for obligation increased from about $509 million to $565 million (see fig. 19). Projects supported with suballocated funds generally had higher levels of unobligated funds compared with projects using funds that are not suballocated. As of August 2, 2010, $199 million of the $7.7 billion available to suballocated areas nationwide remained to be obligated before the September 30, 2010, deadline. Also, several of the states we reviewed had unobligated suballocated funds that were roughly three to five times larger than the national average (see table 8). FHWA officials told us that the timely expenditure of funds on projects administered by local public agencies remains an area of concern and that the agency is closely monitoring these projects to ensure, on behalf of the states, that all funds are obligated. Funds that are not obligated by September 30, 2010, will be withdrawn. State officials identified several reasons projects might have been delayed in suballocated areas. Officials from the Arizona Department of Transportation, where 12.4 percent of suballocated funds were unobligated, said that many suballocated areas did not have projects ready for federal-aid funding in part because of limited staff and other resources to move projects through approvals and prepare documentation in a manner consistent with federal requirements. Officials from North Carolina, where 8.9 percent of suballocated funding was deobligated, told us that local agencies using suballocated funding faced challenges completing environmental documents, acquiring rights-of-way, and finalizing bid documents.
As a result, many projects local agencies considered to be "ready-to-go" did not meet various federal standards, and agencies had to find other projects, which created delays. Among the states we reviewed, most of the funds that the states asked FHWA to deobligate were from contract award savings. From March 2, 2010, to June 7, 2010, the 16 states and the District that we are reviewing requested that FHWA deobligate almost $457 million. About 85 percent of those funds were deobligated because contracts continued to be awarded below state cost estimates (see fig. 20). Withdrawn projects accounted for only about $17 million, or 4 percent, of deobligations from March 2 to June 7, 2010—less than 1 percent of the total $15.2 billion available to the 16 states and the District for highways. Two projects using suballocated funds in California accounted for about $9.7 million of the $17 million in withdrawn projects. In both cases, the project was withdrawn and later established as a new project. California officials told us they withdrew one $1.8 million project because local officials wanted to expand the scope of the project. Another $7.9 million project was withdrawn because it had an incorrect right-of-way certification. Officials told us that the state subsequently resubmitted the project and funding was obligated after correcting the certification.

Contract Data from FHWA's Recovery Act Data System Continue to Be Inaccurate

In May 2010, we reported that while progress has been made in awarding Recovery Act contracts and initiating work, the accuracy of contract data in FHWA's Recovery Act Data System (RADS) is of concern. Among other information, the Recovery Act requires the U.S. Department of Transportation (DOT) to report to Congress on the number of projects for which contracts have been awarded, for which work has begun, and for which work has been completed, and the amount of federal funds associated with these contracts. DOT established RADS because it had not previously collected and reported such information for the regular federal highway formula program. DOT relies on states to enter data into RADS and uses automated data checks and rules, as well as periodic reviews by FHWA division office officials located in every state, to improve the accuracy of state-reported data. We continued to find problems with the accuracy of RADS contract data. For example, more than 3,100 contracts were shown as having been awarded on the same date the funds were obligated. We also found that about 1,400 contracts were reported as awarded before FHWA obligated the funds. Because contracts are normally awarded several weeks or months after funds are obligated by FHWA, the numbers and amounts of contracts awarded and work begun are likely overstated. Because FHWA does not have accurate data from states in RADS, it is not able to use RADS to meet the Recovery Act reporting requirements for contracts. FHWA officials acknowledged that they cannot use data from RADS to provide information on contract award amounts. Officials said they instead use data from FHWA's financial management system to meet the Recovery Act reporting requirements for contracts because this system receives more checks for data accuracy. However, using FHWA's financial management system can also overstate the amount of funds under contract. FHWA reports data at the project level, not at the contract level; this is important because one project can include several contracts.
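The anomalies described above—contracts recorded as awarded on or before the date funds were obligated—are the kind of error that simple automated consistency rules can flag. The sketch below is illustrative only; the record layout and sample data are hypothetical and do not reflect FHWA's actual RADS schema or its data checks.

```python
from datetime import date

# Illustrative milestone consistency checks for contract records. Layout and
# data are hypothetical, not FHWA's actual system.
records = [
    {"project": "P-001", "obligated": date(2009, 5, 4),
     "awarded": date(2009, 5, 4), "work_began": date(2009, 6, 1)},
    {"project": "P-002", "obligated": date(2009, 7, 10),
     "awarded": date(2009, 6, 30), "work_began": date(2009, 8, 15)},
    {"project": "P-003", "obligated": date(2009, 8, 3),
     "awarded": date(2009, 9, 21), "work_began": date(2009, 10, 5)},
]

def flag_milestone_errors(record):
    """Return a list of rule violations for one project record."""
    errors = []
    if record["awarded"] <= record["obligated"]:
        errors.append("award date is on or before the obligation date")
    if record["work_began"] < record["awarded"]:
        errors.append("work-begun date precedes the award date")
    return errors

for rec in records:
    for problem in flag_milestone_errors(rec):
        print(f'{rec["project"]}: {problem}')
```

Checks of this kind enforce the expected sequence of milestones (obligation, then award, then work begun) before a record is accepted, which is the sort of rule that could address the patterns we found.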
When reporting at the project level, FHWA reports the entire project as being under contract once one contract is awarded, even if several more remain to be awarded. FHWA provided project-level data in its report to Congress dated May 7, 2010, but these data were labeled in the report as contract data. As noted above, the Recovery Act requires DOT to report not only the number of projects but also the total amount of federal funds associated with contracts that have been awarded, work that has begun, and work that has been completed. FHWA has taken some steps to improve data accuracy in RADS, but officials said that no date had been set for implementing changes. These officials said they have assembled a state advisory group to look at the challenges that exist in RADS and make recommendations on improvements. FHWA officials said they have not had sufficient resources to incorporate additional data checks into the software that would check for errors. Such checks could ensure that milestones are sequentially entered, thereby improving the accuracy of these data.

Many States Requested That FHWA Transfer Funds to FTA for Public Transportation Projects and Many States and Transit Agencies Elected to Use Some Funds for Operating Expenses, Although Data on Operating Expenses Is Limited

As we reported in our prior Recovery Act work, states have the option to request that FHWA transfer Recovery Act highway funds to FTA for use in public transportation programs, just as they do in the regular Federal Aid Highway Program. While most states transfer some funds each year to address transit priorities, Recovery Act data indicated that 21 states requested that FHWA transfer some Recovery Act funds to their public transportation programs. Many states transferred funds shortly after Recovery Act funds became available in February 2009. For example, Caltrans transferred almost $2 million in July 2009. Caltrans officials told us that their state has a robust transfer program because of the state's extensive public transportation system and the system's many needs. Caltrans' subrecipients used this funding for two large projects identified in the state's transportation improvement plan but for which sufficient funding had not been available. Specifically, one subrecipient is purchasing two buses for a rural transit agency, and the second is constructing a new intermodal transit hub that will serve the north Lake Tahoe area. Caltrans officials said that the Recovery Act funding was sufficient to complete these projects. According to FTA data, many state departments of transportation and transit agencies also used a portion of Recovery Act funds for public transportation operating expenses. In June 2009, Congress gave urbanized areas and states the authority to use up to 10 percent of certain Recovery Act transit funds for operating expenses. Data provided by FTA indicated that, nationwide, urbanized areas and states used about $190 million, or about 2 percent of Recovery Act funding for public transportation, toward operating expenses as of August 25, 2010.
These 169 grantees ranged from major urban transit agencies in San Francisco and St. Louis to transit agencies in smaller cities such as Charlottesville, Virginia, and Pocatello, Idaho. In addition, 18 states used a portion of their Recovery Act funding to pay for operating expenses for rural public transportation. FTA provided us data on the dollar amounts that urbanized areas and states obligated for operating expenses but noted that it did not begin to track, at a national level, the percentage of funds each state or urbanized area was using for operating expenses until August 2010. FTA officials also said that they rely on FTA's regional offices—as part of the grant approval and review process—to ensure that urbanized areas and states do not plan to spend more than the 10 percent threshold. However, FTA is considering instituting a control in its electronic grants management system so that staff could not award a grant if an urbanized area or state was over the 10 percent threshold. FTA officials also noted that there is no reporting requirement to make publicly available the percentage of funds that urbanized areas and states are using for operating expenses but that they are considering placing summary information on the use of Recovery Act transit funds for operating expenses on the FTA Web site. We spoke with several states and transit agencies about whether they used Recovery Act funds for operating expenses. For example, officials from Michigan's Department of Transportation, after asking nonurban transit agencies for input, found that funding for operating expenses was a priority. According to Michigan Department of Transportation officials, the majority of nonurban transit systems in Michigan are demand response—meaning that passengers are picked up and dropped off where they want to go within a defined service area—and officials told us that expenses for these services have been increasing annually. As a result, officials said the state used the maximum 10 percent of Recovery Act transit funds for this purpose. Officials from Caltrans told us that they used 1.1 percent of their Recovery Act funds for the operating expenses of their paratransit program. They added that these expenses were already allowable as capital expenses under both the Recovery Act as originally enacted and the regular federal transit programs. Caltrans officials told us that if they had had the option to use transit funds for public transportation operating expenses when the Recovery Act was first enacted, they would have used the full 10 percent. However, because California had already identified and requested that funds be obligated for capital projects before the option to use these funds for operating expenses became available, they chose to adhere to their initial plan rather than risk having the funds deobligated and applied to another purpose. Transit officials from Illinois and New Jersey said their states chose not to use Recovery Act funds for operating expenses. Illinois DOT officials told us they decided early in the process to devote all Recovery Act funds to capital projects, so that the use of these funds was evident to the public. Illinois also chose to use state funds to cover all administrative expenses related to managing Recovery Act funds, both to ensure maximum impact on capital projects and to minimize the paperwork needed to clear administrative charges for payments.
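The control FTA officials described considering—blocking an award that would push a grantee over the 10 percent cap on operating expenses—reduces to a simple comparison at award time. The sketch below is an illustration under assumed figures and a hypothetical data layout; it is not FTA's grants management system.

```python
# Illustrative award-time check against the 10 percent cap on Recovery Act
# transit funds used for operating expenses. Figures are hypothetical.
OPERATING_CAP = 0.10

grantees = {
    # apportionment and operating-expense amounts already awarded, in dollars
    "Urbanized Area X": {"apportionment": 85_000_000, "operating_awarded": 7_900_000},
    "State Y (rural program)": {"apportionment": 40_000_000, "operating_awarded": 1_200_000},
}

def can_award_operating_grant(grantee: str, requested: float) -> bool:
    """Return True only if the new operating award stays within the 10 percent cap."""
    g = grantees[grantee]
    proposed_total = g["operating_awarded"] + requested
    return proposed_total <= OPERATING_CAP * g["apportionment"]

print(can_award_operating_grant("Urbanized Area X", 1_000_000))        # False: would exceed the cap
print(can_award_operating_grant("State Y (rural program)", 2_000_000)) # True: still under the cap
```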
New Jersey Transit officials told us they used Recovery Act funds for preventive maintenance—such as bus mechanical maintenance—which they said was considered a capital expense but did not produce new infrastructure. Officials noted that this reduced pressure on the transit agency's budget, which freed up state funds for operating expenses. As we reported in May 2010, a portion of the highway money that was transferred was not obligated by the Recovery Act's March 2010 1-year obligation deadline for highways and transit. We noted that DOT did not treat these funds as subject to the Recovery Act obligation deadline for either FHWA or FTA because it concluded that once Recovery Act highway funds were transferred to FTA, they were subject to the provisions of the law that apply generally to the transfer of highway funds to FTA. At the time, we expressed no opinion on DOT's determination but stated that we were exploring this issue further. On further review, we have no objection to DOT's interpretation of the applicability of the Recovery Act's 1-year obligation deadlines.

Obligation and Reimbursement of Regular FHWA Formula Funds Slowed during the Recovery Act, Raising Questions about Whether Recovery Act Funds Had the Full Economic Stimulative Effect Intended

While states have been working to have FHWA obligate funds for constructing Recovery Act projects, we found that, compared with previous years, many states were slower in obligating and expending regular federal highway formula funds. FHWA officials stated that with the emphasis placed on the economic benefits to be gained, the obligation of Recovery Act funds and meeting the act's statutory deadlines have taken priority. States are facing drastic fiscal conditions, and FHWA officials noted that economic and budget difficulties in many states have led to staffing shortages. FHWA officials also suggested that uncertainty about future program funding levels may have slowed spending because a long-term reauthorization of federal programs has not yet been enacted. Nationally, as of June 30, 2010 (the end of the third quarter of the fiscal year), states had $19.7 billion remaining to be obligated—63 percent more than they had at the same point in the 3 previous years (see fig. 21). In addition, while funding available to states for highways has increased in each of the last 3 fiscal years, we found that as of July 31, 2010, the reimbursement of regular federal highway formula program funds was lower than the reimbursement at the same point in the 3 previous fiscal years (see fig. 22). As figure 23 shows, this trend was also true on a monthly average basis. Specifically, the reimbursement of regular federal highway formula funds for the first 10 months of fiscal year 2010 has been almost 18 percent (or about $4.3 billion) less than the average reimbursement in the previous 3 fiscal years. In the last 3 months of fiscal year 2010, state highway agencies not only have to request that FHWA obligate over $500 million in remaining Recovery Act funds but also $19.7 billion of regular federal highway formula funds. The amount of unobligated regular federal highway formula funds varied among states. For example, Illinois had none as of June 30, 2010, while Utah had $178.4 million—almost 6 times its average balance of unobligated funds over the 3 previous years.
Nationally, we found 16 states with over twice the amount of unobligated funds that they had in the past, while 5 states had fewer unobligated funds than in the past. Some state officials told us they had not been obligating regular federal highway formula funds as quickly because they had been focusing on meeting the Recovery Act obligation deadlines and did not have the resources to do both. Because states did not spend regular federal highway formula funds at the same pace as in previous years while also spending Recovery Act funds, the full economic benefits of Recovery Act funds are likely to be delayed. Specifically, if states had awarded contracts and begun expending those regular federal highway formula funds at the same rate as in previous years and in conjunction with spending Recovery Act funds, states would have experienced an earlier stimulus effect. For funding being obligated now, projects will need up to several months to award contracts and initiate construction, and the effect on the economy comes when construction is initiated and workers are employed. FHWA officials said they expect all regular program funds to be obligated by the end of the fiscal year. To ensure that all authorized funds are obligated nationally each year, FHWA redistributes obligation authority from states that are not able to obligate their funds to other states that are. Despite funds being obligated at a slower rate than in previous years, in August 2010, when we completed our review, the 16 states and the District all reported to FHWA that they would fully obligate fiscal year 2010 highway formula funds. We will continue to monitor the relationship of obligations and reimbursements in both the regular federal highway formula program and the Recovery Act in future reviews.

DOT Is Developing Plans to Assess the Impact of the Recovery Act but Has Not Committed to Assessing Long-Term Benefits

The goals of the Recovery Act were not only to promote economic recovery and to preserve and create jobs but also to make investments in transportation and other infrastructure that would provide long-term economic benefits. However, the Recovery Act did not include requirements that DOT or states measure the impact of funding on highway and transit projects to assess whether these projects ultimately produced long-term benefits. In our May 2010 report, we noted that, although DOT developed performance plans to measure the impact of Recovery Act transportation programs, these plans generally did not contain an extensive discussion of specific goals and measures needed to assess the impact of Recovery Act projects. As we have reported, it is important for organizations to measure performance to understand the progress they are making toward their goals. In our May 2010 report, we noted several efforts DOT initiated to strengthen its capacity to assess the impact of Recovery Act funds. For example, DOT is exploring opportunities to link databases that store information about road smoothness and congestion, bridge structural sufficiency, and transit performance with financial data. Our May report recommended that DOT assess the results of Recovery Act transportation investments and determine whether these investments produced long-term benefits. We further recommended that, in the near term, DOT determine the types of data and performance measures needed to conduct such an assessment and, as appropriate, identify specific authority DOT may need to collect and report on these measures.
In its response, DOT noted that it expected to be able to report on Recovery Act outputs, such as the miles of road paved, bridges repaired, and transit vehicles purchased, but not on outcomes, such as reductions in travel time, nor did it commit to assessing whether transportation investments produced long-term benefits. DOT further explained that limitations in its data systems, coupled with the magnitude of Recovery Act funds relative to the overall annual federal investment in transportation, would make assessing the benefits of Recovery Act funds difficult. DOT indicated that, with these limitations in mind, it is examining its existing data availability and, as necessary, would seek additional data collection authority from Congress if it became apparent that such authority were needed. While we are encouraged that DOT plans to take some steps to assess its data needs, it has not committed to assessing the long-term benefits of Recovery Act investments in transportation infrastructure. We are therefore keeping our recommendation on this matter open.

DOT Plans to Report on State Progress in Meeting Maintenance-of-Effort Provisions

As we have previously reported, timely information on the progress states are making in meeting the Recovery Act maintenance-of-effort provisions could better inform policymakers' decisions on the usefulness and effectiveness of the maintenance-of-effort requirements and of including similar provisions in future legislation. The Recovery Act required each governor to certify that the state would maintain the level of spending for the types of transportation projects funded by the Recovery Act that it had planned to spend as of the day the Recovery Act was enacted. As part of this certification, the governor of each state was required to identify the amount of state funds planned to be spent from February 17, 2009, through September 30, 2010. Timely information is also important to assessing the impact of Recovery Act funding and whether it achieves its intended effects of providing countercyclical assistance and increasing overall spending. Our earlier reports have noted that DOT does not have current information on the progress states are making toward meeting their certified amounts. This is because the Recovery Act does not require states to report final expenditures until February 2011. As a result, DOT will not make a determination as to whether states have met their required program expenditures until some 6 months after the maintenance-of-effort provision time period expires on September 30, 2010. We have also reported that the challenges to implementing a maintenance-of-effort provision have been considerable—as of mid-August 2010, for example, DOT had not yet fully accepted the certifications of three states. As we have reported, these implementation challenges, coupled with the fiscal challenges states have faced, raise questions as to whether the maintenance-of-effort provision will achieve its intended purpose of preventing states from substituting federal funds for some of their planned spending on transportation programs. That said, DOT and FHWA have invested a significant amount of time and work to ensure consistency across states in how compliance with the act is certified and reported. As a result, DOT is in an advantageous position to understand lessons learned—what worked, what did not, and what could be improved in the future.
Our March 2010 report recommended that DOT gather timely information on the progress states are making in meeting the maintenance-of-effort requirements. Specifically, we recommended that DOT gather these data and report preliminary information to Congress within 60 days of the end of the maintenance-of-effort period on (1) whether states met required program expenditures as outlined in their maintenance-of-effort certifications; (2) the reasons that states did not meet these certified levels, if applicable; and (3) lessons learned from the process. In response, DOT officials stated that DOT will encourage states to report preliminary data for the certified period ending September 30, 2010, and deliver a preliminary report to Congress within 60 days of the certified period. DOT officials said they have developed a timeline for obtaining information to produce this report and will issue guidance by October 1, 2010, requesting that states update actual aggregate expenditure data and provide the data to DOT by November 15, 2010. DOT officials said they will use this information to develop the report to Congress and that DOT will submit the report no later than November 30, 2010.

Publicly Available Information Continues to Overstate the Extent to Which Recovery Act Funds Were Directed to Economically Distressed Areas

Our previous reports have identified challenges DOT faced in implementing the Recovery Act requirement that states give priority to projects located in economically distressed areas. In July 2009, we reported substantial variation in the extent to which states prioritized projects in economically distressed areas and how they identified these areas. Many states based their project selections on other factors and only later identified whether these projects were in economically distressed areas. We also found instances of states developing their own eligibility requirements for economically distressed areas using data or criteria not specified in the Public Works and Economic Development Act of 1965, as amended. In response to our recommendation, FHWA, in consultation with the Department of Commerce, issued guidance to the states in August 2009 that defined "priority" and directed states to give projects that were located in an economically distressed area and could be completed within the 3-year time frame priority over other projects. In addition, FHWA's guidance set out criteria for states to use to identify economically distressed areas based on "special need." Three states—Arizona, California, and Illinois—developed their own eligibility requirements or applied a special-need criterion that overstated the number of counties, and thus the amount of funds, directed to economically distressed areas. For example, California designated all counties as economically distressed, and we identified 219 projects with an estimated cost of $1.1 billion coded as being in economically distressed areas that should not have been so coded. In early February 2010, FHWA determined that the documentation these states provided to justify these additional designations was not consistent with FHWA guidance. In May 2010, we recommended that FHWA advise these states to correct the designations, and in July 2010, FHWA instructed its division offices to advise the states to revise their designations and to report these projects as being in noneconomically distressed areas.
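Screening project designations against the quantitative tests commonly cited from the Public Works and Economic Development Act—per capita income at or below 80 percent of the national average, or an unemployment rate at least 1 percentage point above the national rate—can be expressed as a simple rule, with anything else requiring a documented special-need justification. The sketch below is illustrative only; the thresholds are paraphrased from our understanding of the statute and should be verified against it, and the county data are hypothetical.

```python
# Illustrative screen of project county designations against two quantitative
# tests (paraphrased thresholds; county data hypothetical). Counties meeting
# neither test would need a documented "special need" justification.
NATIONAL_PER_CAPITA_INCOME = 40_000  # assumed national figure, dollars
NATIONAL_UNEMPLOYMENT = 9.5          # assumed 24-month national rate, percent

counties = {
    "County A": {"per_capita_income": 30_500, "unemployment": 12.1},
    "County B": {"per_capita_income": 43_200, "unemployment": 9.8},
}

def meets_quantitative_tests(county: str) -> bool:
    c = counties[county]
    income_test = c["per_capita_income"] <= 0.80 * NATIONAL_PER_CAPITA_INCOME
    unemployment_test = c["unemployment"] >= NATIONAL_UNEMPLOYMENT + 1.0
    return income_test or unemployment_test

projects = [("P-101", "County A"), ("P-102", "County B")]
for project, county in projects:
    if not meets_quantitative_tests(county):
        print(f"{project}: coded county ({county}) needs a documented special-need justification")
```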
In December 2009, DOT testified to the House Committee on Transportation and Infrastructure that 57 percent of projects were in economically distressed areas—including 99 percent and 100 percent of Recovery Act highway funding in California and Arizona, respectively. However, as we noted above, these data had not yet been corrected by DOT and therefore overstated the amount of funding, and this testimony is DOT's only public accounting of how states implemented this provision of the Recovery Act. Because FHWA's July guidance did not direct states other than Arizona, California, and Illinois to correct existing entries, we reviewed RADS data on projects in economically distressed areas. We found about 2,300 projects that did not appear to meet FHWA's guidance for classifying projects in economically distressed areas and thus appeared to contain errors that would result in an overstatement of the funds directed to these areas. For instance, over 2,100 of these entries did not include an explanation justifying the designation of an area as economically distressed. In response to this information, DOT officials told us that they manually compared these entries with maps designating distressed areas and with other data sources. When we completed our review, FHWA officials said they were able to verify that most of these data were accurate; however, they did not provide documentation of the analysis to us. DOT stated it does not intend to correct this information because the Recovery Act does not contain a specific requirement that DOT report on the extent to which states prioritized and directed funds to economically distressed areas. However, without accurate publicly available information, it is difficult to determine the extent to which Recovery Act funds were directed to areas most severely impacted by the recession or to know the extent to which states prioritized these areas in selecting projects for funding. The Recovery Act included a number of requirements and provisions designed to support the act's goals of promoting economic recovery, creating jobs, and, in the case of transportation funds, making investments that contribute to long-term economic benefits. Although the act included some reporting requirements to accompany these provisions, it did not specify such requirements in all cases. Noting the large amount of federal transportation funding provided in the Recovery Act, we have previously made recommendations that DOT take additional steps to go beyond the specific reporting requirements in the act and that DOT develop plans to assess the long-term benefits of Recovery Act funds on the transportation system. We have also made recommendations that DOT improve and correct the data it is collecting to better facilitate a public accounting of the use and impact of these funds. For instance, we have recommended that DOT report on the extent to which states met maintenance-of-effort requirements 60 days after the end of the certification period. In his March 2009 memorandum to the heads of executive departments and agencies, the President emphasized the need for providing public transparency and accountability of these expenditures. We are making two new recommendations to DOT because of the value such information can offer policy decision makers and the public in better understanding whether the use of Recovery Act funds met intended goals. We plan to continue to monitor these issues in our future work.
To ensure that Congress and the public have accurate information on the extent to which the goals of the Recovery Act are being met, we recommend that the Secretary of Transportation direct FHWA to take the following two actions:

Develop additional rules and data checks in the Recovery Act Data System, so that these data will accurately identify contract milestones such as award dates and amounts, and provide guidance to states to revise existing contract data.

Make publicly available—within 60 days after the September 30, 2010, obligation deadline—an accurate accounting and analysis of the extent to which states directed funds to economically distressed areas, including corrections to the data initially provided to Congress in December 2009.

The Energy Efficiency and Conservation Block Grant program (EECBG) is administered by the Office of Energy Efficiency and Renewable Energy within the Department of Energy (DOE). It was authorized in the Energy Independence and Security Act (EISA) of 2007 and funded for the first time by the Recovery Act. The EECBG program provides about $3.2 billion in grants to develop, promote, implement, and manage projects to improve energy efficiency and reduce energy use and fossil fuel emissions in local communities. Of this amount, approximately $2.8 billion has been allocated through formula grants to about 2,150 state, local, and tribal governments (recipients) as of August 23, 2010. Funding is allocated to state recipients based on their population and total energy consumption; to city and county recipients based on their resident and commuter populations; and to Native American tribes based on population and climatic conditions. Eligible applicants for formula EECBG grants include the 50 states; the District of Columbia (the District); five U.S. territories; cities or city equivalents, such as towns or villages with populations of at least 35,000; counties with populations of at least 200,000; and federally recognized Native American tribes. Each state-level recipient must use at least 60 percent of its allocation to provide subgrants to local government units that are not eligible for direct formula grants. In addition to these formula grants, the Recovery Act also includes approximately $400 million in EECBG funding to be awarded on a competitive basis. Competitive grants are designed to stimulate activities that can fundamentally and permanently transform energy markets and sustain themselves beyond the grant period. Our review focuses on the direct formula grants. The Recovery Act requires that DOE obligate $2.8 billion in formula EECBG funds by September 30, 2010. DOE has obligated most of the EECBG funds to recipients and has plans to obligate the remainder by the September 30 deadline. As of September 13, 2010, DOE has obligated to recipients more than 99 percent of this amount—$2.77 billion. Nearly all of the approximately 2,150 recipients nationwide—approximately 1,700 cities and counties, 56 states and territories, and 392 tribal communities—have received EECBG funding, with about 68 percent (approximately $1.9 billion) going to cities and counties, 28 percent (approximately $767 million) to states, territories, and the District, and 2 percent (approximately $55 million) to Native American tribes.
Steps are being taken to ensure that the remaining funds will be obligated to recipients by the September 30 deadline, and the DOE Inspector General has recently reported that there is nothing to indicate that DOE's plan to obligate the remaining Recovery Act funding by September 30 will not be effective. DOE announced the opportunity for interested applicants to submit applications for EECBG formula funding on March 26, 2009. As part of the application process, interested states and local units of government were required to submit an Energy Efficiency and Conservation Strategy (EECS) that described their strategy for achieving the goals of the program. DOE reviewed recipients' strategies and had 120 days to approve or disapprove them. DOE officials report that as of July 2010, most recipients have had their EECS approved and are now moving from the application to the execution phase. Also, as of July 2010, DOE officials report that EECBG grant recipients have obligated to subrecipients about half ($1.3 billion) of the $2.8 billion awarded to recipients through formula grants. While most recipients are moving to implement projects, recipients awarded larger grant amounts have obligated about twice as large a share of their awards as recipients awarded smaller grant amounts. As of July 2010, the 291 recipients receiving EECBG grants above $2 million have obligated to subrecipients about half (approximately $1.1 billion) of the $1.9 billion awarded to them by DOE through formula grants. The remaining 1,860 or so smaller communities (communities with grants less than $2 million) have obligated only about one-quarter (approximately $0.2 billion) of the approximately $0.9 billion awarded to them by DOE through formula grants. To facilitate increased obligations, DOE has encouraged recipients to meet targets of obligating 90 percent or more of their funds by June 25, 2010. Regarding spending, DOE reports that as of August 2010, about 18 months since the passage of the Recovery Act, recipients have spent about 11 percent (approximately $311 million) of the $2.8 billion authorized for formula funding for the program. Consistent with this, many recipients we visited had spent less than 8 percent of the amount awarded. While many recipients have spent only a small part of their funding, DOE is taking steps to accelerate spending. For example, DOE has encouraged recipients to meet a spending target of 20 percent by September 30, 2010. DOE officials believe this has had a positive impact on spending rates. In particular, DOE officials note that many recipients receiving less than $250,000 met DOE's target of spending 20 percent as of June 30, 2010—3 months ahead of schedule. DOE officials also note that while spending rates are at 11 percent, much more of the funding is obligated, and projects are in the process of being selected and started. They note that the actual costing of the funds for projects is one of the last steps in the process. In line with EISA, DOE placed restrictions on the selection of projects and on the activities for which funds may be used. DOE required that projects be selected from the 14 eligible activities identified in EISA.
As of July 28, 2010, as shown in table 9, more than 60 percent of EECBG funds have been obligated for three purposes: energy-efficiency retrofits (35.3 percent), such as replacement of heating and cooling systems in fire stations and libraries in the District; financial incentive programs (15.6 percent), such as the rebate program in New Jersey that pays for energy-efficiency retrofits not already covered by existing incentives; and buildings and facilities (11.1 percent), such as a geothermal system at a new corrections facility. In many of the communities we visited, energy-efficiency improvements were made to public buildings, but the types of projects selected for implementation in public buildings and facilities varied considerably. For example, projects included improvements to a waste treatment plant, occupancy sensor lighting at public schools, solar trash compactors to reduce the frequency of trash pickup, solar parking meters, and replacement of personal computer workstations with more energy-efficient virtual desktops that reduce both power consumption and environmental waste. While not required, grant recipients were also encouraged to implement programs and projects that leveraged other public or private resources, enhanced workforce development, persisted beyond the funding period, and promoted energy market transformation, such as revolving loans and energy savings performance contracting. Recipients that we visited indicated that their selection of projects was also based on a variety of additional criteria, including communities' determination of energy savings; job creation; availability of staff and other resources; the extent to which communities could benefit after Recovery Act funds run out; the ease with which projects could be implemented; the potential for return on savings; and populations to be served. For example, the District chose to focus on target populations that it had been unable to serve, such as nonprofits and small businesses. Kent County, Michigan, also considered the availability of county staff to complete the project. Several recipients have selected projects that leverage other state energy efficiency programs. For example, in Arizona, EECBG funds were used to take advantage of a program that encourages commercial and government customers to implement energy efficiency projects for which a public utility will pay up to 30 percent of the cost. In addition, a few of the recipients we visited had developed a revolving loan fund to provide low-interest loans for energy-efficient improvements in businesses and commercial buildings and facilities. While many recipients we visited reported that technical assistance, especially that provided by DOE project officers, was helpful, timely, and sufficient, some recipients and DOE project officers we interviewed reported that project implementation guidance, especially early in the process, was unclear and overwhelming and that such guidance has contributed to the delay of project implementation. In particular, several recipients we visited indicated that DOE guidance regarding timeline requirements, drawing down funds, and Buy American requirements was at times unclear, duplicative, and ever-changing. DOE is working to provide greater assistance to recipients, which DOE officials believe will increase responsiveness and clarify guidance. In particular, DOE is adding staff in order to reduce the workload of project officers and monitors to give them more time to assist recipients.
DOE also recently issued guidance that reduces reporting requirements, reduces workloads, and streamlines communication with recipients. Regarding timeline requirements, grant recipients reported that DOE's guidance on the timeline for obligating, spending, and drawing down EECBG funds has been confusing. For example, DOE's Funding Opportunity Announcement, as well as its Program Notice 10-011 for the EECBG program, states that EECBG recipients are to obligate all funds within 18 months of the effective date of their award and expend all funds within 36 months of the effective date of their award. Most recipients had been awarded funding in the fall of 2009, and several of the recipients we visited believed they were on track to obligate funds within 18 months of the effective date of their award (by the spring of 2011). However, in April 2010, DOE set internal milestones designed to help recipients ensure that they are on track to meet their obligation and expenditure deadlines. Specifically, DOE requested that recipients have 90 percent of their funds under contract and obligated by June 25, 2010, and spend a minimum of 20 percent of their funds by September 30, 2010. While DOE reports that some recipients found the guidance useful and that the new guidance was helpful in getting many recipients to obligate funds quickly, several recipients we visited said they were confused by this new guidance. These recipients expressed concern that because of the milestones, they had to obligate funds sooner than expected—instead of having 18 months to obligate funds, they had to have funds obligated in half the time, approximately 9 months after funds had been awarded. In addition, National Association of State Energy Officials (NASEO) representatives said that DOE had not made clear to recipients that the revised timelines were "milestones" and not deadlines and that several recipients had the impression that funds could be taken away if recipients did not meet the revised spending targets. Regarding requirements on drawing down funds, while DOE reports that many recipients found information on drawing down funds helpful, a few recipients we visited were confused about when they needed to draw down funds to their accounts. These recipients believed that, based on DOE guidance, they should draw down their entire award soon after it was awarded. For example, Colorado Springs, Colorado, officials reported that they drew down the entire $3.7 million award as of March 2010 based on their understanding of the Funding Opportunity Announcement, even though they were not yet ready to spend it. In April, a Colorado Springs official realized the mistake, and the city paid back $3.1 million in mid-May 2010. Similarly, local officials in Jackson County, Michigan, mistakenly drew down their award and told us that when they tried to return the money, DOE required them to make interest payments on the amount. DOE issued guidance on June 23, 2010, in response to recipient questions on drawing down funds. Regarding Buy American requirements, recipients report that guidance on the Buy American requirement was difficult to understand and ever-changing. The Buy American requirement of the act generally requires that grant recipients use iron, steel, and manufactured goods produced in the United States on all Recovery Act-funded projects.
However, some recipients found that it was unclear how to comply with the Buy American requirements in a reasonable way and that the guidance was lacking or difficult to understand. For example, Colorado Springs officials said that DOE did not have a list of eligible vendors and that trying to ensure compliance with the Buy American requirement delayed their light-emitting diode (LED) street lighting replacement projects by at least 4 months. Officials in Berks County, Pennsylvania, found it difficult to determine the source of some components of a product and therefore whether the product could be used. NASEO representatives said that DOE's guidance did not provide sufficient detail to enable officials to determine the types of brands or goods they could purchase and that recipients did not have the expertise to trace the supply chain of manufactured goods to determine origin. In recent months, DOE has made numerous attempts to help recipients understand Buy American requirements, including guidance e-mailed to all recipients in May, a webinar in June, and a subsequent notice to recipients regarding the guidance. DOE officials did note that they were concerned about providing lists of vendors because that might be viewed as an endorsement of particular vendors at the potential exclusion of other eligible vendors. DOE officials said that DOE cannot recommend specific products, in part because of the large number of eligible products and the potential ethical and liability concerns associated with a federal agency recommending specific manufacturers. While many recipients have not had problems understanding program requirements and have successfully navigated them through training and technical assistance provided by project officers, DOE is working to give project officers the tools to better assist recipients in navigating DOE guidance. In particular, since March and April 2010, DOE has added staff to reduce the workload of project officers and monitors, which DOE officials believe has increased its responsiveness to recipients in clarifying guidance. In addition, as of July 2010, DOE is providing project officers with Automated Standard Application for Payment (ASAP) reports so that they can monitor the drawdown of funds. DOE has also recently standardized e-mail distribution lists and provided more frequent communication to recipients. DOE monitors grant recipients primarily through its project officers and monitors, who work directly with recipients to provide guidance and evaluate performance. They also gather and analyze information about project planning, implementation, and outcomes to help ensure data quality and to ensure that statutory requirements are met. There are three levels of review: desktop, on-site, and work-site reviews. During desktop monitoring, monitors examine recipients' reports to assess progress; to determine compliance with federal rules and regulations and with the goals and objectives of the grants; and to review the reporting and tracking of resources expended by the recipient and its subrecipients. During on-site monitoring, monitors review deficiencies identified through routine monitoring and how the recipient is resolving the outstanding quality and operational issues. On-site monitoring may also include interviews with contractors to determine whether follow-up protocols were conducted and deficiencies were corrected. During work-site monitoring, monitors review the project, facility, or building being completed.
One of the project officers' key functions is to conduct both on-site and desktop monitoring. DOE monitoring is conducted at minimum frequencies (see table 10), depending on the funding received by the recipient, and can be increased if project officers have sufficient cause, resources, time, and the approval of management. In these reviews, project officers assess both financial and project status by reviewing financial records, activities, budgets, and spending plans to ensure that sufficient progress is being made against planned activities. The monitoring questions were updated on July 30, 2010, reducing the number of questions asked from over 100 to about 30 in on-site reviews and about 6 multipart questions in quarterly reviews. Now, the desktop monitoring quarterly checklist consists of financial questions such as "For each activity, does the expenditure match, within reason, the amount of work completed?" and programmatic questions such as "For each activity, is the grantee on track to meet performance goals?" As of July 28, 2010, DOE has conducted 3,985 desktop reviews and about 170 on-site reviews of the EECBG program. Through its monitoring, DOE has found that smaller recipients have been more likely to fail to complete quarterly programmatic and financial reporting to DOE. While 97 percent of larger recipients (recipients with grants greater than $2 million) have completed required quarterly reports due April 30, only 79 percent of recipients receiving less than $250,000 completed quarterly reports. While DOE monitors its grant recipients (as well as conducts work-site reviews of subrecipients as needed), grant recipients are expected to monitor their subrecipients. While DOE does not expect grant recipients to have a formal monitoring plan, DOE does require that state recipients "develop a sub-granting process…that prevents fraudulent spending." DOE is also developing guidance that includes best practices for how states should monitor their subrecipients. Several of the states we visited have a formal plan in place for monitoring their subrecipients, and a few have begun that monitoring. For example, Colorado reviews monthly reports prepared by subrecipients, Michigan project managers review detailed expenditure and employment data submitted by subrecipients on a quarterly basis, and Massachusetts is beginning to monitor subrecipients through regular interactions with them. DOE also expects localities to have a system for monitoring to ensure that subrecipients comply with EECBG requirements. However, as we also found in our May 2010 report on DOE's Weatherization Assistance Program, DOE provides localities that received direct funding with discretion in developing and implementing internal controls, and as a result, several localities we visited did not have a formal plan in place for monitoring subrecipients or the work performed. Many of the localities we visited have developed a system for monitoring, which may include procedures such as evaluation of the reasonableness of costs, monitoring of building improvements and post-improvement audits, checking payroll for compliance with Davis-Bacon requirements, checking the validity of expenses, and announced and unannounced site visits.
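To illustrate the kind of consistency check embodied in DOE's desktop monitoring question about whether expenditures match the amount of work completed, the following sketch shows one simple way such a comparison could be made. It is only an illustration; the activity names, dollar figures, and tolerance threshold are hypothetical and are not drawn from DOE's actual monitoring tools.

```python
# Illustrative only: a simple expenditure-versus-progress check in the spirit of
# DOE's desktop monitoring question, "For each activity, does the expenditure
# match, within reason, the amount of work completed?" All data are hypothetical.

def expenditure_matches_progress(spent, budget, pct_work_complete, tolerance=0.15):
    """Flag an activity when the share of budget spent differs from the share of
    work completed by more than the tolerance (15 percentage points by default)."""
    share_spent = spent / budget if budget else 0.0
    return abs(share_spent - pct_work_complete) <= tolerance

# Hypothetical activities for a single grant recipient.
activities = [
    {"name": "Retrofit fire station HVAC", "budget": 250_000, "spent": 90_000, "pct_complete": 0.40},
    {"name": "LED streetlight replacement", "budget": 400_000, "spent": 300_000, "pct_complete": 0.30},
]

for a in activities:
    ok = expenditure_matches_progress(a["spent"], a["budget"], a["pct_complete"])
    status = "consistent" if ok else "review: spending and progress diverge"
    print(f'{a["name"]}: {status}')
```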
However, despite guidance on how to report jobs, it is unclear whether all localities' systems for monitoring include measures to ensure the reliability of reported data. For example, in one locality, an official reported that there were no data quality steps to ensure the reliability of the total job count. Another recipient said that it requested greater clarification on what was expected from DOE regarding internal controls but did not get more than a general answer about providing good accountability for the use of funds. EECBG recipients are required to report quarterly to DOE on three categories of activity and results metrics. The categories are jobs created or retained; standard programmatic metrics, such as obligations, outlays, and metrics associated with the EECBG activity undertaken; and other critical metrics, such as energy savings and energy cost savings. Recipients report to DOE through its Performance and Accountability for Grants in Energy (PAGE) system. In addition, recipients of grants greater than $2 million must report to DOE monthly on funds spent and funds obligated, the amount of relevant activity completed, and additional information as required per activity. Several recipients we visited were reporting programmatic metrics such as obligations, expenditures, and jobs created. In addition, several recipients we visited were just beginning to implement projects, and DOE officials told us that recipients are not required to report on certain metrics until their projects are complete. As a result, several recipients do not yet have data to report for critical outcome metrics such as energy savings and emissions reductions. Some recipients we visited experienced challenges reporting outcomes using these metrics. For example, in one locality, officials said that they planned to estimate jobs because they do not have hourly contracts. Similarly, in another locality, officials were not aware of how to calculate full-time equivalents (FTE) per OMB guidance. In addition, because they experienced challenges in measuring impact metrics, recipients in Georgia used a variety of methods for calculating metric values. For example, officials from Columbus, Georgia, stated that energy savings from upgrades to traffic lights will be estimated by making assumptions about the amount of energy used by the original lights compared to the improved traffic lights. A Warner Robins, Georgia, official explained that the city intends to report project impacts by comparing past monthly utility bills for the water treatment plant to new monthly utility bills. To measure the impact of energy efficiency improvements, Cobb County, Georgia, plans a mixed approach. According to officials, the county will take field measurements of the performance of old equipment prior to removal and of the replacement equipment, as well as use energy models or engineering estimates, including estimates provided by the county's energy audit consultant. Cobb County also intends to use the new energy software procured through the EECBG grant to benchmark and track energy use, cost, and savings and to revise calculations based on observed energy usage for each facility. To help ensure consistency, Georgia has provided DOE guidance to its subrecipients detailing instructions for estimating and reporting energy savings. In addition, some recipients experienced challenges in the process for reporting metrics, especially due to DOE's reporting system, PAGE.
DOE provides resources on a Web site to assist recipients in navigating the PAGE reporting system and offers assistance to recipients via a helpdesk. However, several recipients described the reporting process as overwhelming or frustrating and said that reporting was time-consuming and required extensive resources. One DOE project official said he would like to see the reporting process streamlined because it is too complicated for recipients. An official from one EECBG locality in California said that if he had known about the extent of the reporting burden, he might not have applied for the grant. In particular, some recipients said the PAGE reporting system was not user-friendly, and others were confused about what to input into the PAGE system. Similarly, one DOE project officer said that it seems that every time recipients log in, there is a new structure or a new report that is required to be filled out. Other grant recipients said that using OMB's federalreporting.gov and DOE's PAGE system is difficult because the systems ask for information to be reported in different ways. In addition, some officials have reported challenges in understanding for how many quarters they are required to report some metrics, especially energy saved once a project has been implemented. Adding to recipients' frustration about the reporting process has been the volume of contact from various DOE offices about reporting requirements or changes in reporting requirements. For example, one DOE project officer we spoke with said that grant recipients have expressed frustration at having received so much e-mail from different DOE offices about guidance or changes to guidance. In addition, in Redding, California, officials initially expressed frustration that questions about reporting guidance required calls to several DOE staff to find the right person to answer their questions. Kent County, Michigan, officials said that reporting has been challenging because of the multiple guidance documents released and because DOE's PAGE system was not user-friendly. DOE is beginning to take steps to deal with the amount of guidance and requirements being provided to recipients. DOE plans to issue guidance in August 2010 to assist recipients in navigating the PAGE system, correctly categorizing metrics, and interpreting and understanding financial reporting requirements. In addition, DOE expects to issue formal guidance for the reporting period ending August 30, 2010, in which DOE will no longer require recipients with formula awards greater than $2 million to report obligations and performance metrics on a monthly basis. Also, recipients will no longer be required to report hours worked through nonfederal funds or outlays of nonfederal funds on a quarterly basis. DOE estimates that these changes will increase administrative reporting efficiency by approximately 40 percent across the program. Separately, in June 2010, DOE began an effort called "One Voice" that is intended to improve and streamline communication with recipients. This effort will entail the circulation of a weekly newsletter with announcements regarding guidance, training, and events and will also streamline communication via e-mail. DOE is also working on developing specific requirements for closing out EECBG grants that should clarify when recipients can stop reporting, and a working group within DOE plans to clarify the energy metrics reporting guidance.
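The jobs and energy metrics that some recipients found difficult to calculate are, at bottom, simple arithmetic once the underlying data are in hand. The sketch below illustrates two of the calculations described above: a full-time-equivalent figure computed as hours worked divided by the hours in a full-time quarterly schedule (the general approach in OMB's Recovery Act jobs guidance), and an energy cost savings estimate based on comparing pre- and post-project utility bills, as Warner Robins officials described. The hours, the 520-hour quarter, and the bill amounts are illustrative assumptions, not values taken from DOE or OMB guidance.

```python
# Illustrative calculations only; the hours, schedules, and bill amounts are hypothetical.

def full_time_equivalents(hours_worked, full_time_hours_in_quarter=520):
    """Express quarterly hours worked as full-time equivalents (FTE). OMB's Recovery
    Act jobs guidance takes this general form; 520 hours (40 hours x 13 weeks) is an
    assumed full-time quarterly schedule, not a figure quoted from the guidance."""
    return hours_worked / full_time_hours_in_quarter

def estimated_cost_savings(pre_bills, post_bills):
    """Estimate savings by comparing average monthly utility bills before and after
    an efficiency project, the approach Warner Robins officials described."""
    pre_avg = sum(pre_bills) / len(pre_bills)
    post_avg = sum(post_bills) / len(post_bills)
    return pre_avg - post_avg  # positive value = average monthly savings

# Hypothetical example: 1,040 grant-funded labor hours reported for one quarter.
print(f"FTEs reported for the quarter: {full_time_equivalents(1040):.2f}")

# Hypothetical monthly utility bills (dollars) before and after a retrofit.
savings = estimated_cost_savings(pre_bills=[8200, 7900, 8500], post_bills=[6900, 6700, 7100])
print(f"Estimated average monthly savings: ${savings:,.0f}")
```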
The Recovery Act appropriated $3.1 billion to the State Energy Program (SEP), which is administered by the Department of Energy (DOE) and is to be spent over a 3-year period by 56 recipients: the states, U.S. territories, and the District of Columbia (the District). The SEP provides funds through formula grants to achieve national energy goals such as increasing energy efficiency and decreasing energy costs. Created in 1996, the SEP has typically received less than $50 million per year; the Recovery Act thus provided a substantial increase in funding for this program. Recipients are making progress obligating SEP funds, according to August 30, 2010, data from DOE. As of August 30, 27 states, territories, and the District of Columbia reported obligating at least 80 percent of their funds, meeting a departmental goal, and another 24 states and territories reported obligating between 50 percent and 80 percent of their funds. Some states and territories continued to lag in obligating funds—5 states and territories reported obligating less than 50 percent of their funds. Currently, a limited amount of national data is available on planned or actual state spending trends. DOE officials noted that they would not have final aggregate national spending data until the funds are fully obligated by recipients in late 2010. Until that time, funding may still shift among spending categories. The data provide information on different spending categories that DOE has recommended to recipients. The most recent data available in August 2010 indicated that the funds were directed to the following:
buildings (50 percent): programs such as school and government improvements, energy-efficiency building code adoption and training, and revolving loan programs;
electric power and renewable energy (30 percent): examples include wind turbine deployment, ground source heat pumps, and solar generation;
industry (8 percent): programs such as those for energy audits, waste reduction management, water conservation, and manufacturing energy efficiencies;
policy, planning, and energy security (4 percent): programs such as developing state energy strategic plans, energy policy development, and legislative initiatives;
transportation (4 percent): programs related to mass transit use, bike to work, telecommuting, and street light replacement; and
energy education (3 percent): programs such as curricula development and K-12 education, training workshops, and technical and college course development.
Nationally, as of August 25, 2010, approximately 75 percent ($2.31 billion) of funds have been obligated by recipients and 10.8 percent ($332 million) have been spent, out of the total $3.07 billion in SEP funds available for grants. Individually, state recipients have reported targeting funds to meet Recovery Act goals such as creating or retaining jobs while also generating long-term benefits such as energy and cost savings. Recipients have set their spending priorities differently: California allocated the largest portion of its $226 million in total funds—$110 million—to improve various types of facilities, including residential, municipal, and commercial buildings. New York allocated the largest portion of its $123 million in funds—$74 million—for energy conservation projects: energy efficiency, renewable energy, and clean fleets.
Pennsylvania targeted the largest share of its $99.7 million in funds— $22.8 million—to help leverage private investments from wind energy developers and manufacturers to develop projects through a state wind initiative. Though recipients are making progress meeting DOE funding goals, state energy officials noted uncertainty with meeting changing DOE obligation timetables. For example, in the initial funding announcement sent by DOE on April 24, 2009, DOE stated that funds must be obligated by recipients within 18 months of the award of the grant or face potential cancellation by DOE and spent within 36 months. However, the same guidance also indicates that 100 percent of funds must be obligated by September 30, 2010, to meet departmental and congressional goals. State energy officials noted that DOE later provided an updated correspondence on April 21, 2010, informing states that they were encouraged to obligate 80 percent of their funds by June 30, 2010, and spend 20 percent by September 30, 2010. DOE officials stated that the June 30 date was meant to help keep states on track to meet the September 30 goal. In many cases, however, states interpreted these dates as new deadlines and were concerned that they would lose access to funds partly because they had experienced pressure from both the administration and the department to meet these new funding milestones. DOE officials stated that they would be unable to deobligate the funds from states if the funding milestones were not met but also stated that they did not know whether funds could be reappropriated if states are unable to obligate those funds by September 30, 2010, as policy guidance has not been provided by the administration and Congress. We have previously reported that DOE has provided unclear funding deadlines for the Recovery Act weatherization program, creating confusion among state recipients trying to meet DOE Recovery Act deadlines. State energy officials told us that delays by DOE in providing guidance hampered early obligating and spending on Recovery Act projects. For example, both Iowa and New York state energy officials noted that they waited for DOE to provide guidance before moving forward on projects. Additionally, other state energy officials we spoke with through the National Association of State Energy Officials (NASEO) stated that while DOE guidance has significantly improved, it was not always timely or complete for issues such as the Recovery Act’s Buy American and Davis- Bacon requirements. For example, similarly to the problems noted by EECBG funding recipients, several state energy officials said that while DOE provided broad guidance for meeting Buy American requirements, this guidance did not provide sufficient detail that would enable officials to determine the types of brands or types of goods they could or could not purchase. DOE officials stated that DOE is not capable of recommending specific products requested by recipients partly because of the large number of products requested. Further, DOE officials stated that there were potential ethical and liability concerns associated with a federal agency recommending specific manufacturers. While the state officials stated that they did ask DOE for further advice, the response was not always timely. As an example, several state energy officials described one Recovery Act project that was held up several weeks in order to determine if energy-efficient lights purchased for the project met Buy American requirements. 
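Because the various obligation and expenditure dates have been a source of confusion, it may help to see how a recipient could track its own position against the milestones DOE described—80 percent of funds obligated by June 30, 2010, and 20 percent spent by September 30, 2010. The sketch below is illustrative only; the award amount and dollar figures are hypothetical, not actual recipient data.

```python
# Illustrative SEP milestone tracker; the award and amounts below are hypothetical.

def milestone_status(award, obligated, spent,
                     obligation_target=0.80, spending_target=0.20):
    """Compare a recipient's obligation and spending shares against DOE's
    encouraged milestones (80 percent obligated, 20 percent spent)."""
    obligated_share = obligated / award
    spent_share = spent / award
    return {
        "obligated_share": obligated_share,
        "spent_share": spent_share,
        "meets_obligation_milestone": obligated_share >= obligation_target,
        "meets_spending_milestone": spent_share >= spending_target,
    }

# Hypothetical state recipient with a $50 million SEP award.
status = milestone_status(award=50_000_000, obligated=42_500_000, spent=8_000_000)
print(f"Obligated: {status['obligated_share']:.0%} "
      f"(milestone met: {status['meets_obligation_milestone']})")
print(f"Spent: {status['spent_share']:.0%} "
      f"(milestone met: {status['meets_spending_milestone']})")
```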
States further encountered challenges with meeting National Environmental Policy Act (NEPA) and National Historic Preservation Act policies. DOE officials acknowledged that NEPA requirements had added significant time to the process but stated that progress had been made through the use of categorical exclusions for projects. As of August 23, 2010, 87 percent ($2.65 billion) of all SEP Recovery Act-funded projects have been granted categorical exclusions, and 75 percent ($2.31 billion) of funds have been obligated to subrecipients for specific activities. Pennsylvania officials noted that National Historic Preservation Act requirements had slowed some projects but that the situation has improved through an agreement set up between DOE and the state historic preservation offices. While DOE officials have acknowledged they initially lacked the infrastructure necessary for rapid implementation of a SEP, they noted that DOE has made significant improvements in the level of guidance provided to recipients. For example, DOE has developed a Web site to provide information and guidance for recipients to comply with the Buy American requirements. DOE officials further stated that in addition to their guidance, states can also contact their DOE project officer for assistance on meeting Recovery Act and program requirements and deadlines. Overall, state energy officials spoke positively about their project officers, though DOE has only recently filled many of these positions. Several state energy officials also noted that other obstacles such as the lack of energy management staff has made it more difficult to administer SEP projects. Specifically, District officials stated it was difficult to hire highly qualified people because potential staff did not want to leave a current permanent position for a temporary Recovery Act position that might only last 2 years. Similarly, Iowa officials noted that their efforts were hindered by the loss of two key staff in 2010 and that they did not want to hire staff for a limited duration. Additionally, California’s state auditor reported in December 2009 that the state was not prepared to administer Recovery Act funds for the SEP and listed insufficient staffing as one cause for the lack of preparedness. The report also indicated that the lack of preparedness raised the potential for misuse of Recovery Act funds. California officials stated that following the state auditor’s report, they have since taken actions to address inadequate staffing through the hiring of additional contractors. Both state recipients and DOE reported that the final reporting of spending of state energy funds can take place significantly after the funds have been obligated and work has begun. For example, District officials told us that school improvements were scheduled for late June, but the funds would not be reported as spent until significantly later because the contractor would not be paid until the work was completed. District officials also noted there would also be an additional delay in reporting the final outlay because the work was being conducted though a sister agency. Pennsylvania officials also reported delays between the time work was performed and the final spending was reported. State officials told us that they reimburse SEP subrecipients on a cost reimbursement system, after work is completed and invoices and proof of payment have been submitted, reviewed, and approved. 
District and Pennsylvania officials both said that reimbursing costs only after work is completed helps to ensure that the funds are spent appropriately. DOE officials stated that they have tried to increase the speed at which invoices have been paid to better demonstrate the timely use of Recovery Act funds. For example, through program guidance, DOE has encouraged states to pay contractors after specific work milestones have been achieved rather than after the project has been completed. DOE officials stated that they are currently on track to meet their monitoring goals. The SEP Recovery Act Monitoring Plan developed by DOE calls for each recipient to be visited twice a year. DOE officials stated that the plan provides guidance for various classes of enhanced monitoring and visitation, but leaves it to the discretion of the DOE field office to plan these trips. DOE state project officers give priority, on a case by case basis, to recipients facing special challenges. The frequency of monitoring may be increased if prior monitoring reports uncovered significant deficiencies in how a recipient is administering and managing its program. Visits to recipients with low obligation or expenditure rates are focused on providing technical assistance to help increase the rates. Overall, DOE officials stated that on-site monitoring will increase as payments grow larger. DOE also conducts on-site monitoring visits at the subrecipient—local agency—level. DOE’s target is to conduct on-site monitoring of about 10 percent of all subrecipients nationwide. However, if the risks to the particular state are higher, then the state would be given closer attention by DOE staff for potential assistance. While DOE does conduct some on- site monitoring of subrecipients, the officials clarified that the main monitoring relationship is between the state recipient, the state energy official, and the DOE project officer. DOE officials stated that they view state recipient monitoring as DOE’s main responsibility. DOE reported that it is on track to meet its monitoring goals: As of late June 2010, DOE staffed a total of 29 project officers to 56 recipients, exceeding its goals of one officer per two recipients. Though meeting their goal, DOE officials noted that 12 of these officers had been hired in the past 6 weeks. By the end of September 2010, DOE anticipates that all 56 recipients will have received the first of their required annual site visits, with the second follow-up site visit to be performed by the end of the calendar year. In addition to on-site monitoring of the states, project officers are also required to visit between 5 percent and 10 percent of all subrecipients each year. To date, DOE has not determined any projects that are “at variance,” indicating a high risk for funding misuse. DOE officials noted that the primary monitoring challenge facing project officers and state recipients during desktop and on-site visits is gathering the quantity of information and other process indicators needed for compliance certification by the project officer. DOE officials stated that assistance from the field offices, the technical assistance provider network, and best practices from the state’s own NASEO peer organization are helping to address this situation to assist states in developing effective documentation in a timely manner. Planned state recipient monitoring practices vary, and some recipients are just beginning their monitoring activities because they are just starting projects. 
For example, District officials plan to monitor projects via video and desk monitoring but noted that they have not yet started monitoring in the field. Planned monitoring will focus on ensuring that the work being done is consistent with the agreed-upon scope of work. Additionally, District officials have developed monitoring procedures that will include monitoring checklists of programmatic and financial questions, desktop monitoring, and financial monitoring by the District SEP/Recovery Act financial Officer. Pennsylvania has also developed monitoring procedures; project advisers from state regional offices are assigned to each SEP project and, using an inspection form, conduct initial and final inspections of projects and are encouraged to perform other inspections as needed. Project advisers also communicate on a weekly basis with SEP recipients and update project status in the agency’s reporting system. Some state recipients in our review are also using independent contractors to aid in grant monitoring at varying levels. For example, Colorado, California, and New York all reported hiring outside contractors to supplement their monitoring activities. Colorado hired an outside firm to manage its rebate program for appliances, energy-efficient measures, and renewable-energy systems, citing a need for expertise to handle the large growth in the program due to the addition of Recovery Act funds. Additionally, Colorado also recently issued a Request for Proposal for measurement and verification activities for its grant funds. Due to the significant increase in the size of Colorado energy programs, Colorado officials determined that oversight by state program managers alone is no longer sufficient. California officials set aside $6 million of its $226 million grant to hire contractors to provide, among other things, monitoring, verification, and audit support. Though still in their early stages of oversight, some state energy officials have noted monitoring challenges. For example, Arizona officials noted that some rural grant recipients were more challenging to monitor due to their remote location. Additionally, Colorado officials told us that detecting fraud in rebate programs is difficult and that while the contractor administering the program has procedures in place to detect fraudulent rebate claims, it is not possible to ensure that 100 percent of the claims will be legitimate. Colorado officials further noted that there was not a clear standard for how to monitor certain programs such as rebate programs. DOE officials have acknowledged that grant programs such as revolving loan programs can require special skill sets to monitor and reported that they are taking steps to provide recipients with technical resources by early September 2010. DOE officials stated that recipients have experienced challenges with meeting Recovery Act reporting requirements. Similar to EECBG, DOE requires monthly and quarterly reporting by SEP recipients to DOE. DOE officials stated that many state recipients have as many as 17 different Recovery Act programs and must coordinate with many different state agencies to fulfill their reporting requirements. In turn, state agencies must also coordinate with local agencies. The officials said that they faced the problem of balancing state and DOE needs with collecting information; asking states to collect too much information would be overly burdensome, while collecting insufficient information would not allow states or DOE to track long-term outcomes. 
To help decrease the administrative burden of reporting, DOE reduced both its monthly and quarterly reporting requirements effective for the August 30, 2010, reporting deadline. The changes will decrease the amount of job, performance, and funding information required and will help states focus on expending Recovery Act funds. Both DOE and state energy officials have noted that reporting on outcome measures has been limited because SEP Recovery Act projects are in their early stages. For example, DOE officials stated that SEP recipients first had to report quarterly beginning in January 2010 but that the early reports by recipients did not include many critical metrics, such as total energy saved and dollar savings. DOE officials further stated that because outcomes such as total energy cost savings take time to achieve, and because the state energy offices were still in the initiation phases earlier in 2010, there are few outcomes to report. State officials have also noted that outcome data are currently limited due to the early stage of SEP Recovery Act projects. For example, District energy officials noted that they will not have the data needed to calculate energy savings until projects are complete. Specifically, the District plans to report on energy savings and greenhouse gas emissions using building square footage, pre- and post-installation utility bills, the energy-savings measures installed, and the dollars spent. On-site monitoring will be an important part of the verification process. State energy officials have indicated difficulties with reporting information into DOE's primary reporting system, PAGE. For example, Iowa noted that PAGE was not compatible with its existing grant management system or other federal reporting systems, which meant that data had to be input twice. DOE's Inspector General also described significant issues with PAGE in a recent report. Along with other concerns, the report indicated that DOE officials "did not seek input from grant recipients—the system's external users—related to the design of PAGE due to the limited time before the system had to be operational." To assist states with reporting, a NASEO official stated that DOE has asked NASEO to work with each recipient to complete the submissions through PAGE. Additionally, DOE officials stated that the contractor that developed PAGE is providing additional assistance and feedback to the recipients on data entry issues. The Recovery Act appropriated $5 billion for the Weatherization Assistance Program, which the Department of Energy (DOE) is distributing to each of the states, the District of Columbia (District), all five territories, and two Indian tribes. According to DOE, during the past 33 years, weatherization has helped more than 6.4 million low-income families by making long-term energy-efficiency improvements to their homes, such as installing insulation; sealing leaks; and modernizing heating equipment, air circulation fans, and air conditioning equipment. These improvements enable families to reduce their energy bills, allowing them to spend their money on more pressing needs. The Recovery Act appropriation represents a significant increase for a program that has received about $225 million per year in recent years. During 2009, DOE obligated about $4.73 billion of the Recovery Act's weatherization funding to the states, territories, and tribes, while retaining about 5 percent of funds to cover the department's expenses.
Initially, DOE provided each recipient with the first 10 percent of its allocated funds, which could be used for start-up activities such as hiring and training staff, purchasing needed equipment, and performing energy audits of homes, among other things. Before recipients could receive the next 40 percent of their funds, DOE required each to submit a weatherization plan outlining how it would use its Recovery Act weatherization funds. These plans identified the number of homes to be weatherized and included strategies for monitoring and measuring performance. By the end of 2009, DOE had approved the weatherization plans of all 58 recipients and had provided all recipients with half of their weatherization funds under the Recovery Act. According to DOE officials, as of June 30, 2010, about 166,000 homes have been weatherized nationwide, or about 29 percent of the approximately 570,000 homes currently planned for weatherization. To release the remaining 50 percent of funds, DOE requires that each recipient complete weatherization of 30 percent of the homes identified in its weatherization plan and meet other requirements—namely, fulfilling the monitoring and inspection protocols established in its weatherization plan; monitoring each of its local agencies at least once each year to determine compliance with administrative, fiscal, and state policies and guidelines; ensuring that local quality controls are in place; inspecting at least 5 percent of completed units during the course of the respective year; and submitting timely and accurate progress reports and monitoring reviews to DOE to confirm acceptable performance. Recovery Act funds are available for obligation by DOE until September 30, 2010, and DOE has indicated that the recipients are to spend their Recovery Act weatherization funds by March 31, 2012. Recipients' ability to access all of their Recovery Act weatherization funding by meeting DOE's requirements varies considerably. DOE records indicate that as of June 30, 2010, 29 states had weatherized at least 30 percent of their total planned units. As of August 2010, DOE reported it had released the remaining 50 percent of funds to 22 states that had met the other requirements. Of the 7 states and the District in our review for the Recovery Act weatherization program, two states, Iowa and Arizona, have been granted access to their remaining 50 percent. In Iowa, DOE released about $40.4 million after the state reported completing weatherization of 2,179 homes—more than 30 percent of its target of 7,196 homes. Similarly, in Arizona, officials reported the state had weatherized 1,930 homes, about 30 percent of its 6,414 total estimated homes, and gained access to the remaining $28.5 million. Additionally, other states, such as California and Florida, are close to meeting their 30 percent production targets. Despite a delayed start in spending Recovery Act funds, California reported weatherizing 8,679 homes out of its total estimated production target of 43,150 units as of June 30, 2010. Similarly, Florida officials reported that a total of 3,878 single-family residences had been weatherized, or about 20 percent of the total 19,090. Furthermore, both California and Florida officials report they are on track to weatherize 30 percent of their total estimated units by September 30, 2010.
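The production threshold that governs release of the remaining 50 percent of weatherization funds lends itself to a simple calculation, shown in the sketch below using the state figures reported above. The logic is illustrative only; it checks just the production requirement, not the monitoring, inspection, and reporting requirements that recipients must also meet, and the Florida figure reflects only the single-family residences reported.

```python
# Illustrative check of the 30 percent production threshold tied to release of the
# remaining 50 percent of Recovery Act weatherization funds. Figures are those
# reported in mid-2010; other release requirements are not modeled here.

def production_share(homes_weatherized, planned_homes):
    return homes_weatherized / planned_homes

states = {
    "Iowa":       {"weatherized": 2_179, "planned": 7_196},
    "Arizona":    {"weatherized": 1_930, "planned": 6_414},
    "California": {"weatherized": 8_679, "planned": 43_150},
    "Florida":    {"weatherized": 3_878, "planned": 19_090},
}

for state, s in states.items():
    share = production_share(s["weatherized"], s["planned"])
    met = "meets" if share >= 0.30 else "does not yet meet"
    print(f"{state}: {share:.0%} of planned units weatherized; {met} the 30 percent threshold")
```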
Some recipients that we found to be behind schedule in our May 2010 report, such as the District and Georgia, have since increased their weatherization of units; however, these recipients still have not met production goals requested by DOE. For example, as of March 31, 2010, we found service providers in the District and Georgia had weatherized about 14 percent and 11 percent of homes identified in their state weatherization plans, respectively. By the end of June 2010, although both recipients’ production targets were still below DOE’s approved goal of 30 percent, the District and Georgia reported they have increased their production to about 25 percent and about 22 percent, respectively. Some recipients are still challenged with establishing controls to ensure compliance with weatherization program and Recovery Act requirements. DOE has issued guidance requiring recipients of Recovery Act weatherization funds to implement a number of internal controls to mitigate the risk of fraud, waste, and abuse. In our May 2010 report, we recommended that DOE should develop best practices for key internal controls that should be present at the local weatherization agency level to ensure compliance with key program requirements. DOE provides recipients with the discretion to develop and implement these internal controls in accordance with each state’s weatherization plan. Local agencies use various methods to prevent fraudulent or wasteful use of Recovery Act funds, such as conducting risk assessments. Since our last report, we have identified challenges in the implementation of internal controls for some local weatherization agencies. For example, in the District, we conducted client file reviews and found that while some weatherization project data were not present in the physical files, much, but not all, of this data was in an online software system used to manage weatherization projects. While the online system appeared to be a useful tool in managing weatherization projects, it has not yet been fully implemented and does not contain all of the data necessary to track individual weatherization projects from start to finish. As a result, at the time of our review, neither the physical files nor the online weatherization management system presented a complete record of weatherization projects. District officials reported that they conducted inspections of local weatherization agencies in early July 2010—roughly 2 weeks after our review—and found that all agencies they reviewed had copies of all required documentation in the physical files. Additionally, District officials reported they are continuing to fully implement the online reporting system and address issues associated with incomplete data. In Florida, the state agency responsible for administering the program had instituted various management controls over the program, but our review of two local weatherization agencies revealed internal control gaps and compliance issues similar to those identified in our May 2010 report. For example, weatherization work done was often not consistent with the recommendations of home energy audits and no reasons were given for the differences; in some instances, work was charged to the program but not done or lacked quality; several potential health and safety issues were not addressed; and contractors’ prices were not being compared to local market rates, as required by the state weatherization agency. 
State officials have acknowledged these problems and have taken steps to address the problems, including changing procedures and guidelines and instructing contract field monitors to be more attentive to these issues. The two local weatherization agencies we reviewed also agreed to take corrective actions. Most of the states we reviewed have oversight procedures in place to monitor local agencies; however, the level of monitoring varies considerably. DOE requires state weatherization agencies to conduct on- site monitoring of all weatherization service providers to inspect the management of funds and the production of weatherized homes at least once a year. These monitoring visits consist of a financial review of the service provider’s records pertaining to salaries, materials, equipment, and indirect costs; program reviews of the service provider’s records, contracts, and client files; and a production review, consisting of the inspection of weatherized homes by the state agencies and by the service provider. In our May 2010 report, we recommended that DOE set time frames for development and implementation of state monitoring programs; DOE generally agreed with this recommendation and indicated it will take steps to address this issue. We found in the states we reviewed that levels of monitoring varied considerably. Some state monitoring plans are fully implemented. For example, in Arizona, state officials reported program monitors conduct file reviews of all completed units each month using a statewide database. Also, program monitors visit each of the 10 service providers at least once a month, exceeding DOE’s requirement of yearly visits to local service providers. Iowa officials reported inspecting at least 5 percent of the weatherized homes for each local agency and providing monitoring at 15 of 18 local agencies. In contrast, monitoring procedures in other states have either just been fully implemented or are still facing challenges. We identified some issues in our May 2010 report related to weatherization monitoring in Georgia. Some monitoring positions remained vacant, and oversight of the providers had been slow to start. However, state officials at the agency responsible for the weatherization assistance program have since taken steps to address these issues. Specifically, they told us that their contractor had filled all monitoring positions, and all 22 of its providers have received monitoring visits. Additionally, Pennsylvania officials are still facing challenges. For example, state officials reported weatherization program monitors are not in compliance with some Recovery Act monitoring procedures, and they are not getting about half of their monitoring reports back to the agencies within 30 days of the site visit. DOE reported the need for Pennsylvania’s department responsible for administering the program to improve the financial management system to better track actual costs for each unit weatherized on a service provider basis. State officials reported they are working on corrective actions to address these concerns by August 2010. Finally, some recipients, such as California and the District, were delayed in spending Recovery Act weatherization funds and have just begun to implement monitoring efforts. For example, in California, program officials recently began on-site monitoring of Recovery Act activity in June 2010, and by July 31, they visited seven of the 38 service providers. 
Additionally, program officials also conduct quarterly performance visits as needed for providers with production deficiencies, monthly Recovery Act expenditure and performance analyses, fiscal monitoring, on-site monitoring of whistleblower complaints and high-risk agencies, and Davis- Bacon on-site reviews to ensure employees are paid appropriately and paperwork is in compliance. Similarly, District officials reported a number of monitoring procedures are in place, such as annual monitoring reviews of local weatherization agencies and site inspections of at least 10 percent of weatherized units. District officials told us that, as of July 15, 2010, their program managers had conducted monitoring visits of all seven local weatherization agencies, and program auditors had begun conducting site inspections for the quality assurance of work completed by contractors. With respect to energy cost savings, some states are actively measuring energy savings, while others are beginning to develop methods to do so. As with EECBG and SEP, weatherization recipients are required to report on different program metrics, both monthly and quarterly. A long-term goal of the weatherization program is to increase energy efficiency through cost-effective weatherization work, and DOE relies on its recipients to ensure compliance with this cost-effectiveness requirement. For example, in Arizona, the agency responsible for administering the weatherization program calculates the estimated kilowatt hour usage reduction and utility costs savings resulting from weatherization work performed on homes. As of June 2010, officials estimated that Recovery Act weatherization services have resulted in approximately $267,000 in savings for the residents in the 1,930 homes weatherized. Florida officials reported contracting with the University of Florida to conduct a study of overall energy savings utilizing consumption data obtained from clients’ utility bills. Alternatively, District officials are still developing a methodology to capture energy savings for weatherized homes. In July 2010, Georgia officials stated it had begun using a Web-based reporting tool to track real-time information on energy savings. In addition, program monitors will track and compare energy costs after weatherization work has been completed for 3, 6, and 12 months. While California estimated annual energy savings of about $1.5 million resulting from Recovery Act funds, state officials currently do not anticipate attempting to calculate actual energy savings and noted that they would like more guidance from DOE on its effort to study energy savings. In our May 2010 report, we provided eight recommendations and raised concerns about whether program requirements were being met. DOE generally agreed with all of our recommendations and has begun to take several steps in response. For example, DOE reported that it has drafted national workload standards to address our concerns regarding training, certification, and accreditation. DOE plans to issue these standards to recipients in October 2010. DOE is still in the process of considering our recommendations and will provide additional information on how they plan to fully implement our recommendations at a later date. The Recovery Act requires the U.S. 
Department of Housing and Urban Development (HUD) to distribute nearly $1 billion to public housing agencies based on competition for priority investments, including investments that leverage private sector funding or financing for renovations and energy conservation retrofitting. In September 2009, HUD awarded 396 competitive grants in the amount of $995 million to 212 public housing agencies. (Subsequently, three housing agencies returned competitive grants totaling approximately $14 million to HUD). The Recovery Act required housing agencies that received competitive grants to obligate 100 percent of their competitive grant funds within 1 year of the date when competitive funds became available to agencies for obligation, which means they have until September 2010 to obligate 100 percent of their funds. As of August 7, 2010, 179 housing agencies reported obligations totaling about $460.1 million for 340 grants. This reflects about 46.3 percent of the total Public Housing Capital Fund competitive funds allocated to them (see fig. 24). In addition, there were 57 grants (14 percent) located at 39 housing agencies for which no competitive funds had been obligated. Further, another 102 grants (26 percent) had less than 20 percent of their funds obligated. As the September 2010 obligation deadline approaches, HUD officials said they are working to ensure that housing agencies meet the deadline, but expect that some housing agencies may not. HUD will recapture any funds not obligated by the deadline and return them to the Department of the Treasury. One hundred forty-four housing agencies had also drawn down funds to pay for project expenses already incurred. As of August 7, 2010, these 144 public housing agencies had drawn down about $93.5 million, or about 9.4 percent of the total allocated to them. The Recovery Act required housing agencies to expend 60 percent of obligated funds within 2 years and expend 100 percent of Recovery Act funds within 3 years of the initial date when funds were provided to agencies for obligation. More specifically, housing agencies have been using their competitive grants for the creation of energy-efficient communities, gap financing for projects stalled because of financing issues, public housing transformation, and improvements addressing the needs of the elderly or persons with disabilities: For the creation of energy-efficient communities, HUD awarded 36 grants totaling $299.7 million for substantial rehabilitation or new construction and 226 grants totaling $305.8 million for moderate rehabilitation. For example, in New Jersey funds are to be used to incorporate green features in two new buildings with public housing units. Some of the energy-efficient features of the project include water conserving fixtures, Energy Star lighting packages in all interior units, and Energy Star or high-efficiency commercial grade fixtures in all common areas, as well as daylight sensors or timers on all outdoor lighting. In Massachusetts, funds are to be used to reduce the annual energy and water costs of more than $4,000 per unit in a physically distressed site. The project will redevelop a portion of the site into innovative, high-efficiency affordable housing for current residents with the new construction of 96 affordable rental units and a community center. For gap financing for projects that were stalled due to financing issues, HUD awarded 38 grants totaling $198.8 million. 
For example, in Pennsylvania, $10 million in funds are to be used to construct 50 units of a 101-unit development that will be a mixture of walk-up and duplex apartments and three-scattered site buildings replacing a high-rise building demolished in 2008. For public housing transformation, HUD awarded 15 grants totaling $95.9 million to revitalize distressed or obsolete public housing projects. For example, in Illinois, funds are to be used on a multiphase, mixed-finance project that will build public housing, rental, and for- sale apartments and houses on housing agency land and vacant city lots. For improvements addressing the needs of the elderly or persons with disabilities, HUD awarded 81 grants totaling $94.8 million. For example, in Texas, funds are to be used to complete work on common areas to make them accessible and ADA-compliant, upgrade and improve space used for supportive services, and add energy-efficient lighting, heating, ventilating, and air conditioning in properties housing the elderly. In California, funds are to be used to provide upgrades to nine dwelling units for accessibility improvements for the elderly and disabled, as well as improvements to common spaces used for supportive services targeted to those residents. As discussed above, HUD officials expect that some housing agencies may not meet the September 2010 competitive grant obligation deadline. They noted that among all the competitive grant projects nationwide, the 75 grants supported by mixed-financing have been at greatest risk of missing the obligation deadline. HUD’s Office of Urban Revitalization has assigned a grant manager to each of the mixed-finance competitive grant projects to track and monitor their progress. Officials with 5 of the 10 housing agencies we visited that had received competitive grants told us they were experiencing challenges related to mixed-financing of their projects, but they still anticipated meeting the deadline. Because funding for these projects comes from multiple sources, if one financing party is not able to finalize its part of the contract by the obligation deadline, the housing agency will not be able to close on the contract. As a result, the housing agency would not be able to obligate its competitive grant funds on time and the funds would be recaptured. For example, one housing agency is relying on a 4 percent low-income housing tax credit to pay for about $10 million of the $40 million cost for the first phase of its project. The 4 percent tax credit was contingent on the state selling tax-exempt bonds, and according to HUD field office officials, the state’s difficulty doing so had prevented the housing agency from securing the tax credit. State officials told us they notified the project developer on August 5, 2010, that the tax-exempt bonds, which will generate the tax credits, had been approved, which would allow the housing agency to submit its final paperwork to HUD by September 18, 2010. Additionally, HUD field staff have taken several steps to assist public housing agencies in obligating Recovery Act competitive grant funds by the September 2010 deadline. HUD field officials told us that they have been communicating regularly with housing agencies via e-mail and telephone to address their questions, provide technical assistance, and monitor their progress. 
For example, field staff in one field office in Texas use weekly conference calls to communicate with all of the housing agencies in their jurisdiction and answer their questions about obligation- related issues for competitive grants. The officials also told us they have dedicated three staff to work with the five public housing agencies under their jurisdiction that received 14 competitive grants. HUD field staff in Illinois have contacted each competitive grant recipient in their region on a weekly basis and use an internal tracking sheet to monitor progress. HUD officials in Massachusetts have provided additional oversight to smaller housing agencies to help them better understand federal procurement policies. Based in part on these efforts, HUD field staff believed that housing agencies in Massachusetts would meet the September 2010 obligation deadline. As we note in our May 2010 Recovery Act report, HUD plans to redistribute $17.16 million of competitive and formula grant funds that were rejected or returned by housing agencies by awarding a new set of competitive grants. HUD plans to redistribute these funds to qualified housing agencies that previously applied for competitive grants but did not receive them because HUD had obligated all of the nearly $1 billion allocated to the program. Given HUD’s emphasis on green, energy-efficient housing, HUD will limit the redistribution of funds to those applications for energy retrofit projects. Prior to funding any of the remaining applications, HUD planned to verify that potential recipients still would be able to complete the work outlined in their original applications and that they currently are in compliance with Recovery Act requirements. Of the 23 public housing agencies that HUD has contacted and verified their eligibility to receive additional competitive grant funds, 22 agencies accepted the additional funds. According to HUD officials, they may be able to redistribute these funds by the end of fiscal year 2010. According to HUD officials, once the housing agencies receive the redistributed funds, housing agencies must obligate 100 percent of the funds within 1 year, expend 60 percent within 2 years, and expend 100 percent within 3 years. The Recovery Act required HUD to allocate $3 billion through the Public Housing Capital Fund to public housing agencies using the same formula for amounts made available in fiscal year 2008. HUD allocated Capital Fund formula dollars to 3,134 public housing agencies shortly after passage of the Recovery Act and, after entering into agreements with housing agencies, obligated these funds on March 18, 2009. As we previously reported, all housing agencies met the March 17, 2010, obligation deadline for formula grants by either obligating all of their funds or rejecting or returning a portion of the funds. The Recovery Act also required that public housing agencies expend 60 percent of their formula funds within 2 years from when the funds became available and expend 100 percent of their formula grant funds within 3 years from when the funds became available. Housing agencies have been making progress in drawing down funds in accordance with these deadlines. According to HUD data, as of August 7, 2010, 3,075 housing agencies had drawn down funds totaling more than $1.6 billion from HUD, or about 55 percent of the total allocated to the housing agencies, to pay for project expenses already incurred (see fig. 25). 
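Because the formula grant expenditure requirements are expressed as shares of funds over fixed periods, a housing agency's standing can be summarized with straightforward arithmetic. The sketch below compares an agency's cumulative drawdowns against the 60 percent (2-year) and 100 percent (3-year) expenditure requirements described above; the agency and dollar figures are hypothetical, and drawdowns are used here only as a rough proxy for expenditures.

```python
# Illustrative comparison of a housing agency's drawdowns against the Recovery Act
# Capital Fund formula grant expenditure requirements (60 percent of funds within
# 2 years, 100 percent within 3 years). The agency and amounts are hypothetical.

def expenditure_progress(allocation, drawn_down):
    share = drawn_down / allocation
    return {
        "share_drawn": share,
        "meets_60_percent_requirement": share >= 0.60,
        "meets_100_percent_requirement": share >= 1.00,
    }

# Hypothetical agency with a $12 million formula grant that has drawn down $7.4 million.
progress = expenditure_progress(allocation=12_000_000, drawn_down=7_400_000)
print(f"Share of formula grant drawn down: {progress['share_drawn']:.0%}")
print(f"60 percent requirement met: {progress['meets_60_percent_requirement']}")
print(f"100 percent requirement met: {progress['meets_100_percent_requirement']}")
```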
Public housing agency officials said they have been using these funds to support a variety of improvement projects at public housing sites, including performing roofing and gutter work, replacing windows and doors, rehabilitating unit interiors, and replacing heating, cooling, and hot water systems. For example, a housing agency in California used formula funds to rehabilitate vacant units at two sites quickly to make them available for lease to prospective low-income tenants. Work at both sites included repairing damaged walls, ceilings, and floors; removing old plumbing fixtures; replacing tile and appliances; and painting the interiors of units (see fig. 26). In Pennsylvania, a housing agency is using formula funds to rehabilitate 23 row houses on the last remaining blighted block adjacent to a large redevelopment the housing agency had already completed. Combined with the previous redevelopment, once this project is complete, there will be more than 700 total units in an area of approximately four square blocks. Work on the last block of the development began in March 2010 (see fig. 27). According to housing agency officials, this redevelopment project not only has provided additional public housing units, but also has increased property values for row homes in the area from $40,000 or $50,000 to asking prices of up to $125,000. HUD has employed multiple monitoring efforts for Recovery Act funds and has found that only a few housing agencies had deficiencies relating to their obligations. For the second year of implementation, HUD’s strategy for monitoring Recovery Act formula and competitive grant funds includes a combination of remote and on-site reviews of housing agencies’ administration of Recovery Act requirements, the same approach it used for monitoring housing agencies during the first year of the Recovery Act. Specifically for the formula grant funds, HUD developed a four-tier monitoring approach. The first tier consists of quick-look reviews of all Recovery Act formula grant obligation documents generated from February 26, 2010, to March 17, 2010, by the 543 housing agencies that had obligated less than 90 percent of formula grant funds as of February 26, 2010; HUD completed the quick-look reviews in July 2010. The second tier consists of on-site and remote reviews. Housing agencies currently designated as troubled will have a minimum of one on-site review, and housing agencies that are nontroubled may be subject to additional remote or on-site reviews depending upon factors including having open audit findings, failing to expend funds in prior years, and having procurement-related deficiencies such as not revising procurement policies to reflect Recovery Act requirements. HUD anticipates that about 25 percent of grant recipients will be subject to these reviews, which the agency plans to complete by February 2011. The third tier consists of quality assurance and quality control reviews by HUD’s Office of Field Operations, which HUD plans to conduct between December 2010 and March 2011. The fourth tier consists of independent reviews, performed by an outside contractor, of the housing agencies that HUD identified as the top 100 to 125 funded agencies with the largest formula grant award amounts; these independent reviews are to be completed by June 2011. According to HUD officials, as a result of its quick-look reviews of 543 housing agencies, HUD staff identified 26 housing agencies that were potentially deficient in meeting HUD requirements for obligating formula grant funds and required further review.
For example, some housing agencies signed contracts to obligate funds after the March 17, 2010, deadline. HUD staff also determined that some housing agencies obligated funds for products and services that were not approved for Recovery Act use, such as paying the local police department to provide security services. In addition, HUD staff identified 24 housing agencies with minor deficiencies that did not warrant further review. Finally, there were 22 housing agencies that did not submit final documentation requested by HUD staff for the quick-look reviews. HUD plans to conduct on-site reviews of those housing agencies if they do not submit the requested documentation. HUD created a panel comprised of officials from its Office of Field Operations, Office of Capital Improvements, and Office of General Counsel to examine in greater detail those 26 housing agencies with potential deficiencies identified via the quick-look reviews, as well as potential deficiencies identified by other means. Of the 26 housing agencies, the panel determined that deficiencies at 8 housing agencies were significant and necessitated a recapture of funds. As of August 27, 2010, the panel reviews identified approximately $1 million in Recovery Act funds that necessitated recapture. HUD is in the process of recapturing these funds and will return them to the Department of the Treasury. HUD plans to continue conducting panel reviews and will recapture Recovery Act funds from any housing agency found to have deficiencies. HUD adopted a similar multireview approach for its second-year monitoring of Recovery Act competitive grant funds. HUD has been conducting remote reviews of all 393 competitive grants and had planned to complete them by August 20, 2010. As of August 23, 2010, HUD field staff reported having completed 371 remote reviews of competitive grants. HUD officials are in the process of analyzing the results of these reviews. While HUD officials have not completed their reviews, as of August 27, 2010, the agency may recapture approximately $12 million in competitive grant funds based on remote review findings and other means. For example, after reviewing one project’s proposed building site, HUD staff found that the project would be located in an industrial space next to a railroad track with little access to roadways, raising both transportation and environmental concerns. HUD also plans to conduct quality assurance and quality control reviews for a random sample of 20 to 25 percent of the remote reviews, which HUD plans to complete by September 2010. In October and November 2010, HUD also plans to review obligations made by housing agencies that had not fully obligated their grant funds within 2 weeks of the September 2010 deadline. Finally, from January to March 17, 2011, HUD plans to conduct on-site reviews of all eight housing agencies that received competitive grant funds and were designated as troubled as of September 30, 2009. Given that less than half of the competitive grant funds have been obligated to date, we believe that it is important for HUD to continue to closely monitor progress in meeting the obligation deadline. In addition, because HUD identified some deficiencies among those housing agencies that obligated their formula grant funds near the deadline, we believe it will be important for HUD also to review those competitive grant obligations made by housing agencies just prior to the deadline. 
As part of its second-year strategy, HUD developed a management plan for the administration of Recovery Act funds, including the need for an additional 11 FTEs to carry out Recovery Act responsibilities. This was in response to our March 2010 recommendation that HUD develop such a plan to address its resource needs for both the Recovery Act funds and the existing Capital Fund program. Similarly, HUD’s Office of Public and Indian Housing also agreed to develop a management plan addressing the activities and resources needed to administer its existing Capital Fund program. In July 2010, HUD provided us with its management plan for the Public Housing Capital Fund program. The plan summarized the key activities HUD undertakes to monitor and facilitate the use of these funds by program area, including rule and policy development, planning, program awards, program management, technical assistance, and reporting. The plan also included the specific activities, tasks, and resources used for each of these existing program areas, identifying approximately 91 existing FTEs in its headquarters and field offices to support these activities. According to HUD’s management plan, HUD’s current staffing level is sufficient to manage its existing Capital Fund program, but the agency could more efficiently utilize its current resources. As a result, HUD plans to realign current staff to focus on its core missions, including Recovery Act responsibilities. HUD’s management plan for its existing Capital Fund program states that HUD currently has the staff and resources required to effectively implement its core programs. However, officials in two HUD field offices we visited stated that they revised their oversight strategies for their regular programs to accommodate Recovery Act work. For example, officials in one HUD field office told us that most of the field office’s resources have been devoted to the Recovery Act and, as a result, staff have done less on-site monitoring of non-Recovery Act grant recipients. However, the officials noted that their staff still have been conducting remote monitoring of all recipients, although staff have not conducted any asset management reviews of grant recipients this year. At another HUD field office, an official told us that since the Recovery Act was passed, Recovery Act work has been a top priority for HUD nationwide. He noted that other housing work, especially conducting on-site reviews, has been deferred to meet Recovery Act requirements. According to HUD headquarters officials, as a consequence of field office monitoring of Recovery Act requirements, field staff conducted reviews (on-site or remote) of every housing agency in the country, something they would not have accomplished in the course of their routine monitoring activities. HUD officials also stated that field staff were able to strengthen housing agency officials’ knowledge of contract administration and forge stronger relationships with a greater number of housing agencies as a result of their Recovery Act oversight. HUD officials told us they have been using the same quality review procedures for the fourth recipient reporting period as they did for the third reporting period. However, HUD also issued additional guidance to recipients to help them more accurately report job-related data. 
Although HUD does not play a direct role in compiling the recipient data, officials noted they continued to support recipients’ report preparation by providing technical assistance, including issuing guidance, conducting conference calls, manning a call center, and transmitting regular e-mail correspondence. Officials also told us that their data quality reviews of recipient reports continued to include automated data checks to flag values in specific fields that were incorrect or that fell outside of parameters that HUD had defined as reasonable and to generate comments notifying housing agencies of the potential errors. HUD officials told us that they also have been using the same processes from the third reporting period for checking for and addressing errors in job-count totals for the fourth reporting period. The officials noted that their on-time reporting rate for the fourth reporting period was high and their error rate continues to decline with each reporting cycle. We are in the process of assessing the transparency of information reported in Recovery.gov for three HUD Recovery Act programs, including formula grants awarded through the Public Housing Capital Fund program. As we reported in May 2010, officials with two housing agencies reported using an out-of-date version of HUD’s jobs-counting calculator for the third round reporting period. To ensure that housing agencies use the correct jobs calculation, we recommended that HUD clearly emphasize to housing agencies that they discontinue use of the outdated jobs calculator provided by HUD in the first round of recipient reporting. In response to our recommendation, HUD sent an e-mail to housing agencies on June 30, 2010, that explicitly instructed them not to use the outdated jobs-counting calculator, as it was not correctly computing the FTE calculation per updated OMB guidance. This e-mail also included a link to HUD’s new online jobs-counting calculator and instructed housing agencies to use this calculator for the July, and all future, reporting periods. OMB’s December 2009 guidance states that to the maximum extent practicable, job information should be collected from all subrecipients and vendors in order to generate the most comprehensive and complete job impact numbers available. As we reported in May 2010, for the third reporting period, at least one housing agency did not report job information for subcontractors even when the subcontractors were providing essential goods and services for Recovery Act-funded projects. We recommended that HUD issue guidance to housing agencies that explains when the prime recipient should report FTEs attributable to subcontractors. In response to our recommendation, HUD notified housing agencies in a June 30, 2010, e-mail that it had developed additional guidance for housing agencies to use when determining whether prime recipients should report FTEs for subcontractors and provided a link to the guidance on its Web site. The guidance noted that housing agencies should include Recovery Act-funded hours that contractors and subcontractors worked as part of their FTE calculation. 
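Because the updated guidance asks prime recipients to fold contractor and subcontractor hours into a single quarterly FTE figure, a short worked example may help. The sketch below is illustrative only and is not HUD's jobs-counting calculator; it assumes the common OMB convention of dividing Recovery Act-funded hours worked in a quarter by the hours in a full-time quarterly schedule (520 hours here, that is, a 40-hour week over 13 weeks), and the worker groups and hours are hypothetical.

```python
def quarterly_fte(recovery_act_hours_by_group: dict[str, float],
                  full_time_hours_per_quarter: float = 520.0) -> float:
    """Sum Recovery Act-funded hours across the housing agency's own labor,
    its contractors, and its subcontractors, then divide by the hours in a
    full-time quarterly schedule to express the result in FTEs."""
    total_hours = sum(recovery_act_hours_by_group.values())
    return total_hours / full_time_hours_per_quarter

# Hypothetical hours for one housing agency's reporting quarter.
hours = {
    "housing agency force-account labor": 1_040,
    "general contractor": 2_600,
    "electrical subcontractor": 520,
}
print(quarterly_fte(hours))  # 8.0 FTEs reported for the quarter
```

Only hours actually funded by the Recovery Act belong in the numerator, which is consistent with the guidance's instruction to include Recovery Act-funded hours that contractors and subcontractors worked.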
The Recovery Act established two funding programs that provide capital investments to Low-Income Housing Tax Credit (LIHTC) projects: (1) the Tax Credit Assistance Program (TCAP) administered by HUD and (2) the Grants to States for Low-Income Housing Projects in Lieu of Low-Income Housing Credits Program under Section 1602 of the Recovery Act (Section 1602 Program) administered by the Department of the Treasury (Treasury). Before the credit market was disrupted in 2008, the LIHTC program provided substantial financing in the form of third-party investor equity for affordable rental housing units. As the demand for tax credits declined, so did the prices third-party investors were willing to pay for them, which created funding gaps in projects that had received tax credit allocations in 2007 and 2008. TCAP and the Section 1602 Program were designed to fill financing gaps in planned tax credit projects and jumpstart stalled projects. For TCAP, the Recovery Act requires HUD to obligate $2.25 billion to 52 housing finance agencies (HFA) for gap financing of LIHTC projects that included some LIHTCs. HFAs had to give priority to projects that were “shovel ready” and expected to be completed by February 2012. HFAs and project owners face three milestones for committing and disbursing TCAP funds. HFAs had to commit 75 percent of their TCAP awards by February 16, 2010. According to HUD officials, all HFAs met the February 16, 2010, deadline except for South Carolina because it did not have enough affordable housing projects that needed TCAP assistance. The Recovery Act requires that HFAs disburse 75 percent of the TCAP awards by February 16, 2011. The Recovery Act requires that project owners spend all of their TCAP funds by February 2012. As of the end of July 2010, HUD had outlayed 32.6 percent (about $733 million) of the TCAP funds, up from 16.5 percent as of April 30, 2010, that we reported in May (see fig. 28). Although HUD originally made obligations of $2.25 billion to HFAs, HUD officials told us that they have taken back TCAP funds from HFAs that either did not commit 75 percent of funds by February 2010, did not have enough demand for the funds, or both. South Carolina—which did not meet the February deadline and did not have enough demand—returned $13 million of its original $25.4 million TCAP allocation. Alabama, which did meet the deadline, returned $3 million of its original $32 million allocation because it did not have enough demand for TCAP funds and would not be able to use all the funds. (We further discuss why some HFAs were challenged to meet deadlines for TCAP and the Section 1602 Program later in this section.) HUD officials told us that they plan to reallocate the $16 million through a competitive process and develop criteria to be issued in a HUD notice this fall. HUD officials said they expect only HFAs that have demonstrated program progress to be eligible for consideration since the existing TCAP deadlines for disbursing 75 percent of funds by February 2011 still would apply. HUD officials told us that they heard informally from about 20 HFAs interested in receiving additional TCAP funds. For the Section 1602 Program, Treasury had obligated $5.5 billion and outlayed about 25.5 percent ($1.4 billion) as of July 31, 2010, up from the 13.6 percent outlayed as of April 30, 2010 (see fig. 29). Unlike HUD, Treasury has not taken back any funds because the first deadline for the HFAs to disburse funds is December 31, 2010. 
Specifically, under Section 1602 Program rules, HFAs must commit the funding to projects by December 2010 and can continue to disburse funds to awarded projects through December 31, 2011, provided that the project owners spend at least 30 percent of the eligible project costs by December 31, 2010. HFAs must disburse all Section 1602 Program funds by December 2011, or the funds the HFAs have not disbursed must be returned to Treasury. Originally, Treasury’s guidance required that HFAs had to make all Section 1602 Program disbursements by December 31, 2010, or return the undisbursed funds to Treasury, but Treasury extended the disbursement deadline to December 31, 2011. In the six previous Recovery Act reports, we have collected and reported data on programs receiving substantial Recovery Act funds in 16 selected states and the District of Columbia. These 16 states and the District of Columbia together have about 65 percent of the U.S. population and will receive an estimated two-thirds of the TCAP funds and about 60 percent of the Section 1602 Program funds. Figure 30 lists the TCAP and Section 1602 Program obligations and outlays for the 16 states and the District of Columbia as of July 31, 2010. According to HUD and Treasury data, nearly all the HFAs have made progress in disbursing TCAP and Section 1602 Program funds to project owners. Figure 30 shows HUD outlays to HFAs in the 16 selected states and the District of Columbia. Because HFAs must disburse their TCAP and Section 1602 Program funds to project owners within 3 days, these figures would closely track disbursements. As shown in figure 30, Arizona, Iowa, and the District of Columbia have drawn down more than 50 percent of their TCAP funds, and Iowa, North Carolina, and Pennsylvania have drawn down more than 50 percent of their Section 1602 Program funds as of July 31, 2010. When we reported in May, North Carolina was the only state out of the 16 selected states and the District of Columbia to have drawn down more than 50 percent of its funds from one of the programs. However, the level of outlays, and therefore HFA spending, continues to vary considerably across the states. We previously reported that the difference in spending across the 16 states and the District of Columbia depended on when the HFA requested Section 1602 Program funds, the level of construction activity, and the HFA’s implementation timeline. For example, Treasury officials told us that while 40 HFAs had requested funds by September 2009, the Mississippi Home Corporation (MHC) requested funds for the first time in February 2010. According to Treasury officials, Treasury outlayed funds to MHC for the first time on August 12, 2010. MHC officials told us that they expect to close on most of their projects in August and September 2010, which is when MHC will sign agreements with project owners that will meet HFA requirements to begin disbursing funds. HFA officials, project owners, and third-party investors that we interviewed generally agreed that TCAP and the Section 1602 Program provided funds to many stalled LIHTC projects and enabled them to move forward. For example, some owners of stalled projects said that their projects could not have continued without TCAP and Section 1602 Program funds. TCAP and Section 1602 Program funds also made some rural projects and special needs population projects—such as farm worker housing, housing for formerly homeless, and housing for the disabled— more feasible and attractive to third-party investors. 
Officials from one HFA we interviewed told us that investors scrutinize the financial outlook for rural projects because they expect that the income for these projects will be tight or non-existent. HFAs, project owners, and investors also told us that in a difficult market, investors are less likely to risk investments in these types of projects. We interviewed nine HFAs that awarded financing to 385 projects—154 (40 percent) were rural and 37 (10 percent) were special needs population projects—which may not have moved forward without the assistance of the TCAP and Section 1602 Program. LIHTC projects that received TCAP and Section 1602 Program funds typically had less investor equity than LIHTC projects had prior to the economic downturn. The decrease in investor equity varied for each project and by state. The nine HFAs that we interviewed reported that equity in LIHTC projects prior to the economic downturn generally ranged from 50 to 80 percent of the total financing for a project. In contrast, for projects receiving TCAP and Section 1602 Program funds from the nine HFAs that we interviewed, investor equity represents on average 43 percent of a project’s overall financing. For LIHTC projects receiving TCAP and Section 1602 Program funds, the decrease in investor equity has been offset by the increase in federal funds. Some HFA officials, project owners, and third-party investors that we interviewed currently believe that demand for LIHTCs is re-emerging since the credit markets were severely disrupted in 2008. However, some of them said that third-party investor demand still was not at a level where most projects were feasible. Investor demand and tax credit prices have been picking up in some states and regions more than others. According to some investors with whom we spoke, investor demand and tax credit prices tended to be higher on the coasts than in the middle of the country. Investors also preferred certain types of projects, such as those that were larger, located in urban areas, and catered to seniors. Projects that were smaller, located in rural areas, and targeted special needs populations more often lacked third-party investors. Since the economic downturn, the composition of third-party investors also has changed. First, Fannie Mae and Freddie Mac, which according to an investor, had bought the largest share of tax credits (40 percent in 2006) and were the primary third-party investors in special population projects, exited the marketplace. Second, according to another investor, the low tax credit prices have resulted in higher yields that in turn have attracted “yield-driven” investors, including insurance companies, large corporations, and individuals. Banks, which had invested in the tax credit markets primarily because of Community Reinvestment Act requirements, have continued to do so. However, if tax credit prices continue to rise, yields will decrease, which may cause “yield-driven” investors to exit the market. Because the LIHTC market is still rebounding and some states continue to face challenges attracting investors, the majority of HFAs that we interviewed support temporarily extending the Section 1602 Program. If the Section 1602 Program were not extended, some project owners anticipated scaling back development activities and being more selective about which projects they develop. 
Some HFAs and project owners who supported the extension of the Section 1602 Program believe the funds would be most useful if used to fill financing gaps, to fund rural and special needs population projects, and to provide grants or funds to nonprofits that are developing projects that target such needs. Some HFAs and projects may face challenges in meeting TCAP and Section 1602 Program deadlines for reasons ranging from increased workload to the time needed to assemble financing to construction delays. Some HFAs reported that the addition of TCAP and Section 1602 Program transactions this year has increased their workloads significantly. One HFA reported that it typically closed from 18 to 22 projects annually, but this year would close 60 projects. Two other HFAs reported that they typically closed from 8 to 15 projects annually, but expected to close 50 and 85 projects this year, respectively.

Most HFAs Likely Will Meet TCAP Deadlines, but Those That Have Delayed Disbursing TCAP Funds May Face Challenges

For TCAP, the potential challenges HFAs face appear to be related to how they structured the timing of the TCAP disbursements. According to HUD officials, it is difficult to determine which states may have difficulty meeting TCAP spending deadlines because states took different approaches to awarding funds. A few HFAs may be at a disadvantage in terms of meeting 2011 and 2012 deadlines because they chose to award TCAP funds late in the development process to ensure that commitments from other financing sources are in place and the projects will be successfully completed. As a result, these HFAs have disbursed a small percentage of their TCAP funds to date. The HFA officials we interviewed in nine states did not believe the TCAP disbursement deadline was a challenge for their projects.

Treasury Plans for Ensuring That Section 1602 Program Projects Meet Spending Deadlines Remain Unclear

According to some HFA officials, some project owners may face challenges meeting the 30 percent spending deadline (December 31, 2010) for Section 1602 Program projects for reasons that have affected some projects allocated Section 1602 Program or TCAP funds, including the time HFAs needed to assemble or disburse funding, litigation, and routine construction delays. According to HFA officials with whom we spoke, some projects needed to wait for FHA mortgage insurance approval or approval from other sources of subsidies before receiving final HFA approval. Officials from one HFA with whom we spoke said that while all their Section 1602 Program projects were shovel-ready, getting all parties educated and comfortable about the requirements of the new program took time. Some projects had been stalled for months, and it took time to “ramp up” all parties engaged in the projects. In addition, some projects encountered delays during the construction process due to weather or other issues typical of development projects, such as waiting for construction permits from local agencies. Other projects were delayed due to legal issues unrelated to the Section 1602 Program. For example, Florida Housing Finance Corporation officials told us that as of August 2010, about $22.3 million in Section 1602 Program funds were allocated to projects involved in litigation.
Furthermore, HFAs that delayed the decision to participate in the Section 1602 Program or that had a slow start launching Section 1602 Program projects have collectively had less time to spend eligible funds than HFAs in states where funds were awarded earlier. For example, the MHC did not request Section 1602 Program funds from Treasury until February 2010. MHC told us that it is concerned that each of its 17 projects receiving Section 1602 Program funds may not meet the 30 percent spending deadline. One HFA said that a typical LIHTC project would take about 15 months from applying for funds to closing the project and commencing construction. Our review of projects in nine states shows that these HFAs had not yet awarded Section 1602 Program funds to 75 projects as of June 30, 2010. Further, as of June 30, 2010, about 39 percent of Section 1602 Program projects (98 of 252 projects) that have been awarded funds in these nine states have not yet closed, which is the first step to being able to draw funds from entities that provide financing. Treasury initially required HFAs to return all Section 1602 Program funds not disbursed by December 31, 2010. In a regulation issued August 31, 2009, Treasury extended the deadline for disbursing Section 1602 Program funds by 1 year, provided project owners met the 30 percent spending requirement. Treasury had determined that completion of projects by December 31, 2010, was too restrictive and would preclude funding of otherwise eligible projects. Treasury officials told us that the new 30 percent requirement was put in place to assure that project owners were making some progress by the original (December 31, 2010) deadline date. Missing the deadline for the 30 percent spending requirement could have significant implications for the viability of Section 1602 Program projects. If project owners failed to meet this spending deadline, they would not be eligible to receive any additional Section 1602 Program funds. If prevented from receiving the rest of their Section 1602 Program award, project owners might not be able to find replacement financing, and committed financing sources might withdraw their funds. If projects could not secure replacement financing quickly, they would be unlikely to be completed in accordance with Section 1602 Program and LIHTC requirements and would be stalled again. Under such a scenario, the HFA would be responsible for recapturing any Section 1602 Program funds that were disbursed to the project prior to the 30 percent spending deadline. Treasury officials told us that they plan to enforce the deadline requirement and would provide written guidance to HFAs that will describe the kind and format of information to be reported to Treasury to document whether projects have met the spending deadline. Treasury officials told us that they do not plan to collect this information until after the deadline has passed. Without a plan in place for handling projects that do not meet the Section 1602 Program deadline, Treasury risks further project interruptions, including the possible loss of any job creation associated with projects that must be discontinued if alternate financing cannot be found. TCAP and the Section 1602 Program require HFAs to assume a greater project oversight role than in the standard LIHTC program. Under the LIHTC program, HFAs need not monitor construction disbursements, but must report that projects are completed and occupied in accordance with LIHTC requirements and deadlines.
For long-term monitoring under the LIHTC program, third-party investors in the project perform long-term asset management, and HFAs perform limited compliance reviews. HFAs must review LIHTC projects at least annually to determine project owner compliance with tenant qualifications and rent and income limits. Additionally, every 3 years the HFAs must conduct on-site inspections of all buildings in each LIHTC project and inspect at least 20 percent of the LIHTC units and resident files associated with those units. However, under TCAP and the Section 1602 Program, HFAs must monitor the disbursement and use of funds throughout the construction period. HFAs also must perform long-term asset management, which imposes ongoing responsibilities on the HFAs for the viability of each project. An HFA’s asset management activities may include monitoring current financial and physical aspects of project operations. For example, an HFA may perform analyses or approvals of operating budgets, cash flow trends, and reserve accounts and conduct physical inspections more frequently than every 3 years. Asset management activities also examine long-term issues related to plans for addressing a project’s capital needs, changes in market conditions, and recommendations and implementation of plans to correct troubled projects. HFAs also ensure compliance with LIHTC requirements as part of their asset management activities. Moreover, HFAs are responsible for returning TCAP and Section 1602 Program funds to HUD and Treasury, respectively, if a project fails to comply with LIHTC requirements. Given the increase in responsibilities and risks to HFAs, HFAs have developed approaches for oversight during the construction period as well as long-term asset management over the 15-year tax credit compliance period. These approaches are designed to monitor the physical and financial health of projects and compliance with LIHTC affordability restrictions. In response to an open-ended question in our survey asking what changes in oversight activities HFAs planned to put in place to assure compliance with TCAP and the Section 1602 Program, 37 HFAs said they would make some changes in oversight activities, 11 said they would make no changes in oversight activities, and 6 said they were not sure what changes they would make or they did not answer the question. Changes in activities varied across HFAs. For example, of the 37 HFAs that said they would make some changes, 13 HFAs noted that they would make changes to their disbursement process to more closely track the use of TCAP and Section 1602 Program funds, 9 HFAs said they would increase overall monitoring of projects or reporting required by project owners, and 7 HFAs planned to implement or increase the frequency of site visits or inspections. Eleven HFAs said they would not change their oversight activities, but 6 of those 11 HFAs noted that they would rely on their experience in and established procedures for monitoring their lending programs or disbursement of other federal funds.

HFAs Have Increased Oversight during Construction Phase of TCAP and Section 1602 Program Projects

HFAs have been providing greater oversight during the construction period for projects that receive TCAP and Section 1602 Program funds.
This oversight includes monitoring disbursements of the program funds, overseeing the construction process, and ensuring compliance by TCAP projects with federal cross-cutting requirements such as Davis-Bacon wage requirements and the National Environmental Policy Act of 1969 (NEPA). For example, HFAs must review payrolls for all TCAP projects to ensure that project owners and contractors are paying prevailing wages to individuals employed in the construction of the projects. HFAs also had to ensure that all TCAP projects complied with the NEPA environmental review process prior to receiving any TCAP funds. However, according to HUD officials, up to one-third of HFAs lacked prior experience in overseeing compliance with these federal cross-cutting requirements. Under TCAP and the Section 1602 Program, HFAs have been disbursing a greater volume of funds than in the past and, as a result, have taken additional steps to limit risk and increase monitoring. For example, one HFA we interviewed expected to disburse the same amount of funds in 1 month as it previously disbursed annually. For standard LIHTC projects, HFAs only allocate tax credits and do not disburse funds. HFAs assume greater risk by disbursing TCAP and Section 1602 Program funds because they are responsible for repaying funds to HUD and Treasury, respectively, in the event of noncompliance. The nine HFAs we interviewed are supporting an average of 23 to 62 percent of the total development costs of projects through awards of TCAP or Section 1602 Program funds. Prior to the Recovery Act, three of these HFAs typically did not provide loans or grants to LIHTC projects, but now they are providing an average of 23 to 47 percent of total project financing through TCAP and Section 1602 Program project awards. The remaining six HFAs typically funded up to about 33 percent of the total project financing for some LIHTC projects through other loan programs prior to the Recovery Act. HFAs have mitigated risks by broadening the scope of guarantees and by requiring project owners to certify the accuracy of the information provided. Approaches to overseeing the construction process varied across HFAs, although most HFAs we interviewed planned to apply their existing construction oversight framework to oversee TCAP and Section 1602 Program projects. These activities include site inspections of varying frequency. For example, some HFAs we interviewed planned to conduct monthly site inspections, while two HFAs said that construction superintendents would visit project sites twice per month or more frequently if needed. Site inspections help confirm whether work performed on a project is carried out as planned and approved by the HFA. One HFA also told us that it planned to facilitate communication among project owners, investors, and other lenders by sharing information or holding more frequent meetings with these stakeholders. HFAs and project owners told us that meeting Davis-Bacon wage reporting and NEPA environmental review requirements for TCAP projects required time and resources, and it was easier for HFAs with prior experience to meet the requirements. We previously reported that HFAs viewed Davis-Bacon and NEPA requirements as a challenge and followed up with HFAs and project owners on ways that they have been meeting the requirements.
To comply with Davis-Bacon wage requirements, some HFAs developed new processes for data collection and planned to apply additional scrutiny to data received from project owners or to require more frequent reporting, and other HFAs developed training for project owners. To comply with NEPA requirements, some HFAs and project owners drew upon their experience administering HOME funds, which also require NEPA compliance. Project owners said that in some cases they allocated additional resources to projects to complete environmental reviews ahead of project closings.

In Response to New Asset Management Responsibilities, HFAs Have Increased Long-Term Monitoring and Put in Place Stricter Requirements for Project Owners

In response to the new asset management responsibilities HFAs have accepted under TCAP and the Section 1602 Program, all HFAs we interviewed reported that they had strengthened their procedures for long-term monitoring to meet the program requirements, mitigate risks, and help ensure projects’ long-term physical and financial viability. Approaches to long-term asset management varied depending on an HFA’s resources, workload, and asset management experience. However, all nine of the HFAs we interviewed have implemented some oversight changes, such as increasing the number of inspection visits over the 15-year tax credit compliance period and the frequency of reporting, as well as enhancing financial monitoring of projects receiving TCAP and Section 1602 Program funds when compared with standard LIHTC projects. Of the nine HFAs we interviewed, four HFAs said that instead of inspecting projects every 3 years as required by the LIHTC program, they will inspect projects annually or more often. Seven HFAs said that they will require reports from project owners on a monthly, quarterly, or as-requested basis that may include information such as project income statements. Five of the nine HFAs we interviewed have the ability to approve and remove the project’s management agent and general partner of the project owner if the project is in noncompliance with LIHTC requirements or the terms of the HFA’s agreement with the project owner. Two HFAs said that they have new software systems in place to manage asset management activities, and four said they plan to provide additional training for staff to manage the monitoring and reporting for TCAP and Section 1602 Program projects. HFAs said that they have also strengthened financial requirements for project owners. All nine HFAs require annual financial audits or reports. Other changes HFAs have made include requiring or performing capital needs assessments to determine the condition and expected life of the physical infrastructure, calculating replacement costs, and assessing whether a project’s replacement reserve will be adequate to meet the expected capital needs of a project. Some HFAs also require project owners to provide guaranties that they will ensure compliance with program requirements or be personally liable to repay TCAP and Section 1602 Program funds to the HFA. Some HFAs also have strengthened requirements for financial reserves or changed how and when the reserves can be accessed to ensure that there is a source of funds to draw upon in the event the project encounters operating difficulties.
Some project owners with whom we spoke said that HFAs have been careful in structuring requirements to protect the HFAs’ interests and that in some cases the HFAs’ requirements and plans for monitoring were stricter than those typically required by third-party investors. Nearly all HFAs we interviewed noted that a third-party investor provides additional oversight and monitoring or financial interest in a project. TCAP requires tax credits to remain in transactions, and project owners typically sell the tax credits to third-party investors. Therefore, most TCAP projects have some level of private investment and oversight. In contrast, the Section 1602 Program allows HFAs to exchange all of the tax credits awarded to a project in return for Section 1602 Program funds. As a result, many Section 1602 Program projects do not have third-party investor oversight. However, some HFAs have required third-party investor participation in all or the majority of their Section 1602 Program projects, and they plan to work in coordination with investors on asset management activities. Based on information from our survey, 32 HFAs expected to have a total of 485 projects without third-party investors out of a total of 825 projects expected to be financed with Section 1602 Program funds. In our survey, about half of the HFAs planned to outsource asset management functions for TCAP and the Section 1602 Program. Based on our interviews with nine HFAs, we found that HFAs with past asset management experience and HFAs with a smaller volume of projects often chose to conduct their own asset management activities over the 15-year compliance period. In contrast, HFAs with little asset management experience or many projects requiring oversight often chose to hire a third-party contractor to perform asset management activities. However, one HFA in each of these categories chose to work in coordination with individual investors on asset management activities rather than relying solely on its own asset management efforts or the work of outsourced asset managers. Five of the nine HFAs we interviewed are conducting their own asset management activities because they have significant experience managing loan portfolios or because the number of projects is manageable. One HFA we interviewed has 35 years of asset management experience, and two have 20 years of asset management experience. One of these HFAs also conducts asset management for HUD’s performance-based contract administration program and has won awards for its asset management systems. Six HFAs we interviewed said they have or are developing policies, procedures, or “watch lists” to assess project performance and identify projects that may be in need of additional monitoring. One of the two HFAs we interviewed planning to outsource asset management activities has contracted with a national syndicator to provide asset management for its projects without private investment. The syndicator has said that it will provide the same asset management services to the HFA as it would provide to investors in its LIHTC investment funds. The HFA has a staff person that is receiving an asset management certification and will work closely with the syndicator to ensure that asset management functions are performed in accordance with the syndicator’s scope of work. 
The syndicator’s scope of work covers both the leasing and asset management phases and includes activities such as providing quarterly project performance reports that rate the risk of the project based on market conditions and project owner capacity, conducting annual property inspections, and performing annual long-term financial analysis. The syndicator said that it helped the HFA structure a more comprehensive scope of work because it felt that the asset management activities started too late to ensure project success. HFAs noted a range of challenges associated with asset management. One HFA we interviewed said that explaining the HFA’s new asset management role to developers has been a challenge because the HFA does not usually act as a lender or party with long-term interests in the projects. Rather, the agency’s primary role is that of tax credit allocation with compliance monitoring as required by the IRS. HFAs also noted the cost of asset management as a challenge. A few HFAs are charging low or no fees for asset management because of the stress the fee puts on the project budgets. Other HFAs have estimated a fee based on market research and costs associated with their current operations, but they are not sure the fee will be sufficient to cover costs. Most HFAs we interviewed estimated that their initial asset management costs would be highest during the first years implementing TCAP and the Section 1602 Program, including the initial construction monitoring period. For example, one HFA estimated that 20 to 30 percent of its asset management costs would be incurred within the first 2 years of overseeing TCAP and Section 1602 Program projects. However, some HFAs and investors noted future challenges as projects age. They said that between the fifth and twelfth years of a project’s life, projects may begin to show signs of physical and financial stress due to capital replacement needs, diminishing reserves, or resident turnover. One investor said that HFAs may not have the financial resources to support troubled projects in the same way as an investor would. HUD officials told us that the agency has been relying on existing monitoring systems to determine whether funds have been spent properly or to track projects that have not been complying with the terms and conditions of TCAP agreements. The monitoring systems consist of HUD Office of Inspector General (OIG) audits (thus far ongoing in three states), HUD Office of Fair Housing and Equal Opportunity (OFHEO) reviews in 10 states, HOME reviews done by HUD field offices when projects include both TCAP and HOME funds, and HFA reviews. HUD officials told us that they can rely on existing Office of Community Planning and Development (CPD) field staff to carry out HUD’s monitoring and also would plan to look for patterns of problems identified by the OIG, OFHEO, CPD staff, or HFAs during oversight and review activities. HUD officials noted that the agency’s emphasis so far has been on the obligation, outlay, and tracking of funds to the HFAs and their disbursement to project owners. In addition to HFAs, HUD officials expect that third-party investors will monitor TCAP projects for compliance in the same way that these stakeholders have been responsible for monitoring LIHTC projects. TCAP requires tax credits to remain in transactions, and project owners typically sell the tax credits to third-party investors.
However, we found that in some cases projects included a limited amount of LIHTCs and project owners chose not to sell these credits to a third-party, thereby limiting or precluding third-party oversight of these projects. In traditional LIHTC projects, third-party investors play an important role in ensuring compliance with tax credit program requirements because they risk losing their ability to claim the tax credits if the project is not in compliance with these requirements. Some HFAs told us that they will coordinate with and rely on reviews and audits that investors and private construction lenders perform to satisfy the HFAs’ asset management obligations under TCAP. In cases when an HFA is coordinating with a third-party investor, the investor may provide early warning information that would be useful to the HFA if the HFA had to act quickly to assist the project or ensure compliance with TCAP requirements. But, some TCAP projects received a nominal amount of tax credits, and project owners chose not to sell the tax credits. These projects lack the additional oversight provided by third- party investors. In these cases, HFAs may be the sole monitor, other than HUD, ensuring that funds are spent properly and that the project owners comply with TCAP terms and conditions. HUD officials acknowledged that in the absence of a significant third-party investment, the amount of overall scrutiny a TCAP project would receive is reduced; however, HUD officials told us that at this point in time they were not aware of how many projects either had nominal LIHTC awards or lacked third-party investors. Our limited review showed that some TCAP projects in Florida received a nominal amount of tax credits and lacked third-party investors that otherwise would provide an added layer of oversight for compliance with TCAP requirements. Specifically, we found that 13 of 25 projects (52 percent) that were allocated TCAP funds in Florida had received a nominal amount of LIHTCs. The Florida Housing Finance Corporation (FHFC) explained that it had awarded $100 in LIHTCs to each of these projects and that the project owners made $650 equity investments to the projects in return for the tax credit awards instead of selling the tax credits to a third-party investor. FHFC plans to institute oversight activities for all of its TCAP and Section 1602 Program projects. Nonetheless, HUD has not required HFAs to enhance their oversight or take other actions to account for the absence or limited involvement of third-party investors. Without the oversight provided by third-party investors and with the limited monitoring planned by HUD, these TCAP projects may constitute a higher risk to HUD and to the HFAs that they will become troubled or fall out of compliance with LIHTC requirements. In addition, although HUD’s monitoring strategy relies partly on monitoring by third-party investors and HOME program reviews, HUD officials told us that they will not know how many TCAP projects have third-party investors or how many also have HOME funds until projects are completed and HFAs submit final reports on the projects. Therefore, HUD cannot currently determine the number of projects that are being monitored by others. Additionally, HUD does not currently know how many TCAP projects will be covered through HOME reviews. According to HUD officials, once projects are complete and all project information has been reported to HUD, it plans to use that information to tailor a monitoring plan to these projects. 
It will be important for HUD’s TCAP monitoring strategy to recognize the differences in risk for projects without third-party investor oversight and those with investor oversight, as well as those projects not covered by HOME reviews. As discussed above, HUD officials said they have been focused on getting Recovery Act funds to HFAs. Since beginning TCAP, HUD has drawn upon limited staff resources in headquarters to administer and track the spending of TCAP funds—its Office of Affordable Housing Programs administers TCAP, and four existing headquarters staff from the HOME program work on TCAP (three part-time and one full-time). HUD officials noted that the Recovery Act does not set aside administrative resources for HUD either to implement the TCAP program, which existing HOME program staff performed, or to monitor HFAs for compliance. In comparison, the Recovery Act provided additional resources for monitoring under the Neighborhood Stabilization Program, which HUD’s CPD also performs. Without a plan for identifying projects without third-party investor oversight and ensuring sufficient oversight when investors are absent, HUD will face constraints in ensuring that TCAP projects remain in compliance with program requirements, some of which apply for 15 years or more. Furthermore, without knowing whether projects involve third-party investors, HUD cannot focus its limited monitoring resources on the projects with the least oversight by others. Unlike HUD, which relies on existing program oversight resources, Treasury has developed a system to conduct compliance reviews to ensure that the HFAs are following the terms and conditions of the Section 1602 Program agreement and are providing oversight of the project owners receiving the awards. Treasury officials told us that their Office of the Fiscal Assistant Secretary received $3 million to administer the Section 1602 Program and the Section 1603 Renewable Energy Program from the total funds appropriated to Treasury for administrative expenses under the Recovery Act. According to Treasury officials, they have designed a risk-based system in which they plan to conduct compliance monitoring on-site for 23 HFAs and remote monitoring for the remainder of the HFAs by the end of calendar year 2010. Whether monitoring is conducted on-site or remotely depends on factors such as identified risks and the size of the grant. The review generally consists of an interview, followed by a review of program files, a review of a sample of project files, a review of financial management information, and a cross-check against the records held at Treasury. After the review is completed, if there are any findings, staff request a corrective action or action plan, depending on the nature and severity of the noncompliance. According to Treasury officials, if staff recommend a corrective action or action plan, Treasury will follow up to ensure that the HFA takes the necessary corrective action. If the agency fails to take the corrective action, Treasury will take steps to bring the HFA into compliance and, if necessary, recapture funds. As of August 2010, Treasury officials told us that they had completed nine compliance monitoring reviews and have been conducting six additional HFA reviews. Treasury officials said that the kinds of issues they found in their reviews relate to failure to properly document files, lack of a policy to handle fraud by project owners, and, in one case, unresponsiveness to Treasury’s request for documentation.
Treasury officials told us that these issues were often resolved during the compliance review, but that some issues required additional follow-up with the HFAs. In the case of the unresponsive HFA, Treasury officials said they have put a hold on the HFA’s Section 1602 Program funds until they are sure the HFA has provided all materials required to satisfy Treasury’s requests. Recovery Act recipient reporting requirements are different and more complex for TCAP than for the Section 1602 Program. More specifically, the Recovery Act describes recipient reporting requirements, including the reporting of estimated jobs created and retained. The Recovery Act recipient reporting requirements apply only to programs under Division A of the Recovery Act, which includes TCAP. The Section 1602 Program is under Division B of the Recovery Act and, therefore, is not subject to recipient reporting requirements. As Recovery Act-funded recipients, HFAs must file quarterly reports through FederalReporting.gov on a number of data elements, including the number of full-time equivalent jobs funded by TCAP funds during that quarter. Jobs must be counted in accordance with methodology provided by OMB. OMB guidance limits the number of jobs reported to the actual use of the funds in each quarter. In cases of construction funding based on a mix of financing sources, HFAs can count the jobs created or retained based on the proportion of TCAP funds. In addition to reporting through FederalReporting.gov, HFAs report information on TCAP projects through two HUD systems. HFAs use HUD’s Integrated Disbursement and Information System to report on the selection of TCAP projects by HFAs as well as disbursement of TCAP funds. HFAs also use the Recovery Act Management and Performance System to report on project compliance with environmental reviews. Although the Section 1602 Program is not subject to recipient reporting, Treasury chose to collect project information through quarterly performance reports submitted by HFAs on an Excel spreadsheet. HFAs need only make one report of all jobs created or retained by Section 1602 Program funds for each project. HFAs submit estimated information on the number of FTE jobs to be created or retained by the entire project with the first quarterly report for each project. The number of jobs reported to Treasury need not be reduced to reflect parts of the project not funded under the Section 1602 Program. Except for requiring the use of FTEs, Treasury has not issued detailed guidance specifying job estimation methodology under the Section 1602 Program. As a result, job counts between the programs and across HFAs are not comparable. About two-thirds of the HFAs in our survey said that they will conduct a review of the information being provided by the project owners, but others said that they relied on signed statements from the project owners attesting to the accuracy of the jobs estimates. Furthermore, because of the differences in job reporting methodology for TCAP and the Section 1602 Program, job counts reported for the programs varied widely. We previously reported that some HFAs were concerned about underreporting jobs that TCAP funds created because of OMB’s requirement that they count only jobs directly funded by TCAP. They said that because projects funded under TCAP would not have moved forward without TCAP funds, all the jobs associated with the projects should be counted.
For example, $2 million in TCAP funds could enable an $8 million project to be constructed that otherwise would not have been built, but only the jobs directly related to the $2 million TCAP expenditure would be reported. Although constrained by limited resources or time, HUD and Treasury developed two new programs, TCAP and the Section 1602 Program, respectively, that are designed to provide capital investment to LIHTC projects hit hard by the economic crisis. TCAP and the Section 1602 Program have had a strong impact on the LIHTC market. However, our review identified two areas of concern: one that relates to HUD’s identification of higher-risk TCAP projects and another that relates to challenges that some project owners may face in meeting a December 2010 deadline for spending funds in Treasury’s Section 1602 Program. Under TCAP, HFAs have increased responsibilities for asset management and monitoring compliance of project owners with the terms and conditions of the program. However, some projects with a nominal amount of tax credits may lack the benefit of oversight by third-party investors. Nonetheless, HUD has not identified projects that lack this additional level of oversight and thus may be at higher risk of noncompliance with TCAP and LIHTC requirements. Although HUD relies in part on HFAs to provide oversight, HUD does not know the extent to which the HFAs will provide additional oversight for projects that lack third-party investors. HUD is relying on existing monitoring systems and resources, but has not fully identified those projects that may be subject to review under its existing system (such as TCAP projects that also have HOME funds) or developed additional guidance or oversight of TCAP projects where there is little or no third-party oversight. HUD could take a more active role in monitoring TCAP projects—first by identifying those projects that may present a higher risk of noncompliance, and second by identifying those projects that also have HOME funds. HUD could also more effectively use limited oversight resources by using a risk-based approach that considers whether a TCAP project has third-party investors and whether HFAs are providing enhanced oversight. Likewise, by gathering information about the number of the projects that have TCAP and HOME funding, HUD could more effectively plan reviews and deploy staff. Without a more rigorous approach to oversight, HUD will be limited in its efforts to ensure that TCAP projects meet program requirements and continue to provide a source of affordable housing. Treasury’s regulations require project owners to spend 30 percent of eligible project costs by December 31, 2010, to continue receiving additional Section 1602 Program funds in 2011. However, some of the HFAs and project owners expressed concerns about meeting the 30 percent requirement because of unexpected delays stemming from the time needed to assemble funding, litigation, or construction or permitting issues. For instance, as of June 30, 2010, about 39 percent of Section 1602 Program projects that we reviewed have yet to close, leaving little time to meet the spending deadline. Projects that do not meet the deadline would not be eligible to receive any additional Section 1602 Program funds. In response, other sources of funding might withdraw from the projects, and project owners would face difficulty finding replacement financing. 
Thus, the 30 percent spending requirement might stop projects already under way—an ironic outcome for a program designed to jump-start stalled projects. Should there be a significant number of such projects, Treasury will be challenged in ensuring that the program achieves its intended goals. Specifically, although Treasury has been developing guidance for how HFAs should monitor project spending, it has yet to develop contingency plans in the event that significant numbers of projects stall again.

Because the absence of third-party investors reduces the amount of overall scrutiny TCAP projects would receive and HUD is currently not aware of how many projects lack third-party investors, HUD should develop a risk-based plan for its role in overseeing TCAP projects that recognizes the level of oversight provided by others. Treasury should expeditiously provide HFAs with guidance on monitoring project spending and develop plans for dealing with the possibility that projects could miss the spending deadline and face further project interruptions.

We provided a draft of this report to HUD for review and comment. HUD responded by saying it would identify projects that are not funded by HOME funds and projects that have a nominal tax credit award. HUD said it would make these identifications after projects are complete and develop a monitoring plan tailored to these projects. It will be important to ensure that HUD's approach includes a risk-based plan. We revised this section to recognize the actions that HUD proposed in its response. HUD also provided technical comments that we incorporated as appropriate.

We provided a draft of this report to Treasury for review and comment. Treasury responded by saying that it has taken a number of steps to ensure that HFAs and project owners have a complete understanding of the 30 percent deadline and are prepared to comply with that requirement. Further, Treasury said it plans to continue monitoring the impact of the 30 percent spending deadline on the program and to provide additional guidance necessary to address unforeseen or unexpected circumstances. In our review of nine HFAs, we found that about 39 percent of the projects awarded funds in those nine states had not yet closed; closing is the first step toward being able to draw funds from the entities that provide financing. Treasury's development of timely guidance may be particularly important because the December 31 deadline for spending 30 percent of program funding is quickly approaching. Treasury also provided technical comments that we incorporated as appropriate.

According to Recovery.gov, as of August 24, 2010, recipients reported on close to 200,000 awards indicating that the Recovery Act funded approximately 750,000 jobs during the quarter beginning April 1, 2010, and ending June 30, 2010. As reported by the Recovery Accountability and Transparency Board (the Board), the job calculations are based on the number of hours worked in a quarter that were funded under the Recovery Act, expressed in FTEs. Officials from many states reported that the recipient reporting process was, by this fourth round, becoming routine. Given that no new reporting guidance was issued by OMB during the quarter and that a time extension was again granted by the Board, recipients indicated they had few problems reporting. The FTE calculations, however, continue to be difficult for some recipients, as evidenced by our fieldwork in selected jurisdictions covering two energy programs.
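The underlying FTE arithmetic is itself simple. The following minimal Python sketch illustrates the quarterly FTE calculation described in OMB's guidance and the proportional attribution for mixed financing described earlier for TCAP; the 520-hour quarter, the labor hours, and the dollar figures are illustrative assumptions, not values drawn from the guidance or from any recipient report.

```python
# Quarterly FTE calculation as described in OMB guidance: hours worked and
# funded by the Recovery Act during the quarter, divided by the number of
# hours in a full-time schedule for that quarter (assumed here to be 520).
def quarterly_fte(recovery_act_hours, full_time_hours_per_quarter=520):
    return recovery_act_hours / full_time_hours_per_quarter

# For construction funded from a mix of sources (as under TCAP), only the
# share of hours proportional to the Recovery Act funds is counted.
def proportional_fte(project_hours, recovery_act_funds, total_project_funds,
                     full_time_hours_per_quarter=520):
    share = recovery_act_funds / total_project_funds
    return quarterly_fte(project_hours * share, full_time_hours_per_quarter)

# Illustrative example echoing the $2 million TCAP award on an $8 million
# project: 4,160 labor hours in the quarter, one-quarter attributable to TCAP.
print(round(proportional_fte(4_160, 2_000_000, 8_000_000), 1))  # 2.0 FTEs
```

Because the Section 1602 Program figures reported to Treasury need not be reduced for the portions of a project funded from other sources, the proportional step above would not apply to them, which is one reason the two programs' job counts are not comparable.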
We reviewed 74,249 prime recipient report records from Recovery.gov for this fourth round. This was 3,592 more than submitted in the previous quarter and represents about a 5 percent increase from round three. For our analyses, in addition to the round four recipient report data, we also used the round one, round two, and round three data as posted on Recovery.gov as of July 30, 2010. We examined recipient reports to identify the extent to which progress was being made in addressing several key limitations we had found in our prior reports, including the inability to link reports for the same project across quarters; reporting errors; unusual values, such as award amounts of zero, or relationships between values requiring further review because they are unexpected; or flaws in the data logic and consistency, such as reports marked final that show a significant portion of the award amount not spent. Our analysis showed better linkage of reports across quarters, but we still found instances where it appears reporting on projects was discontinued and may indicate possible issues with linking. The ability to link reports across quarters is critical to tracking project funding and FTEs that are key indicators of project results. For example, if two consecutive quarterly reports on the same project are not linked, they become identified as two separate records, having an impact on the cumulative funding calculation and the ability to associate FTEs reported in the separate quarters with one another. Similarly, mislinked reports would result in funding and FTEs from two different projects being incorrectly associated with one another. For the data in Recovery.gov, the award key data field is used to track recipient reports across quarters. In our previous report, we performed a series of matching operations between the three rounds of prime recipient reports using the award key data field. We extended these matches to the current fourth round of prime recipient reports to continue reviewing the tracking of reports from one quarter to the next and to identify potential mismatches of reports. We identified 1,111 fourth round prime recipient reports—1.3 percent of the fourth round prime recipient reports—that reflected a break in reporting (e.g., recipient reports that appeared in rounds one and four but not rounds two and three or, similarly, appeared in rounds two and four but not round three, etc.). Even though the number of prime recipient reports has increased for this fourth round, this is a smaller number of reports showing a break in reporting than we observed in the previous quarter. In our previous match across three rounds of reports, we identified 1,358 prime recipient reports that appeared in rounds one and three but not round two. We performed another analysis using the final report and project status indicator fields that also suggested some concerns with missing linkages or potential errors in one of the reporting fields. As before, we identified recipient reports that only appeared in prior rounds, but not in round four. For prime recipients whose last report appeared in one of the prior three rounds, we examined the final report status and the project status fields, as those would presumably be the last reports from these projects. 
As shown in table 11, of the 14,542 prime recipients that did not report in round four, 34 percent of their last prior round reports were not marked as final and 27 percent showed project status as being less than 50 percent complete or not started. These data suggest that, among other possibilities, the projects may not have been completed, the reports should have been linked to a report in a subsequent quarter, or the recipients were locked out of the reporting system. The percentage not marked as final is lower than what we observed in our previous analysis. However, the number of round three recipient reports that did not appear in round four, were not marked as final, and did not show the project as close to completion is quite similar to the number of discrepancies found in our last report. Based on these results, which show projects that were not marked as final and that appeared to be in the earlier stages of implementation, it seems reasonable to expect that a fourth round quarterly report should have been filed, but the necessary linkage has not been made. Alternatively, these fields may not show the correct status. During the most recent reporting quarter, recipients were able to reorganize unlinked or mislinked reports between rounds three and four. This may account for the reduction in the proportion of reports that did not appear in round four but were not marked as final.

In addition to our examination of report linking across quarters, we continued our monitoring of errors or potential problems by repeating many of the analyses and edit checks reported in our earlier reports using the fourth reporting period data. The results of such analyses can help improve the accuracy and completeness of the Recovery.gov data and inform planning for analyses of recipient reports over time. In general, the overall results were similar to what we observed in the previous round. For example, we identified 128 reports in which the Treasury Account Symbol (TAS) code did not match the agency name field and 115 in which the Catalog of Federal Domestic Assistance (CFDA) number did not match. This is a small increase from the previous round, where 117 reports for TAS codes and 112 reports for CFDA numbers were mismatched to the agency name fields. We also checked the data fields on the number and total amount of small subawards of less than $25,000 and identified 443 reports where the amount reported in both small subawards and small subawards to individuals was the same. This may be an indicator of improper keying of data or inaccurate placement of award data in a data field, both of which negatively affect data accuracy. The 443 reports represent a small increase from the 436 identified in the previous round. However, the number of reports where the same value was entered for the number of subawards and the total dollar value of subawards was reduced, from 110 in round three to 101 in round four.

Unusual or atypical data values alert the analyst to potential inaccuracies. We checked for unusual or atypical data values by identifying reports where the award amount was zero or less than $10. It is highly improbable that grants were awarded in such small amounts, so finding numbers like these suggests improper keying of data or a misinterpretation of the guidance for FederalReporting.gov, both of which negatively affect data quality.
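Edit checks like these can be expressed as simple filters over the downloadable prime recipient data. The sketch below, in Python, is only illustrative: the column names and example records are assumptions for demonstration and are not the actual Recovery.gov field names or values.

```python
import pandas as pd

# Illustrative prime recipient report records; fields and values are hypothetical.
reports = pd.DataFrame({
    "award_key":              ["A1", "A1", "B2", "C3"],
    "round":                  [3, 4, 3, 4],
    "award_amount":           [500_000, 500_000, 250_000, 7],
    "tas_agency":             ["DOE", "DOE", "HUD", "DOE"],  # agency implied by the TAS code
    "named_agency":           ["DOE", "DOE", "DOE", "DOE"],  # agency name field on the report
    "num_small_subawards":    [3, 3, 12_000, 0],
    "amount_small_subawards": [30_000, 30_000, 12_000, 0],
})

# Break in reporting: award keys present in round three but absent from round four.
round3 = set(reports.loc[reports["round"] == 3, "award_key"])
round4 = set(reports.loc[reports["round"] == 4, "award_key"])
breaks_in_reporting = round3 - round4                       # {'B2'}

# Unusual values: award amounts of zero or less than $10.
tiny_awards = reports[reports["award_amount"] < 10]         # the $7 award

# Mismatch between the agency implied by the TAS code and the agency name field.
agency_mismatch = reports[reports["tas_agency"] != reports["named_agency"]]

# The same value keyed into both the count and dollar fields for small subawards.
same_value = reports[(reports["num_small_subawards"] == reports["amount_small_subawards"])
                     & (reports["num_small_subawards"] > 0)]
```

The spending-consistency check discussed next, which flags final reports whose amounts received or expended fall outside an expected range of the award amount, can be added as one more filter of the same kind.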
We determined that the number of reports with an award amount of zero or less than $10 was reduced to 37 in this round out of the 74,249 prime recipient report records, down from 74 reports in round three.

Data logic and consistency inform the analyst about whether the data are believable, given program guidelines and objectives. To assess consistency between the award amount and the amounts reported as received or expended, we repeated our analyses of reports marked as final, identifying final reports where the amount received or expended by the recipient was less than 75 percent of the award amount or exceeded the award amount by 10 percent or more; such reports may indicate over- or underspending or misreporting. If the final report status is correct, this check can help agencies identify where award funds were not being spent, which may indicate project implementation problems. If more funds were spent than were awarded, it may indicate problems with project financial accounts or controls. Similar to round three, 3 percent of the round four reports marked as final showed an amount received or expended that was not within 75 percent of the award amount, and no reports exceeded the award amount by 10 percent or more.

Many state officials noted that the reporting process is starting to become routine. They highlighted the fact that guidance remained stable for this round of reporting and that the early decision to extend the reporting deadline from July 10 to July 14 contributed to the success of the reporting process. For example, officials in California stated that the fourth round of recipient reporting went much more smoothly than prior rounds; further, the extension of the deadline to July 14 allowed many of the state agencies to obtain more complete data through the end of June and report this to FederalReporting.gov. Similarly, officials in Colorado reported that the deadline extension to July 14 allowed three additional working days for recipients to review their submissions and make necessary corrections, which they felt improved the data quality.
The officials told us that the decentralized reporting process for the quarter ending June 30, 2010, went as smoothly as they had anticipated, and the quality of the data submitted by state agencies to FederalReporting.gov has improved over time. The governor’s office in Colorado is in the initial planning phase of transitioning to a new administration. Colorado state officials commented that the recipient reporting process has become a stable activity that should be able to move into a new administration with relatively little disruption. Officials in Georgia did not have any real concerns regarding a transition in administration, as the state now has recipient reporting systems and processes in place. California officials stated that steps have already been taken to ensure continuity in recipient reporting for the duration of the Recovery Act, while New Jersey officials noted that there were not many challenges related to recipient reporting amid a transition to a new administration in their state. Many states noted that their Web sites were designed to provide information about Recovery Act programs, funding, and eligibility to the people of their states. For example, officials in California commented that the state Web site was designed for use by the average Californian to keep citizens informed about the Recovery Act’s impact in California. Officials in Arizona noted that their Web site was designed to provide transparency to the public on how stimulus funds are being spent in the state. Several state Web sites were also used to provide potential applicants information on how to obtain grants, assistance, and contracts. For example, officials in the District of Columbia noted that their Web site provides information about Recovery Act funding received by the city and is a resource for people and organizations who are seeking opportunities to apply for grant funding, assistance, and potential contracts involving Recovery Act funds. A number of state officials reported that they are continuing to add content to their Web sites. For example, Ohio’s Recovery Act Web site recently added an interactive searchable map of funds awarded by location and enhanced information on the use of funds that are not covered by recipient reporting requirements. Officials in Texas said that enhancements in the past year have included new tracking reports to follow dollars, an interactive county map, and disbursement information. As another example, the Massachusetts Recovery and Reinvestment Office recently created a new Recovery Act Web site using an outside firm to help develop the most important features. An official from that office felt that the Recovery Act data collection and reporting effort will positively affect state government by improving policy and management discussions through the use of data. The EECBG program is administered within DOE and was funded for the first time with the passage of the Recovery Act. Because over 2,300 state, local, and tribal governments are eligible for direct formula EECBG grants and the grants are also awarded on a competitive basis, the program has many different types and sizes of recipients. For example, each state-level recipient must use at least 60 percent of its allocation to provide subgrants to local government units that are not eligible for direct grants, making the state the prime recipient while the local government unit is a subrecipient. Larger local government units receive grants directly from DOE, making them prime recipients. 
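Which entity files the quarterly recipient report follows from this allocation structure. The minimal Python sketch below illustrates the prime/subrecipient split and the 60 percent pass-through rule described above; the dollar amounts are purely illustrative, and the eligibility details are simplified.

```python
# Illustrative figures; actual allocations and eligibility rules are more detailed.
state_allocation = 10_000_000      # state-level formula grant (state is the prime recipient)
subgrants_to_locals = 6_500_000    # subgranted to local governments not eligible for direct grants

# States must use at least 60 percent of their allocation for such subgrants.
meets_passthrough = subgrants_to_locals >= 0.60 * state_allocation
print(meets_passthrough)           # True: 6.5 million is 65 percent of 10 million

# Larger local governments that receive EECBG grants directly from DOE report as
# prime recipients; subgrantees are subrecipients whose data the state prime
# recipient collects for its own quarterly report.
def reporting_role(receives_direct_doe_grant: bool) -> str:
    if receives_direct_doe_grant:
        return "prime recipient (files its own quarterly report)"
    return "subrecipient (data collected by the state prime recipient)"
```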
For the fourth round of reporting, 2,116 prime recipients of the program reported, as of July 30, 2010, that they created or retained about 2,265 FTEs funded by the Recovery Act. We interviewed 13 EECBG state-level and 19 local government recipients from our 17 selected jurisdictions about their FTE calculations for the fourth round of reporting. Given that the EECBG program is new, some of them had not yet reported. For example, District of Columbia officials from the District’s Department of the Environment told us that their work under the EECBG program had not started in time for them to report for the period that ended on June 30, 2010. California Energy Commission officials noted that they had only a few EECBG recipients for the last reporting round, but there were 50 or 60 recipients for this fourth round. Another recipient commented that reporting was fairly easy now because they were only reporting internal data they controlled, as compared to contractor data, but the official anticipated more complexity as the program expands. Officials from all of the state-level government units we interviewed that had FTEs to report said they followed OMB’s December 18, 2009, guidance on FTE calculations. Specifically, they collected the number of hours worked that were funded by the Recovery Act and divided that total by the number of hours in a full-time work schedule, with defined processes in place to collect the EECBG recipient reported data. For example, Arizona Department of Commerce officials said that their office is responsible for reporting EECBG recipient data to the state’s Office of Economic Recovery centralized reporting team. The Office of Economic Recovery works closely with the Arizona Department of Commerce to ensure that the reporting data are accurate. Additionally, Arizona officials said there is a review and approval process in place to check that the hours reported by the program’s subrecipients are accurate. Officials in the Colorado state energy office noted that it has been easier collecting hours worked from EECBG subrecipients because DOE requires reporting the hours worked and the same data is used to convert hours to FTEs for OMB reporting. However, officials from a few other states said that generating the most comprehensive and complete job numbers available from subrecipients is still a challenge. The same challenge surfaced in education and housing programs that we previously reviewed. A few local government EECBG recipients we interviewed used methods other than the OMB guidance to estimate their number of FTEs, possibly resulting in over or undercounting. For example, while DOE guidance explicitly states that the job-year estimate issued by the Council of Economic Advisers for job creation potential is not appropriate in determining direct jobs created or retained and should not be used for reporting to either OMB or DOE, a New Jersey recipient informed us that she planned to use this number to estimate her township’s FTEs. We informed the recipient that this was incorrect. In New York, a county official said that an EECBG contractor was conducting work under a Recovery Act contract, but the county did not report any FTEs in its most recent quarterly report because the official did not think the contractor had any documented jobs created or saved. Related to the problem with complete FTE numbers, several EECBG recipients reported confusion about including data from subrecipients on jobs, which OMB guidance states should be included. 
For example, officials from a county in California stated they received conflicting information about including jobs from subrecipients and vendors in their recipient reports. The officials said that the conflicting information emanated from different levels within DOE and between DOE’s and OMB’s guidance. The county officials believed they did not get a clear answer from DOE as to the difference between subrecipients and vendors. Deciding that it was better to over report jobs than to under report jobs, they included subrecipient and vendor hours that could be project-related in their recipient reports. Based on the recipients we interviewed, there was some evidence that larger EECBG direct grant recipients seemed to conduct more thorough recipient reporting data quality reviews than smaller direct grant recipients, possibly due to their enhanced administrative capacity. For example, a large EECBG grant recipient in Georgia reported that it improves data accuracy by prepopulating reports for subrecipients so they only need to include job numbers and vendor disbursements. In some instances, it also compares the subrecipient data to other documents, such as invoices and Davis-Bacon reports. However, according to a city official in Georgia, for their small grant, no specific data quality reviews are conducted other than a city official reviewing the hours worked. Colorado state officials said that communities that received under $2 million in direct formula grants have more difficulty administering EECBG grants and meeting the reporting requirements because they have limited staff resources. As an example, they mentioned a Colorado city, which received approximately $1 million in its EECBG grant. Because of limited resources, the city has the person who administers its housing programs also administer the EECBG grant. The Colorado state officials believed that in the case of smaller communities, it would work better if the state administered the EECBG grants and could report for the locality. According to DOE officials, their EECBG program project officers have as minimum responsibilities making sure the recipients that need to report are reporting, reviewing the quality of the recipient reporting data submitted, and ensuring that recipients correct the data if the project officers detect errors. DOE monitors grant recipients primarily through its project officers, and project officers work directly with recipients to provide guidance and evaluate performance. Project officers also gather and analyze information about project planning and implementation and outcomes to help ensure data quality and to ensure that statutory requirements are met. DOE stated that it has updated the checklist that project officers use to monitor recipients, and it is also developing guidance that includes best practices on how states should monitor their subrecipients. Such increased attention to monitoring recipients, including the quality of their data, could likely reduce the errors made by recipients. During the fourth round of recipient reporting, 58 prime recipients of DOE’s Weatherization Assistance Program submitted their quarterly data to FederalReporting.gov, and as of July 30, 2010, reported approximately 12,980 FTEs funded by the Recovery Act. We interviewed 8 state-level and 17 local weatherization assistance recipients from our 17 selected jurisdictions about their FTE calculations for the fourth round of reporting. 
As with the EECBG grants, we found that most of the weatherization assistance recipients we interviewed followed OMB's December 18, 2009, guidance regarding FTE calculations. A few recipients, however, did not estimate the number of FTEs correctly for this round of reporting, resulting in under- or overcounting. For example, in one case, subrecipients in Florida did not include the hours worked by contractors who performed weatherization work at individual homes, which they attributed to a lack of awareness of the requirement to report the hours. In California, a local weatherization assistance provider also expressed confusion regarding reporting subcontractor hours. In Pennsylvania, a state official indicated that some weatherization subrecipients experienced difficulties, at least in their initial reports, in submitting FTE information through a new Web-based reporting system that collects and calculates FTE information from the subrecipients. The Pennsylvania weatherization recipients report hours through this system to the Department of Community and Economic Development, but the system does not currently provide a method for subrecipients to certify the accuracy of what they report.

A few states had processes in place to help ensure weatherization assistance recipient reporting accuracy. For example, a District of Columbia official said the weatherization program staff and Recovery Act grant managers review submitted recipient reporting data from community-based organizations on a monthly basis before it is reported into the District's centralized reporting system. A New York official reported reviewing data submitted by a sample of subrecipients and comparing jobs data to contract and payment information in the program's database, while in Georgia, the state weatherization program officials reviewed each provider's submission and called each provider to discuss their numbers. This process resulted in some changes to vendor information and the number of jobs created or retained.

According to DOE officials, during the quarter ending June 30, 2010, 3,988 DOE recipients submitted reports, an increase of about 7 percent from the quarter ending March 31, 2010, and an increase of about 28 percent from the 2009 year-end reporting period. DOE stated that only eight recipients are considered nonreporters for this quarter, the majority of whom belong to a group with consistent challenges in reporting. According to a senior DOE official in the department's Recovery Operations Group, the department's data quality review process for fourth round recipient reports was enhanced by several factors. The DOE official noted that access to FederalReporting.gov during the reporting period helped DOE identify recipients who had not yet filed and assist those who had unsuccessfully filed, entered the wrong awarding agency code, or confused the reporting required by OMB with DOE's own system. In addition, he said that communicating the extended time frame for reporting before the reporting period actually began alleviated last-minute confusion or frustration on the part of recipients or reviewers, causing fewer recipients to wait until the last minute to file. Also, the official commented that the July 14 to July 20, 2010, late submission period gave recipients who experienced access issues with FederalReporting.gov additional time to submit their reports.
During this time period, DOE staff was also able to identify and assist with issues such as Central Contractor Registration numbers and getting new passwords for the last approximately 100 recipients filing reports. The DOE official noted that while the continuous correction provision has added to the workload of the DOE team, the period allows them greater time to review more recipient reported data than previously, identify potential errors, and work with agency reviewers and recipients to improve data quality. The senior official listed a number of frustrations DOE encountered during the fourth round of reporting, most of which are in areas where they felt FederalReporting.gov is technologically limited. For example, according to the official, FederalReporting.gov lacks some basic logic tests for matching award numbers, with most of the mismatches resulting from prefix differences. The lack of this matching capability creates extra work for the DOE staff, but the Board declined to run separate matching routines for each agency. In an April 2010 audit report of DOE’s efforts to ensure the accuracy and transparency of reported Recovery Act results, the DOE Inspector General’s (IG) office found that the department had taken a number of actions designed to do so and made two recommendations, which DOE had already started to address. For example, recipient reported data elements are compared to information maintained in the department’s financial systems. The IG recommended that DOE adjust the quality assurance process to include adding comparisons of other data elements, and a senior DOE official reported that for this round of reporting, the department has added several data elements to the original four that were reviewed centrally by the headquarters Recovery Operations Group. Now reviewers compare recipient reports in FederalReporting.gov against DOE systems to identify recipient information that falls outside expected results in seven different areas. According to DOE, these areas are key project markers being tracked by the public, the administration, Congress, and within the department to determine if the high-level goals of stimulating the economy and creating jobs outlined in the Recovery Act are being met. The DOE official said that increased attention has been placed on data quality within DOE systems as a result of this review process, which has created new communication channels and processes to identify issues and correct them. In line with the other IG recommendation, DOE developed a training program for officials responsible for reviewing recipient data submissions that includes detailed steps and procedures for officials to follow when reviewing recipient quarterly data for significant reporting errors and material omissions. The IG community is also performing data quality audits of federal agencies’ data quality review efforts for their recipient reports. In June, an IG-led Board review of the effectiveness of the agencies’ data quality review processes was completed. To identify material omissions and significant errors that were not identified by the reviewed agencies for the quarter ending December 31, 2009, the IGs performed reviews of the recipient reported data on FederalReporting.gov and attempted to compare that data with the data available in the agency-owned systems. In general, the IGs found that the agency systems were legacy systems that had been developed, designed, and implemented prior to the Recovery Act. 
As a result, data elements were not always consistent and at times were nonexistent, making matching the data difficult if not impossible. The final report provided three recommendations to the Board to pursue discussions with the appropriate government entities regarding improving the effectiveness of agency data quality reviews. These recommendations included establishing a uniform and consistent governmentwide award numbering system; making mandatory the suggested data logic checks identified in OMB guidance; and issuing guidance to better define material omissions and significant errors. Although consensus was not reached among the IGs regarding the award numbering system, there was general consensus regarding the logic checks and guidance recommendations. The next effort aimed at recipient reported data quality includes a Board review focusing on key data reporting elements and the factors contributing to errors in the recipient reports. The new procedures and tools developed to implement the Recovery Act are reshaping intergovernmental interactions and ways that governments collect, maintain, and report information. For example, the federal government built a huge data warehouse, FederalReporting.gov, which is populated by thousands of governments and other Recovery Act fund recipients, to ensure that the public receives as much information as possible on the implementation of the Recovery Act. Because such a wide variety of information is required and since some elements are being reported for the first time, OMB used a variety of methods to train federal agencies and recipients of Recovery Act funding on how to comply with their reporting responsibilities. OMB and federal agencies provided several types of clarifying information to recipients, as well as opportunities to interact and ask questions or receive help with the reporting process. These included weekly phone calls between OMB and groups representing the state budget and comptrollers offices, weekly calls between state reporting leads, webinars, a call center, and e-mail outreach. In addition, the Board recently reported that, along with the IG community, they have provided more than 2,000 training and outreach sessions to federal, state, and local government employees and to private sector individuals involved in Recovery Act implementation. According to many of the state officials we interviewed, the Recovery Act’s reporting requirements also promoted more interaction between state and federal agencies, state agencies, and within departments of these state agencies. For example, Ohio officials stated that the governor’s stimulus office had established contacts with OMB, administration officials, and other federal agency contacts through work on Recovery Act implementation and monitoring. Officials from Illinois noted that recipient reporting was one of the few efforts that brought their otherwise very independent state agencies together. Colorado state officials reported that the program and accounting staffs within each state agency are working together closely to help ensure the accuracy and quality of the Recovery Act data. A few state officials, however, commented that although communication with federal agencies and other entities has increased due to the recipient reporting requirements, the communication is aimed primarily at dispelling confusion and is not necessarily positive. 
For example, Texas state officials commented that the one change prompted by recipient reporting is the significant effort required to communicate reporting requirements to subrecipients and to collect, review, and submit the data.

Officials from a number of states expressed hope that the increase in intergovernmental interactions resulting from the Recovery Act reporting requirements will continue after the act's reporting requirements expire. For example, District of Columbia Recovery Act coordinators schedule a weekly teleconference for all District agencies receiving Recovery Act funds to provide status updates and have discussions relating to the Recovery Act. They intend to continue scheduling the meetings after the Recovery Act funds are expended in order to maintain communication on other grant-related topics. Michigan officials reported that state agencies are working with each other in a way they have not before. They said the Recovery Act has facilitated collaboration, noting that the short timeline for complying with Recovery Act requirements removed some barriers to interaction because state agencies had to work together to meet the requirements, which has yielded many positive effects. Michigan officials noted that they hope the changes will be long-standing. As another example, a representative from a state association described Recovery Act-related problem solving between the audit and technology communities. These interactions included discussions in which information flowed at the policy, technological, and political levels, a flow the association would like to see continued.

A recent report issued by the National Association of State Budget Officers echoed the responses from many of the state officials we interviewed. The report noted that in the months before recipient reporting began in October 2009 and in the months since, the Recovery Act helped to foster movement toward a more open and communicative atmosphere between the federal government and the states, as well as among individual states. It also provided important lessons currently being used in implementing the Federal Funding Accountability and Transparency Act of 2006 and the recent health care legislation. The report maintained that states have noted that increased transparency on government spending is a worthy goal that they support, as long as the federal government maintains a level of communication that allows for the effective and efficient implementation of any accountability requirements.

OMB has indicated that Single Audits play a key role in achieving its accountability objectives for Recovery Act funds, which include helping to ensure that Recovery Act funds are used for authorized purposes and that risks of fraud, waste, error, and abuse are mitigated. A Single Audit includes the auditor's schedule of findings and questioned costs, internal control and compliance deficiencies, and the auditee's corrective action plans, along with a summary of prior audit findings that includes planned and completed corrective actions. We identified significant concerns with the Single Audit process that (1) diminish the effectiveness of the Single Audit as an oversight accountability mechanism and (2) could allow risks associated with Recovery Act funds to persist. The Single Audit Act and related OMB Circular No.
A-133 Audits of States, Local Governments, and Non-Profit Organizations do not adequately address the risks associated with the current environment, in which billions of dollars in federal awards are being expended quickly through new and existing programs associated with the Recovery Act. In our prior bimonthly reports, we made several recommendations to improve OMB's oversight of Recovery Act-funded programs through the use of Single Audits. OMB has implemented some, but not all, of our recommendations.

OMB's Single Audit Internal Control Project (project) highlighted areas where significant improvements in the Single Audit process are needed. OMB encouraged auditors from states that volunteered to participate in the project to communicate internal control deficiencies over compliance for selected Recovery Act programs earlier than required under statute. The project has been a collaborative effort between volunteer states receiving Recovery Act funds, their auditors, and the federal government. One of the project's goals was to achieve more timely communication of internal control deficiencies for higher-risk Recovery Act programs so that corrective action can be taken more quickly. We assessed the results of the project and found that it met several of its objectives and that the project was helpful in identifying critical areas where further OMB actions are needed to improve the Single Audit process over Recovery Act funding. The project also required that auditee management provide, 2 months earlier than required under statute, plans for correcting internal control deficiencies to the cognizant agency for audit for immediate distribution to the appropriate federal awarding agency. The federal awarding agency was then to provide, in a written management decision, any concerns relating to management's planned corrective actions. We found, however, that (1) most federal awarding agencies did not issue their management decisions about the corrective actions within the project's required time frames, (2) the current reporting time frames for the Single Audit process are not conducive to the timely identification and correction of internal control deficiencies, and (3) OMB did not issue its Single Audit guidance in a timely manner, specifically its guidance for 2010 audits and for a subsequent project. In our May 2010 bimonthly report, we recommended that OMB issue its Single Audit guidance, including guidance for future projects, in a timely manner so that auditors can efficiently plan their audit work, and OMB concurred with our recommendation. According to several state auditors who participated in the project, OMB's issuance of its guidance in an untimely manner adversely affects the auditors' ability to plan and conduct their Single Audits. They added that untimely project guidance would also hinder their ability to participate in future OMB projects intended to provide earlier communication and correction of internal control deficiencies identified in Recovery Act programs. We recommend that the Director of OMB strengthen the Single Audit and federal follow-up as oversight accountability mechanisms by (1) shortening the time frames required for issuing management decisions by federal awarding agencies to grant recipients and (2) issuing the OMB Circular No. A-133 Compliance Supplement no later than March 31 of each year.

OMB has indicated that Single Audits would serve as an important oversight accountability mechanism for Recovery Act programs, which have considerable risks.
The most significant of these risks are associated with (1) new programs that may not have the internal controls and accounting systems in place to help ensure that funds are distributed and used in accordance with program regulations and objectives; (2) Recovery Act funding increases for existing programs that may exceed the capacity of existing internal controls and accounting systems; (3) more extensive accountability and transparency requirements for Recovery Act funds that require the implementation of new controls and procedures; and (4) the need to spend funds quickly, which further increases risk. We reported in our previous bimonthly reports that we were concerned that, as federal funding of Recovery Act programs accelerates, the Single Audit process may not provide the timely accountability and focus needed to assist recipients in making necessary adjustments to internal controls to provide assurances that the money is being spent as effectively as possible to meet program objectives. We also reported that the Single Audit reporting deadline is too late to provide audit results in time for the audited entity to take action on internal control deficiencies noted in Recovery Act programs. In those prior reports, we made several recommendations to OMB for improving the Single Audit process to address the increased risks by helping ensure that Recovery Act funds are not used for unauthorized purposes and that risks of fraud, waste, error, and abuse are mitigated. OMB has implemented some, but not all, of these recommendations.

In response to one of our recommendations, in October 2009 OMB implemented a project to encourage earlier reporting and timely correction of internal control deficiencies identified in Single Audits that included Recovery Act programs. OMB's guidance for the project stated that this earlier communication of internal control deficiencies over compliance would allow participating auditees to correct internal control deficiencies related to Recovery Act funds in a timely manner, thereby reducing potential future unallowable costs. We assessed the results of the project and found that it met its original objectives of (1) having more than 10 volunteer states participate in the project, (2) having the participating auditors issue interim internal control reports for the selected programs at least 3 months earlier, and (3) having auditee management issue corrective action plans to resolve internal control deficiencies at least 2 months earlier than required by OMB Circular No. A-133. The project also increased the auditors' awareness of some of the risks associated with Recovery Act funds and, in some cases, increased the communication and interaction between the auditors, program officials, and the cognizant agency for audit concerning internal control deficiencies related to Recovery Act funds. For example, many of the auditors who responded to our survey stated that the project increased awareness of internal control deficiencies and focused attention on the need for federal agencies to be more involved in pursuing corrective actions so that more timely corrective action plans are developed for internal control deficiencies related to programs receiving Recovery Act funding. The project also called for federal awarding agencies to actively work with auditees to resolve high-risk findings in the most expeditious manner.
One of the project’s goals was to achieve more timely communication of internal control deficiencies for higher-risk Recovery Act programs so that corrective action could be taken more quickly. The implementation of corrective actions of internal control deficiencies will help to ensure that Recovery Act funds are used as intended. The project’s guidelines called for the federal awarding agencies to complete two steps by April 30, 2010: (1) perform a risk assessment of the internal control deficiency and identify those with the greatest risk to Recovery Act funding, and (2) identify corrective actions taken or planned by the auditee. OMB guidance called for this information to be included in a management decision that the federal agency was to have issued to the auditee’s management, the auditor, and the cognizant agency for audit. As of April 30, 2010, most federal awarding agencies had not provided their management decisions on the states’ corrective action plans as required under the project’s guidelines. Several of the state auditors and state program officials we surveyed emphasized the need for more timely communication from the federal awarding agencies, which is important for state agencies to gain a clear understanding of needed corrective actions. It is also important for auditors so that they can monitor progress towards addressing Single Audit results. OMB Circular No. A-133 requires management decisions to be issued by federal awarding agencies within 6 months of receipt of the audit report. However, the project’s guidelines required the federal awarding agencies to issue a management decision as promptly as possible and not later than 90 days after the date that the corrective action plan was received by the cognizant agency for audit. The internal control reports for the project identified internal control deficiencies in at least 24 Recovery Act programs awarded by seven federal agencies by December 31, 2009. Moreover, under the project’s guidelines, most corrective action plans were completed by January 31, 2010, 2 months earlier than the time frames under OMB Circular No. A-133 and were concurrently provided to the federal awarding agencies. Despite the federal awarding agencies having the internal control reports and corrective action plans in January 2010, only three of the seven federal awarding agencies had submitted some of the relevant management decisions on corrective actions by May 14, 2010. We asked OMB officials to provide us with an update of the number of management decisions that had been submitted by the federal awarding agencies through August 5, 2010. OMB provided a summarized list of the total number of management decisions by agency where the auditee and the federal agency had agreed on action to be taken to address the report findings but had not traced these totals to the detailed documentation to verify the summary information. It is important to note that an awarding agency’s issuance of a management decision does not mean that internal control deficiencies have been corrected; rather, the management decision reflects the agency’s approval of the auditee’s proposed corrective action. Although some corrective actions can be implemented quickly, others can take months or years to implement. The issuance of timely management decisions by federal agencies is important because it can affect the timeliness of the auditees’ implementation of corrective actions to address internal control deficiencies concerning Recovery Act programs. 
For example, according to an HHS Office of Inspector General official, auditees sometimes wait until they receive a management decision before taking corrective action on internal control deficiencies. On March 22, 2010, OMB issued memorandum M-10-14, Updated Guidance on the American Recovery and Reinvestment Act, which, among other things, instructs federal agencies to take immediate action as appropriate to review and act on Single Audit findings. However, as indicated by the project's results, further efforts by OMB are needed to help ensure that federal agencies provide their management decisions on the corrective action plans in a timely manner.

Under the current time frames for identifying and correcting audit findings provided by the Single Audit Act and OMB Circular No. A-133, it could take years to correct significant deficiencies and material weaknesses that expose Recovery Act funds to misuse or fraud. For example, in accordance with current requirements, a material weakness identified by the auditor for an entity with a June 30, 2009, fiscal year-end is to be reported in the Single Audit report to be issued by March 31, 2010, along with the auditee's corrective action plan. The federal awarding agency would then have 6 months from receipt of the Single Audit report, or until September 30, 2010, to communicate a written management decision to the auditee. As a result, it may take 15 months or more after the end of the fiscal year in which the audit finding was initially identified before any corrective work begins. The auditee's management reports its progress in taking corrective action in the schedule of prior audit findings, where the status of each finding is reported as either corrected (closed) or not corrected (open). The auditor then reviews this schedule, which is included in the next Single Audit reporting package. If the awarding agency delayed issuing a management decision to the auditee, it is possible that corrective action on the finding was also delayed, and, as a result, the finding may have remained open. In addition, several state auditors have expressed frustration regarding Single Audit findings that remain open years after they were initially identified, without the auditee or the federal awarding agency taking action. The lack of attention to ensuring prompt corrective action impairs the federal government's ability to ensure that unallowable costs have been repaid or that internal control deficiencies have been corrected. Shortening the time frames required for issuing management decisions by federal agencies, and having those agencies monitor the auditee's timely implementation of corrective actions, will help to ensure that appropriate audit follow-up and resolution are achieved. Figure 31 illustrates an example of the Single Audit reporting time frames.

As we reported in prior Recovery Act reports, the problem that Single Audit reports are not due until 9 months after the fiscal year-end was exacerbated by extensions to the deadline to file Single Audit reports. The federal awarding agencies, consistent with OMB guidance, had routinely granted such extensions. In February 2010, HHS, the cognizant agency for audit, adopted a policy of no longer approving requests for such extensions. Further, in March 2010, OMB issued a memorandum, in response to our recommendation, that directed federal agencies not to grant any requests to extend the Single Audit reporting deadlines for fiscal years 2009 through 2011.
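The cumulative delay in the 15-month example above follows directly from the statutory time frames; a minimal Python sketch using the dates given in the example, where the helper simply counts whole calendar months.

```python
from datetime import date

fiscal_year_end = date(2009, 6, 30)

# Single Audit reporting package is due nine months after the end of the audit period.
single_audit_due = date(2010, 3, 31)

# The federal awarding agency then has six months from receipt of the report to
# issue a written management decision; receipt on the due date is assumed here.
management_decision_due = date(2010, 9, 30)

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

print(months_between(fiscal_year_end, single_audit_due))         # 9
print(months_between(fiscal_year_end, management_decision_due))  # 15
```

Any extension of the reporting deadline pushes both dates, and the start of corrective action, even later.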
Despite this March 2010 guidance, we found that the Federal Audit Clearinghouse (FAC) did not receive Single Audit reporting packages for fiscal years ending in 2009 from 5 of the 16 selected states and the District of Columbia within the 9-month time frame provided by statute. Single Audit reporting packages include a schedule of internal control deficiencies and the auditee's plans for correcting them. Thus, when submissions of reporting packages are late, the auditees' efforts to correct internal control deficiencies may be delayed. According to OMB guidance, late submissions of the Single Audit to FAC in either of the 2 prior fiscal years would prevent the auditee from attaining low-risk auditee status, which would likely result in an increase in the scope of audit coverage to address the additional risk for the subsequent year's audit of the auditee. While the focus of our bimonthly reports has been on Recovery Act funds, the Single Audit generally pertains to federal expenditures awarded from the Recovery Act as well as from other federal sources; thus, internal control deficiencies identified in a program expending Recovery Act funds would generally affect all other sources of federal funds for that program as well. As of August 5, 2010, five of the states participating in the project had not submitted their completed fiscal year 2009 Single Audit reports to FAC by the March 31 due date; one of these states had still not submitted its fiscal year 2009 Single Audit report as of August 24, 2010. While these states were able to meet the project's reporting deadline, they were not able to meet the deadline to submit their Single Audit reporting packages.

We identified other concerns through our review of the project that point to the need for OMB to issue all Single Audit guidance in a more timely manner. Specifically, 12 of the 14 participating state auditors who responded to our survey stated that guidance for any future OMB projects should be more timely. In addition, more than half of the auditors who responded to our survey indicated that they had concerns with timeliness issues relating to the release of OMB's 2009 Circular No. A-133 Compliance Supplement. OMB issued the Compliance Supplement in two stages, the initial one in May 2009 and an addendum in August 2009. This guidance was issued after the Single Audits for entities with a June 30, 2009, fiscal year-end were already under way. Most of the participating auditors told us that they needed the information as early as February 2009, or at least by April 2009, to effectively plan their work. Some of these state auditors stated that the OMB guidance was issued too late, causing inefficiencies and disruptions in the planning of audit procedures. OMB officials told us that they planned to issue the 2010 Compliance Supplement in late May 2010. In our May 2010 bimonthly report, we recommended that OMB issue its Single Audit guidance, including guidance for future projects, in a timely manner so that auditors can efficiently plan their audit work. OMB concurred with our recommendation. However, OMB issued the 2010 Compliance Supplement on July 29, 2010—again after the audit planning and work for Single Audits for entities with a June 30, 2010, fiscal year-end was already under way.
OMB officials stated that the delay in issuing the 2010 Compliance Supplement was primarily due to the additional attention needed to include more Recovery Act programs in the Compliance Supplement, along with information regarding the audit procedures for reviewing Recovery Act reporting requirements. OMB had provided the American Institute of Certified Public Accountants (AICPA) Governmental Audit Quality Center and the National Association of State Auditors, Comptrollers and Treasurers (NASACT) with draft Single Audit guidance in May 2010. AICPA and NASACT posted the draft to their Web sites for auditors to use for planning their work. However, some auditors we spoke with stated that because the guidance was not in final form, it still affected their ability to efficiently plan and conduct their work. We also reported that OMB initiated the first project in October 2009, well after most of the audit work was already under way, resulting in some of the project's benefits not being realized. The project's guidance called for the auditors to complete their internal control work as of November 30, 2009, and to report internal control deficiencies by December 31, 2009. The project's guidelines included incentives to provide the participating auditors with some relief in their workload to encourage them to participate in the project. Under the project's guidelines, auditors were not required to perform risk assessments of smaller federal programs that they would otherwise need to complete. However, since most of the auditors had already completed the risk assessments by the time the project started, most of the participating auditors stated that they did not experience any audit relief. OMB has stated that it plans to have a second phase of the Single Audit Internal Control Project for fiscal year 2010. However, as of August 5, 2010, OMB had not yet defined the parameters of the project or issued guidance for potential volunteer participants. OMB has not provided detailed guidance that would explain incentives for volunteering to participate in the project, the types of entities that will be permitted to participate, the scope of the project (including the specific programs that participants could select from), the number of participants it is seeking, or the time frames for beginning and ending the project. We continue to report concerns about the Single Audit process because it does not provide a means for the timely identification and correction of internal control deficiencies or other findings relating to Recovery Act programs. This limits the effectiveness of the Single Audit process as an oversight accountability mechanism and exposes Recovery Act funds to increased risk of misuse or fraud. We recommend that the Director of OMB strengthen the Single Audit and federal follow-up as oversight accountability mechanisms by (1) shortening the time frames required for issuing management decisions by federal awarding agencies to grant recipients, and (2) issuing the OMB Circular No. A-133 Compliance Supplement no later than March 31 of each year. As of August 11, 2010, we have received 224 allegations of Recovery Act wrongdoing from the public. We have closed 137 of these cases because the allegations were nonspecific or lacked information about fraud, waste, or abuse. Another 44 were investigated further and closed by us or the appropriate agency inspector general (IG) when no violations were found. Of the allegations that remain open, 16 are being handled by us and 27 by an IG.
We generally refer allegations to an IG when that office is already pursuing the same or a similar complaint. We periodically contact the IGs to determine the status of our referrals. We will continue to evaluate all Recovery Act allegations received through FraudNet and provide updates in future reports. The Recovery Accountability and Transparency Board (the Board) continues to take steps to identify and report on potential areas at risk for fraud, waste, and mismanagement of Recovery Act funds. The Board recently published the third report in its series of reviews regarding recipient reporting data quality. In addition, the Board continues to augment its various initiatives for detecting potential instances of risk in Recovery Act contracting and to turn over information regarding such instances to the appropriate inspectors general for further review. The Board also continues to organize coordinated reviews performed by its inspectors general working group aimed at further assessments of the management and oversight of Recovery Act spending. The Board is also planning to expand on some of its initiatives to strengthen future oversight as implementation of the Recovery Act continues. In June 2010, the Board reported on the third of three phases of its inspectors general working group's review of actions taken by agencies to improve the quality of data that recipients of Recovery Act funds are providing for posting to the public Web site. Working in conjunction with the Board, six inspectors general reported that their agencies had issued policies and general procedures that follow OMB's guidance; however, the implementation of their respective guidance differed significantly among the agencies and their subunits. We discuss the results of the inspectors general work in more detail under the recipient reporting section of this report. The Board continues to use a variety of initiatives to monitor Recovery Act spending in an effort to identify potential areas at risk for fraud, waste, and abuse. The Board's current oversight initiatives include maintaining a Fraud Hotline, which receives complaints of potential fraud, waste, and abuse from the public and refers potential cases to the respective inspector general for further review, and performing data analyses on publicly available information about Recovery Act recipients. The Board continues to modify its analytical efforts to provide insights on potential risk areas for the oversight community. The Board increased its staff, added more software, and obtained new public data sources to provide for additional analyses. As of July 31, 2010, the Board had received 2,398 Fraud Hotline complaints. As a result of these complaints as well as the Board's data analyses, the Board had referred 184 leads to various inspectors general as of July 31, 2010. Over half of these leads involved the potential misappropriation of funds or nonperformance of services. The Board continues to coordinate audits carried out by the inspectors general working group and to monitor the independent efforts of the inspectors general related to the Recovery Act. The inspectors general working group has one audit under way reviewing the accuracy of selected fields of recipient reporting data. In addition, the working group is beginning a review of potential fraud indicators for grants programs in September 2010.
The Board continues to review monthly reports submitted by the inspectors general on the number and status of Recovery Act-related audits and investigations each has initiated. As of July 31, 2010, the inspectors general had received 3,806 complaints related to the Recovery Act and reported that they had 424 active investigations; 141 investigations closed without action; and 474 audits, inspections, evaluations, or reviews in process. The inspectors general also reported they have completed 689 work products on Recovery Act-related issues since the act was passed—534 of which are published on Recovery.gov and 155 of which are not publicly available because they contain proprietary or sensitive information. In addition, the inspectors general, in conjunction with the Board, reported that they have conducted 2,231 training and outreach sessions related to Recovery Act issues. According to Board representatives, an outcome of the Board's work has been to shift the focus of the inspectors general community to the prevention of fraud, rather than just its identification and correction. As discussed earlier, the Board's data analysis capabilities provide the inspectors general with leads regarding potential risks associated with Recovery Act funds and recipients. In addition, over half of the training sessions provided have focused on preventing fraudulent use of Recovery Act funds. According to Board representatives, the Board's work has also resulted in changes in the data collected to provide better visibility over the use of federal funds. For example, a data field was added in FedBizOpps for recording a company's DUNS number; a DUNS number is an important data element in tracking companies' transactions with the government, and including this information is expected to enhance data matching capabilities. Board representatives explained that the Board and its predictive analysis capabilities are considered a template for changing how the government does business. In the short term, the Board would like to develop predictive analysis tools for federal agencies' use, such as a list of databases to search and steps to be taken to identify risks. In addition, the Board is considering plans for the transition of its analytic capabilities elsewhere in the federal government when the Board's authorization expires at the end of fiscal year 2013. In February 2009, the Recovery Act provided for a Recovery Independent Advisory Panel to make recommendations to the Board on ways to prevent fraud, waste, and abuse relating to Recovery Act funds. Four members of the Advisory Panel were appointed by the President in March 2010. At the panel's first public meeting, held in Cambridge, Massachusetts, in August 2010, state and City of Boston officials presented information and addressed the panel's questions about their actions to prevent fraud, waste, and abuse. In addition, they discussed the content and structure of the state Recovery Act Web site, as well as continuity among state and local Web sites with the federal government's Recovery.gov Web site. The panel also held a closed session to discuss techniques to investigate fraud. The panel plans to hold a series of public meetings across the United States and has tentatively scheduled its next public meeting for November 2010. State and local oversight and audit entities across the 16 selected states and the District continue to actively audit Recovery Act funds.
As mentioned in our May 2010 report, many of these audits are conducted through the state Single Audit process—an accountability mechanism for overseeing federal funds at the state and local levels. These audits spanned many programs and primarily focused on programs that have been assessed as having higher risk of noncompliance with federal program requirements, such as weatherization, transportation, and Medicaid. However, according to officials from several of our selected states and the District, budget and staffing constraints have limited the number of Recovery Act audit reviews they could perform. Audit report findings have covered various areas, including financial management and compliance with laws or regulations. In some cases, the audits of Recovery Act funds identified and reported audit findings that were subsequently addressed by audited entities. In other cases, audits of Recovery Act funds did not identify or report findings. Examples of audit findings relating to financial management practices identified in audits of Recovery Act funds include the following: In California, the State Auditor found that cash management practices were not in compliance with federal rules in the state's Weatherization Assistance Program. The Illinois Office of Internal Audit reported on the failure of state agencies to minimize the time between drawdowns of federal funds and expenditure of those funds and the failure to charge hours worked to the proper federal grant at one agency. In Iowa, auditors found that a local school district possibly commingled Recovery Act funds with other school district revenue, which led to the replacement of the district's accounting supervisor. In New Jersey, an audit of the Weatherization Assistance Program found inadequate policies and controls in place to ensure that federal financial reporting was properly completed, supported by adequate documentation, and reviewed by a supervisor prior to submission. In Ohio, the Auditor of the State identified deficiencies related to unallowable expenditures and inadequate cash management in some programs funded through the Recovery Act. Examples of audit findings relating to program compliance with laws and regulations that were identified in various audits of Recovery Act funds include the following: In Arizona, Single Audits found that the Arizona Department of Education failed to have current central contractor registrations on file prior to awarding Recovery Act ESEA Title I grants to LEAs, but the department has developed a corrective action plan to address these findings. In Colorado, a local government audit revealed that some Federal Transit Formula Grant funds had been spent without a check on whether the vendor had been suspended or debarred from participating in federal programs. In Florida, state auditors found that program officials were unable to document that certain individuals were eligible for Medicaid benefits as required by law, and that their procedures did not ensure that all health care providers receiving Medicaid payments had provider agreements in effect. In Massachusetts, state auditors found that the actual number of youths being reported as participating in the state's WIA summer jobs program was overstated, that the calculation of job numbers needed to be monitored more closely, and that compliance with participation levels needed to be reviewed.
In Michigan, the Single Audit of the Medicaid program found that the Michigan Department of Community Health did not fully monitor its Medicaid payments to ensure that such claims were paid promptly. Failure to comply with the "prompt pay" requirements could result in Michigan not being eligible to receive increased FMAP for certain claims. In Mississippi, auditors found many instances of noncompliance with recipient reporting requirements. In these cases, state agencies were not providing clear and consistent guidance to subrecipients. In North Carolina, the state auditor's office found that a state department did not consistently perform effective monitoring to ensure that subrecipients of Recovery Act funds were in compliance with Davis-Bacon wage-rate requirements. In Texas, the Single Audit for fiscal year 2009 identified program weaknesses in determining eligibility in Medicaid, Temporary Assistance for Needy Families, and the Supplemental Nutrition Assistance Program. In addition to audits of Recovery Act funds, several states took steps to strengthen their accountability efforts to help ensure appropriate uses of Recovery Act funds by implementing new work groups or entities to help manage and oversee Recovery Act-funded programs. In addition, these new entities have helped state and local governments address the new requirements associated with Recovery Act funding, coordinate efforts among the accountability community, and inform the public. Other activities performed by these entities included maintaining a Recovery Act Web site, providing technical assistance, tracking the use of funds, issuing advisories, conducting training on internal controls, and providing assistance with recipient reporting. Examples of such activities are as follows: In California, the Recovery Task Force meets regularly with state agencies receiving Recovery Act funds, maintains a Recovery Act Web site as a central repository of information, and has issued more than 30 Recovery Act bulletins providing instructions and guidelines to state agencies. Also, the Recovery Act Inspector General published an advisory that included steps to ensure that contractors perform in accordance with contract terms and to reduce the potential for fraud. In Georgia, the State Accounting Office launched an internal control initiative to enhance accountability for Recovery Act funds that began in June 2010 and provided internal control training to 28 state agencies. More specifically, these agencies completed a self-assessment tool covering internal controls in areas such as financial reporting, revenue, and Recovery Act funds. In Massachusetts, the City of Boston's contracted auditor is developing a computerized worksheet in which Recovery Act fund recipients will submit their reporting data in a standardized format that will be centrally stored at the City Auditor's office. According to city officials, this will make managing subrecipients and the reporting process easier and more efficient. In New Jersey, the Recovery Accountability Task Force is responsible for monitoring the distribution of Recovery Act funds in the state and promoting the effective and efficient use of those funds. The task force discusses issues related to the oversight of Recovery Act funds and receives updates from state agencies to ensure funds are disbursed with the goals of the Recovery Act in mind.
In New York, the Governor created a Stimulus Oversight Panel, which meets biweekly to examine the use of Recovery Act funds by each of the 22 state agencies designated to receive them. In addition to other responsibilities, individual panel members also conduct reviews and audits in their areas of expertise. In North Carolina, the Office of Economic Recovery and Investment (OERI) tracks, monitors, and reports on Recovery Act funds and works with state agencies on corrective action plans to help resolve Recovery Act-related findings. OERI also conducted several technical assistance seminars around the state and provides resources such as webinars and checklists on its Web site to help agencies comply with Recovery Act requirements. In Pennsylvania, the Governor appointed the Chief Accountability Officer to help oversee reporting and transparency for Recovery Act activities of state agencies. For the quarter ending June 30, 2010, the office filed 371 recipient reports on behalf of state agencies and posted them to the state's Recovery Act Web site. In Texas, the Governor's Stimulus Working Group, which includes representatives from state agencies receiving significant amounts of Recovery Act funding, is a vehicle for sharing information. This group has been used to inform state agencies about recipient reporting requirements, help focus auditing and monitoring efforts, and address program concerns. During our Recovery Act reviews, we tracked and observed 208 contracts awarded by state and local governments. While this is a small number of contracts, our observations indicate that state and local governments receiving Recovery Act funds reported that they are generally using competition and fixed-price contracts, and are not facing major issues with cost, schedule, or contractor performance. Between July 2009 and March 2010, we selected and subsequently analyzed contracts from a variety of programs and held discussions with state and local officials to gain an understanding of the extent to which they believe contracts were awarded competitively and used pricing structures, particularly fixed-price contracts, that reduce the government's financial risk. The use of competition is generally considered a fundamental tenet of public procurement. In addition, fixed-price contracting generally places the maximum amount of risk on the contractor because the government pays a fixed price even if actual costs of the product or service exceed the contract price. Of the 208 contracts we reviewed, 86 percent were reported by state and local officials as being competed and 79 percent were reported as fixed-price contracts. Further, in five states all of the contracts we reviewed were reported as being competed, and in four states all of the contracts we reviewed were reported as being fixed-price contracts. Almost all contracts for highway projects were reported as competed, and all public housing contracts as fixed-price. Table 12 shows the number of contracts reported by officials as being competed and awarded with fixed prices in the various programs we are monitoring across the selected states. State and local officials cited various reasons why some contracts were awarded noncompetitively. For instance, officials reported that, for several contracts, the contractors provided a unique service and were the only source available. In another instance, officials said that the state was granted a waiver of some competition requirements in order to, in part, expedite the delivery of goods and services.
Officials also gave various reasons why some contracts were not awarded as fixed-price contracts. For instance, officials reported that, for many contracts, fixed-price contracts were not used because use of another contract type was the agency’s standard practice for a particular type of project. In other cases, officials stated that other contract types enabled the program to award a contract and begin performance faster than a fixed-price contract would. As part of our overall body of work on the Recovery Act, in July 2010 we reported on the level of insight and oversight regarding the use of noncompetitive Recovery Act contracts in 5 of the 16 states covered in our bimonthly reviews: California, Colorado, Florida, New York, and Texas. We found that the five states varied on the type and amount of data routinely collected on noncompetitive Recovery Act contracts and that the states do not routinely provide state-level oversight of contracts awarded at the local level, where a portion of Recovery Act contracting occurs. According to state officials, they were generally following the contracting policies and practices for awarding and overseeing contracts that were in place prior to passage of the Recovery Act. Officials from the selected states’ audit organizations said that if they were to address Recovery Act contracting issues, it could be done through the annual Single Audit or other reviews of programs that involve Recovery Act funds. Between March and June 2010, we followed up with state and local officials to understand whether the contracts we had selected were achieving the key acquisition outcomes of delivering on cost, on schedule, and with satisfactory performance. State and local officials reported that most of the Recovery Act contracts we reviewed are meeting these goals. According to state and local officials, of the 208 contracts we reviewed, 51 percent had no change to overall contract cost, 12 percent had decreased costs, and 1 percent had changes to cost and prices but remained within the contracts’ total cost permitted. Approximately one-third of the contracts reported cost increases. In addition, officials reported that 52 percent of contracts had no change to schedule and 11 percent delivered early. The remaining 36 percent of contracts reported schedule delays. Thirty-six percent—or 74 contracts—had no changes to either cost or schedule. Table 13 shows the number of contracts reported by officials as having cost or schedule changes by the various programs we are monitoring across the selected states. For three-quarters of the 70 contracts where price increased, state and local officials attributed these increases to conditions that were not anticipated at the time of contract award. For example, officials reported that total costs increased by over $300,000 for one public housing project because materials containing asbestos were found on boilers, which had to be taken apart to remove asbestos before they could be demolished, and several boilers intended to be repaired or reused needed to be replaced instead. The most common factors state and local officials pointed to for schedule delays were circumstances beyond the control of the contractor and conditions not anticipated at the time of contract award. For instance, in several cases, officials noted that severe weather caused schedule delays. 
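For readers who prefer counts to percentages, the brief sketch below converts the shares reported above into approximate numbers of contracts out of the 208 reviewed. The underlying percentages are those reported by state and local officials; the category labels, the data structure, and the tallying itself are ours and are purely illustrative.

```python
# Back-of-the-envelope translation of reported shares into approximate
# contract counts. Small gaps between these rounded counts and figures
# cited in the text (for example, the 70 contracts with cost increases)
# reflect rounding in the reported percentages.
TOTAL_CONTRACTS = 208

reported_shares = {
    "competed":                        0.86,
    "fixed-price":                     0.79,
    "no change in contract cost":      0.51,
    "decreased cost":                  0.12,
    "changes within permitted total":  0.01,
    "no change in schedule":           0.52,
    "delivered early":                 0.11,
    "schedule delays":                 0.36,
}

for category, share in reported_shares.items():
    approx = round(share * TOTAL_CONTRACTS)
    print(f"{category:32s} ~{approx:3d} of {TOTAL_CONTRACTS}")
```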
According to state and local officials, 91 percent of the contracts we reviewed had no contractor work performance issues that adversely affected the work being performed or deliverables being provided. Officials reported issues with contractor performance for only 14 of the 208 contracts we reviewed. Of those, seven were highway construction projects at the state and local level, and three were public housing projects. While the nature of these issues varied, in most cases officials reported that the contractor was able to satisfactorily continue or complete the project. In some of these cases, the contractor was assessed fees to compensate for its performance issues. Officials reported only two instances where the contractor ceased to perform the remaining work, which will now be performed by another contractor or the agency's staff. For this report, we continue our focus on the use of Recovery Act funds at the local government level while updating our review of states' uses of Recovery Act funds in proposed and enacted budgets. As shown in figure 32, we visited 24 local governments in our 16 selected states and the District to collect information regarding their use of Recovery Act funds. Similar to the approach taken for our May 2010 report, we identified localities representing a range of types of governments (cities and counties), population sizes, and economic conditions (unemployment rates greater and less than the state's overall unemployment rate). We balanced these criteria with other considerations, including other scheduled Recovery Act work, local contacts established during prior reviews, and the geographic proximity of the local government entities. The 24 local governments whose officials we interviewed ranged in population from 258 in Steward, Illinois, to approximately 2.5 million in Miami-Dade County, Florida. Unemployment rates in our selected localities ranged from 6.7 percent in Round Rock, Texas, to 13.4 percent in Redding, California. Local officials reported their governments' continued use of Recovery Act funds in a range of program areas such as public safety (Community Oriented Policing Services (COPS) and Edward Byrne Memorial Justice Assistance Grants (JAG)), energy (EECBG), housing (Homelessness Prevention and Rapid Re-housing Program (HPRP) and Community Development Block Grant (CDBG)), and transportation and transit. Other Recovery Act funds received by the selected localities include grants for lead mitigation, wastewater treatment, and airport improvement. Some examples of the uses of Recovery Act funds appear in table 14. Several local government officials said that Recovery Act funds were used for projects including the purchase of law enforcement and transit equipment and investment in public works and infrastructure projects such as road and sewer improvements. For example, Redding, California, officials said Federal Transit Administration Recovery Act funds helped the Redding Area Bus Authority (RABA) accelerate the purchase of three new buses and nine new paratransit vans, thus allowing RABA to avoid implementing service cuts and fare increases. Officials in Farmington Hills, Michigan, reported using Recovery Act funds from the JAG program to purchase a range of public safety equipment, such as radio equipment, digital camcorders, undercover transmitters, and a Digital Eyewitness Media Manager Server System that otherwise would not have been purchased.
Officials in Athens-Clarke County, Georgia, reported using Recovery Act funds from the Clean Water State Revolving Fund program to help construct four sewer interceptors. Columbus, Georgia, officials said they were using Recovery Act funds to enhance the implementation of transportation projects, including the construction of a bike/pedestrian trail and streetscape improvements. The use of Recovery Act funds also helped several local governments continue to provide local services. Philadelphia, Pennsylvania, officials said that the use of Recovery Act funds from the COPS Hiring Recovery Program (CHRP) grant helped support community policing and crime prevention efforts by allowing the city to hire 50 additional police officers. Similarly, officials in Colorado Springs, Colorado, reported using Recovery Act funds from the JAG program to fund the salaries of community service officers. Officials in Austin, Texas, reported using Recovery Act funds from the Edward Byrne Memorial Justice Assistance Grant program to fund 12 new emergency dispatchers. With regard to local services provided, officials in Weld County, Colorado, and Boston, Massachusetts, reported using Recovery Act funds from the Congregate Nutrition Services program to provide meal deliveries to low-income senior citizens. Officials in Berks County, Pennsylvania, said the county would not have been able to provide rent and utility assistance to persons at risk of becoming homeless without Recovery Act funds from the HPRP grant. In most localities we visited, government officials reported working in partnership with other local entities, such as nonprofit organizations, the private sector, transit authorities, and other local jurisdictions, to apply for or administer Recovery Act funds. For example, officials in Round Rock, Texas, said the city partnered with the Capital Area Metropolitan Planning Organization to apply for a Transit Capital Assistance Recovery Act grant. The application was successful, with Round Rock receiving $2 million to construct a transit facility consisting of bus lanes, a transit pavilion, bicycle racks, and more than 100 parking spaces. Officials in Marshalltown, Iowa, reported that the city worked extensively with partners from surrounding counties, educational institutions, and other agencies to administer funds for the Lead-Based Paint Hazard Control Program. In Miami-Dade County, Florida, local officials said the county partnered with private commercial farmers to administer Recovery Act funds from the National Clean Diesel Assistance Program. This program provided $2 million to farmers to purchase approximately 300 more efficient diesel motors used in portable and fixed irrigation equipment. Most local governments we contacted for this review reported experiencing fiscal challenges due to revenue declines or reductions in state aid. In Jersey City, New Jersey, officials said the city faces an $80 million budget deficit and an estimated $27.5 million reduction in state aid for fiscal year 2011. Officials in Steuben County, New York, reported a decline in all categories of revenue receipts and state funding cuts of $858,000. Officials also noted that delays in state reimbursements have resulted in substantial use of county reserves. Officials in Miami-Dade County, Florida, said a decrease in property and sales tax revenue combined with a reduction in state funding contributed to a $426 million budget gap for 2010.
Officials in San José and Redding, California, also cited budget gaps for the current fiscal year and reductions in revenue from property taxes and other sources as examples of their governments’ fiscal challenges. In Colorado Springs, Colorado, officials said their fiscal condition has slightly improved due to an unanticipated 4 percent increase in 2010 sales tax revenue over actual 2009 revenue. Despite this increase, Colorado Springs is not planning to expand services in 2010. Officials from the city of Austin reported an increase in sales tax revenue and declines in other revenue sources, such as fees and charges for commercial and residential development. Specifically, Austin, Texas, officials reported a 3.2 percent sales tax revenue increase and anticipate using this revenue to help address the city’s budget gap of between $11 million and $28 million. Officials in several localities reported that they are developing plans to continue funding Recovery Act programs using local government funds or by pursuing other funds after Recovery Act funding ends. For example, Cincinnati, Ohio, officials said the city hopes to continue funding the 50 police officers hired under the 3-year CHRP grant by using city revenues to cover expenditures after 2012. Officials in Wilmington, North Carolina, reported that the city intends to replace JAG funding for law enforcement equipment and services with general funds and other grant funds. San José, California, officials said the city plans to pursue other grant opportunities in order to continue funding city infrastructure projects currently benefiting from the use of Recovery Act funds. In contrast, officials in a number of localities said that because Recovery Act funds were primarily used for one-time projects they do not need to develop a specific plan to prepare for the end of Recovery Act funding. For example, officials in Farmington Hills, Michigan, reported that the city used Recovery Act funds for one-time expenditures, such as equipment purchases and energy-efficiency upgrades, and therefore does not need to develop an exit strategy. Similarly, in Tupelo, Mississippi, officials said the city used Recovery Act funds for infrastructure-related, “stand-alone” projects requiring minimal or no long-term financial support and specifically avoided applying for a CHRP grant because of the requirement to retain officers hired under the grant after Recovery Act funding ends. A few local governments reported that they plan to end Recovery Act-funded projects or reduce staff or funding for these programs after Recovery Act funding ends. Recovery Act funds continued to help states maintain services in areas such as education, health care, and transportation. A few states reported that they recently received additional Recovery Act funding in other areas. For example, New Jersey received $8 million for an energy rebate program and $14 million for energy-efficiency programs. Michigan received $30 million in Recovery Act funds to provide energy-efficiency retrofits for residential, commercial, industrial, and public buildings. Many of our selected states, as well as the District, reported that the Recovery Act continues to have a positive effect on their fiscal stability. As an example, Arizona state officials told us that Recovery Act funds helped their state through the worst part of the recession by preventing deeper cuts in social programs, and giving officials breathing room to figure out what fiscal steps to take in the long term. 
Officials in Ohio credit the more than $7.9 billion in Recovery Act funds the state had received as of August 1, 2010, with helping to protect jobs and continue services in their state. Officials in Illinois and the District said that they would be in more dire fiscal condition without SFSF and the increased FMAP funds from the Recovery Act. In Iowa, Recovery Act funds received in fiscal year 2011 helped officials balance their fiscal year 2011 budget while avoiding tax increases and reducing the amount by which officials needed to draw down the state's reserve fund. Recovery Act funds also reduced the need in Massachusetts to use more of the state's authorized fiscal year 2010 rainy-day reserve funds to balance the budget, according to state officials. City officials told us that Recovery Act funds helped the District maintain a balanced budget for fiscal year 2011 without tapping into the city's rainy-day fund. Several states and the District contacted for this review reported that they incorporated measures to prepare for the end of Recovery Act funding in their fiscal year 2011 budget or in budgets in prior cycles. For example, in Mississippi, officials told us the legislature sharply reduced spending to offset reductions in Recovery Act funding. According to city officials, the District's fiscal year 2010 budget, as well as the mayor's proposed fiscal year 2011 budget, reflects the reduction in revenues that will result from the reduction in Recovery Act funds in fiscal year 2011. Officials in some states reported they were planning for the end of Recovery Act funding. For example, Florida officials told us they were in the early stages of developing their fiscal year 2012 budget, which will include a plan to address the phasing out of Recovery Act funds. According to Michigan officials, they have made some structural changes, such as reforms to the public school employees' retirement plan, and are working to devise solutions for when the Recovery Act funds run out in fiscal year 2012. In Georgia, officials said they are preparing for the cessation of Recovery Act funds by planning additional budget reductions. They also are projecting moderate revenue growth. New York officials told us that they will address the phasing out of Recovery Act funds this fall when they develop the budget for the next fiscal year. State officials reported mixed assessments of changes to their states' fiscal conditions since we contacted them for our May 2010 report. Officials in several states noted that they continue to face difficult budget challenges. Several states told us that their fiscal condition has generally remained the same since our May report. Some states have seen signs that their fiscal condition is beginning to improve. For example, state officials reported that tax revenue collections in Massachusetts during the last 2 months of fiscal year 2010 were above revenue estimates by $191 million and $149 million, respectively, and the commonwealth ended fiscal year 2010 with tax collections above budget estimates. State officials in Pennsylvania also reported that revenues were $58 million ahead of estimates in June—the first month since December 2007 that revenues exceeded estimates. Arizona officials also told us that their April and May revenues were much better than they had projected; however, they noted that the trend did not continue in June and July.
In another example, Michigan officials told us that in June 2010, total wage and salary employment was up 23,400 jobs compared with June 2009. This was the first year-over-year increase in total wage and salary employment in Michigan since March 2005. These signs of improvement, in contrast to revenue declines, are consistent with national trends reported in the June 2010 Fiscal Survey of States issued by the National Governors Association and the National Association of State Budget Officers. According to the Fiscal Survey, states are projecting a slight rise of 3.9 percent in tax collections for fiscal year 2011 recommended budgets relative to fiscal year 2010 estimates. However, states estimate that their 2010 tax revenues will represent an almost 12 percent decline in states' sales, personal income, and corporate income tax collections since fiscal year 2008, the last fiscal year in which states were not significantly affected by the national recession. The Fiscal Survey attributes reduced state sales, personal income, and corporate income tax collections to the lack of economic expansion and job losses. For this report, GAO both updates the status of agencies' efforts to implement GAO's 25 open recommendations and makes 5 new recommendations to the Departments of Transportation (DOT), Housing and Urban Development (HUD), Labor, Energy (DOE), Health and Human Services, and Treasury, as well as to the Environmental Protection Agency (EPA) and the Office of Management and Budget (OMB). Agency responses to our new recommendations are included in the program sections of this report. Lastly, we update the status of our Matters for Congressional Consideration. To ensure that Congress and the public have accurate information on the extent to which the goals of the Recovery Act are being met, we recommend that the Secretary of Transportation direct FHWA to take the following two actions: (1) develop additional rules and data checks in the Recovery Act Data System so that these data will accurately identify contract milestones such as award dates and amounts, and provide guidance to states to revise existing contract data; and (2) make publicly available—within 60 days after the September 30, 2010, obligation deadline—an accurate accounting and analysis of the extent to which states directed funds to economically distressed areas, including corrections to the data initially provided to Congress in December 2009. To better understand the impact of Recovery Act investments in transportation, we believe that the Secretary of Transportation should ensure that the results of these projects are assessed and a determination made about whether these investments produced long-term benefits. Specifically, in the near term, we recommend the Secretary direct FHWA and FTA to determine the types of data and performance measures they would need to assess the impact of the Recovery Act and the specific authority they may need to collect data and report on these measures. In its response, DOT noted that it expected to be able to report on Recovery Act outputs, such as the miles of road paved, bridges repaired, and transit vehicles purchased, but not on outcomes, such as reductions in travel time, nor did it commit to assessing whether transportation investments produced long-term benefits. DOT further explained that limitations in its data systems, coupled with the magnitude of Recovery Act funds relative to overall annual federal investment in transportation, would make assessing the benefits of Recovery Act funds difficult.
DOT indicated that, with these limitations in mind, it is examining its existing data availability and, as necessary, would seek additional data collection authority from Congress if it became apparent that such authority was needed. While we are encouraged that DOT plans to take some steps to assess its data needs, it has not committed to assessing the long-term benefits of Recovery Act investments in transportation infrastructure. We are therefore keeping our recommendation on this matter open. The Secretary of Transportation should gather timely information on the progress states are making in meeting the maintenance-of-effort requirement and report preliminary information to Congress within 60 days of the certified period (September 30, 2010) on (1) whether states met required program expenditures as outlined in their maintenance-of-effort certifications, (2) the reasons that states did not meet these certified levels, if applicable, and (3) lessons learned from the process. DOT concurred in part with our March 2010 recommendation that it gather and report more timely information on the progress states are making in meeting the maintenance-of-effort requirements. Because more timely information could better inform policymakers' decisions on the usefulness and effectiveness of the maintenance-of-effort requirements and is important to assessing the impact of Recovery Act funding in achieving its intended effect of increasing overall spending, we are leaving this recommendation open and plan to continue to monitor DOT's actions. In its August 2010 response, DOT stated that it will encourage states to report preliminary data for the certified period ending September 30, 2010, and deliver a preliminary report to Congress within 60 days of the certified period. DOT officials said they have developed a timeline for obtaining information to produce this report and will issue guidance by October 1, 2010, requesting that states update actual aggregate expenditure data and provide the data to DOT by November 15, 2010. DOT officials said they will use this information to develop the report to Congress and that DOT will submit the report no later than November 30, 2010. Because the absence of third-party investors reduces the amount of overall scrutiny TCAP projects would receive and HUD is currently not aware of how many projects lacked third-party investors, HUD should develop a risk-based plan for its role in overseeing TCAP projects that recognizes the level of oversight provided by others. To enhance Labor's ability to manage its Recovery Act and regular WIA formula grants and to build on its efforts to improve the accuracy and consistency of financial reporting, we recommend that the Secretary of Labor take the following actions: (1) to determine the extent and nature of reporting inconsistencies across the states and better target technical assistance, conduct a one-time assessment of financial reports that examines whether each state's reported data on obligations meet Labor's requirements; and (2) to enhance states' accountability and to facilitate their progress in making reporting improvements, routinely review states' reporting on obligations during regular state comprehensive reviews. Labor agreed with both of our recommendations and has begun to take some actions to implement them.
To determine the extent of reporting inconsistencies, Labor plans to conduct an assessment of state financial reports to determine whether the reported data are accurate and reflect Labor's guidance on the reporting of obligations and expenditures. After the assessment, Labor plans to provide technical assistance to states that need further instruction and guidance. To enhance states' accountability and facilitate their progress in making improvements in reporting, Labor has instructed all its regional offices to begin routinely reviewing states' reporting on obligations during state comprehensive reviews. In addition, Labor plans to issue guidance on the definitions of key financial terms, such as obligations, provide online training to ensure that the terms are accurately and consistently applied, and conduct workshops on financial and administrative management. Our September 2009 bimonthly report identified a need for additional federal guidance in two areas—measuring the work readiness of youth and defining green jobs—and we made the following two recommendations to the Secretary of Labor: (1) to enhance the usefulness of data on work readiness outcomes, provide additional guidance on how to measure the work readiness of youth, with a goal of improving the comparability and rigor of the measure; and (2) to better support state and local efforts to provide youth with employment and training in green jobs, provide additional guidance about the nature of these jobs and the strategies that could be used to prepare youth for careers in green industries. Labor agreed with both of our recommendations and has begun to take some actions to implement them. With regard to the work readiness measure for WIA Youth summer employment activities, Labor issued guidance on May 13, 2010, for the WIA Youth Program that builds on the experiences and lessons learned during implementation of Recovery Act-funded youth activities in 2009. Labor broadly identified some additional requirements for measuring the work readiness of youth that it plans to address in future guidance. These include having the employer observe and assess workplace performance and determine what worksite skills are necessary to be successful in the workplace. Regarding our recommendation on green jobs, Labor told us that the Bureau of Labor Statistics published a Federal Register Notice on March 16, 2010, for comment on a proposed definition for measuring green jobs, which includes an approach for identifying environmental industries and counting associated jobs. Labor officials hope this will inform state and local workforce development efforts to identify and target green jobs and their training needs. In addition, Labor is using the Recovery Act-funded green jobs training grants to document lessons learned on designing and providing green jobs training. Given the concerns we have raised about whether program requirements are being met, we recommend that DOE, in conjunction with both state and local weatherization agencies, develop and clarify weatherization program guidance that (1) establishes best practices for how income eligibility should be determined and documented and issues specific guidance that does not allow the self-certification of income by applicants to be the sole method of documenting income eligibility; (2) clarifies the specific methodology for calculating the average cost per home weatherized to ensure that the maximum average cost limit is applied as intended;
(3) accelerates current DOE efforts to develop national standards for weatherization training, certification, and accreditation, an effort that is currently expected to take 2 years to complete; (4) develops a best practice guide for key internal controls that should be present at the local weatherization agency level to ensure compliance with key program requirements; (5) sets time frames for the development and implementation of state monitoring programs; (6) revisits the various methodologies used in determining the weatherization work that should be performed based on the consideration of cost-effectiveness and develops standard methodologies that ensure that priority is given to the most cost-effective weatherization work (to validate any methodologies created, this effort should include the development of standards for accurately measuring the long-term energy savings resulting from the weatherization work conducted); and (7) considers and addresses how the weatherization program guidance is affected by the introduction of increased amounts of multifamily units. In addition, given that state and local agencies have felt pressure to meet a large increase in production targets while effectively meeting program requirements and have experienced some confusion over production targets, funding obligations, and associated consequences for not meeting production and funding goals, we recommend that DOE clarify its production targets, funding deadlines, and associated consequences while providing a balanced emphasis on the importance of meeting program requirements. In our May 2010 report, we provided eight recommendations and raised concerns about whether program requirements were being met. DOE generally agreed with all of our recommendations and has begun to take several steps in response. For example, DOE reported that it has drafted national workload standards to address our concerns regarding training, certification, and accreditation. DOE plans to issue these standards to recipients in October 2010. DOE is still in the process of considering our recommendations and will provide additional information on how it plans to fully implement our recommendations at a later date. We recommend that the EPA Administrator work with the states to implement specific oversight procedures to monitor and ensure subrecipients' compliance with the provisions of the Recovery Act-funded Clean Water and Drinking Water SRF program. In response to our recommendation, EPA provided additional guidance to the states regarding their oversight responsibilities, with an emphasis on enhancing site-specific monitoring and inspections. Specifically, in June 2010, the agency developed and issued an oversight plan outline for Recovery Act projects that provides guidance on the frequency, content, and documentation related to regional reviews of state Recovery Act programs and regional and state reviews of specific Recovery Act projects. For example, EPA's guidance states that regions and states should review the items included on the EPA "State ARRA Inspection Checklist" or use a state equivalent that covers the same topics. The plan also describes EPA headquarters' role in ongoing Recovery Act oversight and plans for additional webcasts. EPA also reiterated that contractors are available to provide training and to assist with file reviews and site inspections.
Our May 2010 bimonthly report identified the need for improved management information on regional offices and grantees' decisions and activities to consistently oversee the rapid expansion and program performance of Head Start and Early Head Start under the Recovery Act. We made three recommendations to the Director of the Office of Head Start (OHS), part of the Department of Health and Human Services' Administration for Children and Families. In May, HHS disagreed with our conclusion that a lack of management information limits its ability to consistently oversee the rapid expansion of Head Start and Early Head Start under the Recovery Act. We provided a draft of all materials related to Head Start and Early Head Start to OHS and HHS for comment, but they did not provide comments in time for us to consider them in this report. To provide grantees with appropriate guidelines on their use of Head Start and Early Head Start grant funds, and enable OHS to monitor the use of these funds, the Director of OHS should direct regional office staff to stop allocating all grant funds to the "other" budget category, and immediately revise all financial assistance awards (FAAs) in which all funds were allocated to the "other" category. To facilitate understanding of whether regional decisions regarding waivers of the program's matching requirement are consistent with Recovery Act grantees' needs across regions, the Director of OHS should regularly review waivers of the nonfederal matching requirement and associated justifications. To oversee the extent to which grantees are meeting the program goal of providing services to children and families and to better track the initiation of services under the Recovery Act, the Director of OHS should collect data on the extent to which children and pregnant women actually receive services from Head Start and Early Head Start grantees. Treasury should expeditiously provide HFAs with guidance on monitoring project spending and develop plans for dealing with the possibility that projects could miss the spending deadline and face further project interruptions. To strengthen the Single Audit and federal follow up as oversight accountability mechanisms, we recommend that the Director of OMB (1) shorten the timeframes required for issuing management decisions by federal awarding agencies to grant recipients, and (2) issue the OMB Circular No. A-133 Compliance Supplement no later than March 31 of each year. To leverage Single Audits as an effective oversight tool for Recovery Act programs, in our prior bimonthly reports, we recommended that the Director of OMB should (1) provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance; (2) take additional efforts to provide more timely reporting on internal controls for Recovery Act programs for 2010 and beyond; (3) evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act; (4) issue Single Audit guidance in a timely manner so that auditors can efficiently plan their audit work; and (5) explore alternatives to help ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner. OMB has taken several steps in response to our recommendations.
Its efforts, however, are ongoing, and further actions are needed to fully implement our recommendations to help mitigate risks related to Recovery Act funds. We summarize OMB's efforts to implement these recommendations below. To focus auditor risk assessments on Recovery Act-funded programs and to provide guidance on internal control reviews for Recovery Act programs, OMB worked within the framework defined by existing mechanisms—Circular No. A-133 and the Circular No. A-133 Compliance Supplement (Compliance Supplement). In this context, OMB has made limited adjustments to its Single Audit guidance. In May 2009, OMB issued the Compliance Supplement, which focused risk assessments on Recovery Act-funded programs. In August 2009, OMB issued the Circular No. A-133 Compliance Supplement Addendum I, which provided additional guidance for auditors and modified the Compliance Supplement to, among other things, focus on new Recovery Act programs and new program clusters. In October 2009, OMB began a Single Audit Internal Control Project (project), which was nearing completion as of May 14, 2010. One of the project's goals was to encourage auditors to identify and communicate significant deficiencies and material weaknesses in internal control over compliance for selected major Recovery Act programs 3 months sooner than the 9-month time frame currently required under OMB Circular No. A-133. OMB plans to analyze the results to identify the need for potential modifications to improve OMB guidance related to Single Audits. Although OMB noted the increased responsibilities falling on those responsible for performing Single Audits, it has yet to issue proposals or plans to address this issue. States that volunteered to participate in the project were eligible for some relief in their workloads because OMB modified the requirements under Circular No. A-133 to reduce the number of low-risk programs for inclusion in the Single Audits. To provide more direct focus on Recovery Act programs through the Single Audit with regard to smaller programs with higher risk, OMB provided guidance in the 2009 OMB Circular No. A-133 Compliance Supplement that required auditors to consider all federal programs with expenditures of Recovery Act awards to be higher risk programs when performing the standard risk-based tests for selection of programs to be audited. OMB also issued clarifying information on determining risk for programs with Recovery Act expenditures. However, since most of the funding for Recovery Act programs will be expended in 2010 and beyond, we remain concerned that some smaller programs with higher risk would likely not receive adequate audit coverage. One approach for OMB to consider in helping to ensure that smaller programs with higher risk have audit coverage is to explore various options to provide auditors with the flexibility needed to select programs that are considered high risk, even though the federal expenditures for a smaller program may be less than the expenditure threshold provided under the Single Audit Act. With regard to taking additional efforts to provide more timely reporting on internal controls for Recovery Act programs for 2010 and beyond, OMB has not yet put into place measures to achieve earlier communication of the reporting of internal control deficiencies for fiscal years 2010 and beyond—years in which considerable amounts of Recovery Act funds will be expended.
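A minimal sketch of the risk-assessment rule and the coverage concern described above follows. It assumes a single dollar threshold stands in for the program-selection criteria under the Single Audit Act and Circular No. A-133, which are more involved in practice; the data class, function names, and threshold value are illustrative only and are not drawn from the guidance itself.

```python
from dataclasses import dataclass

@dataclass
class FederalProgram:
    name: str
    total_expenditures: float          # all federal expenditures for the program
    recovery_act_expenditures: float   # portion funded by the Recovery Act

def is_higher_risk(program):
    """2009 Compliance Supplement rule as described above: any Recovery Act
    spending makes the program higher risk for the auditor's risk-based tests."""
    return program.recovery_act_expenditures > 0

def likely_audited(program, size_threshold):
    """Illustrates GAO's concern: a higher-risk but small program may still
    fall below the size threshold used to select programs for audit."""
    return program.total_expenditures >= size_threshold

small_program = FederalProgram("illustrative small Recovery Act program", 250_000, 240_000)
SIZE_THRESHOLD = 300_000  # placeholder value, not a figure from the Single Audit Act

print(is_higher_risk(small_program))                  # True
print(likely_audited(small_program, SIZE_THRESHOLD))  # False -> the coverage gap GAO describes
```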
OMB officials stated in August 2010 that they plan to initiate a subsequent Single Audit Internal Control Project for fiscal year 2010. Similar to the 2009 project, one of the project's goals will be to encourage more timely identification and earlier communication of internal control deficiencies in selected programs expending Recovery Act funding. To encourage participation, OMB designed its Single Audit Internal Control Project to grant some workload relief to the auditors for the states that volunteered. Specifically, participating auditors were not required to perform risk assessments of smaller federal programs. OMB had also modified the requirements under Circular No. A-133 to reduce the number of low-risk programs that must be included in some project participants' Single Audits. Although the project, which began in October 2009, was designed to provide the auditors some workload relief, many auditors had already completed their risk assessments for audits with fiscal years ending June 30, 2009. As a result, the auditors did not experience the audit relief intended by the project.

With regard to issuing Single Audit guidance in a timely manner, we reported in May 2010 that OMB officials told us that they planned to issue the Compliance Supplement in late May 2010. However, OMB did not issue the Compliance Supplement until July 29, 2010. Several of the auditors that we surveyed stated that they needed the information as early as February, or at least by April, to effectively plan their work. OMB officials stated that the delay in issuing the 2010 Compliance Supplement was primarily due to the additional attention needed to include more Recovery Act programs in the Compliance Supplement and information regarding the audit procedures for reviewing Recovery Act reporting requirements. In May 2010, OMB provided the American Institute of Certified Public Accountants (AICPA) Governmental Audit Quality Center and the National Association of State Auditors, Comptrollers and Treasurers (NASACT) with draft Single Audit guidance. AICPA and NASACT posted the draft to their Web sites for auditors to use in planning their work. However, some auditors we spoke with stated that because the guidance was not in final form, it still limited their ability to efficiently plan and conduct their work.

In addition, OMB has stated that it plans to have a second phase of the Single Audit Internal Control Project for fiscal year 2010. However, as of August 5, 2010, OMB had not yet defined the parameters of the project or issued guidance for potential volunteer participants. OMB also has not provided detailed guidance that would explain incentives for volunteering to participate in the project, the types of entities that will be permitted to participate, the scope of the project (including the specific programs that participants could select from), the number of participants it is seeking, or the time frames for beginning and ending the project.

OMB officials have stated that they have discussed alternatives for helping to ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner but have yet to decide on the course of action that they will pursue to implement this recommendation.

As we noted in our July 2009 report, reporting on Recovery Act performance results is broader than the employment-related reporting required by the act.
We continue to recommend that the Director of OMB—perhaps through the Senior Management Councils—clarify what other program performance measures recipients are expected to report on to demonstrate the impact of Recovery Act funding.

To the extent that appropriate adjustments to the Single Audit process are not accomplished under the current Single Audit structure, Congress should consider amending the Single Audit Act or enacting new legislation that provides for more timely internal control reporting, as well as audit coverage for smaller Recovery Act programs with high risk. GAO continues to believe that Congress should consider changes related to the Single Audit process.

To the extent that additional coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits. GAO continues to believe that Congress should consider changes related to the Single Audit process.

To provide housing finance agencies (HFAs) with greater tools for enforcing program compliance, in the event the Section 1602 Program is extended for another year, Congress may want to consider directing Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. GAO continues to believe that Congress should consider directing Treasury to permit HFAs this flexibility.

We are sending copies of this report to the Office of Management and Budget and the Departments of Health and Human Services (Centers for Medicare and Medicaid Services), Education, Transportation, Energy, and Housing and Urban Development. In addition, we are sending sections of the report to officials in the 16 states and the District and the 24 local governments covered in our review. The report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-5500. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

This appendix describes our objectives, scope, and methodology for this seventh of our bimonthly reviews of the American Recovery and Reinvestment Act of 2009 (Recovery Act). A detailed description of the criteria used to select the core group of 16 states and the District of Columbia (District) and the programs we reviewed is found in appendix I of our April 2009 Recovery Act bimonthly report. The Recovery Act specifies several roles for GAO, including conducting bimonthly reviews of selected states' and localities' use of funds made available under the act. As a result, our objectives for this report were to assess (1) selected states' and localities' uses of and planning for Recovery Act funds, (2) the approaches taken by the selected states and localities to ensure accountability for Recovery Act funds, and (3) state activities to evaluate the impact of the Recovery Act funds they have received to date. We selected programs for review primarily because they have begun disbursing funds to states or because they have known or potential risks. These risks can arise from existing programs receiving significant amounts of Recovery Act funds or from new programs.
In some cases, we have also collected data from all states and from a broader array of localities to augment the in-depth reviews. Our teams visited the 16 selected states, the District, and a nonprobability sample of entities (e.g., state and local governments, local educational agencies, public housing authorities) during the period from May 2010 through September 2010. As with our previous Recovery Act reports, our teams met with a variety of state and local officials from executive-level and program offices. During the discussions with state and local officials, teams used a series of program review and semistructured interview guides that addressed state plans for management, tracking, and reporting of Recovery Act funds and activities. We also reviewed state statutes, legislative proposals, and other state legal materials for this report. Where attributed, we relied on state officials and other state sources for descriptions and interpretation of state legal materials. Appendix IV details the states and localities visited by GAO. The criteria used to select localities within our selected states are described below.

The act requires that nonfederal recipients of Recovery Act-funded grants, contracts, or loans submit quarterly reports on each project or activity, including information concerning the amount and use of funds and jobs created or retained. The first of these recipient reports covered cumulative activity since the Recovery Act's passage through the quarter ending September 30, 2009. The Recovery Act requires us to comment on the estimates of jobs created or retained after the recipients have reported. We issued our initial report related to recipient reporting, including recommendations for recipient report improvements, on November 19, 2009. A second major focus of the current report is to provide updated information concerning recipient reporting in accordance with our mandate for quarterly reporting.

Using criteria described in our earlier bimonthly reports, we selected the following streams of Recovery Act funding flowing to states and localities for review during this report: Medicaid Federal Medical Assistance Percentage (FMAP) grant awards; the State Fiscal Stabilization Fund (SFSF); Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA); Parts B and C of the Individuals with Disabilities Education Act (IDEA); the Federal-Aid Highway Surface Transportation and Transit Capital Assistance programs; the State Energy Program (SEP); the Energy Efficiency and Conservation Block Grant program (EECBG); the Weatherization Assistance Program; the Public Housing Capital Fund; the Tax Credit Assistance Program (TCAP); and Grants to States for Low-Income Housing Projects in Lieu of Low-Income Housing Credits Program under Section 1602 of the Recovery Act (Section 1602 Program). We also reviewed how Recovery Act funds are being used by states and localities. In addition, we analyzed www.Recovery.gov data on federal spending.

To examine Medicaid enrollment, states' efforts to comply with the provisions of the Recovery Act, states' uses of the grant awards, and other related information, we conducted a Web-based survey, asking the 16 states and the District to provide new information, as well as to update information they had previously provided to us. To establish the reliability of our Web-based survey data, we pretested the survey with Medicaid officials in two states and also conducted follow-up with sample states as needed.
For the increased FMAP grant awards, we obtained increased FMAP grant and drawdown figures for each state in our sample and the District from the Centers for Medicare & Medicaid Services (CMS). We discussed with CMS issues related to the agency's oversight of increased FMAP grant awards and its guidance to states on Recovery Act provisions. To assess the reliability of increased FMAP drawdown figures, we previously interviewed CMS officials on how these data are collected and reported. Based on these steps, we determined that the data provided by CMS and submitted by states were sufficiently reliable for the purposes of our engagement.

To obtain national-level information on how Recovery Act funds made available by the U.S. Department of Education under SFSF, ESEA Title I, and IDEA were used at the local level, we designed and administered a Web-based survey of local education agencies (LEAs) in the 50 states and the District of Columbia. We surveyed school district superintendents across the country to learn how Recovery Act funding was used and what impact these funds had on school districts. We conducted our survey between March and April 2010, with a 78 percent final weighted response rate. We selected a stratified random sample of 575 LEAs from the population of 16,065 LEAs included in our sample frame of data obtained from Education's Common Core of Data (CCD) in 2007-08. We took steps to minimize nonsampling errors by pretesting the survey instrument with officials in 5 LEAs in January and February 2010. Because we surveyed a sample of LEAs, survey results are estimates of a population of LEAs and thus are subject to sampling errors that are associated with samples of this size and type. Our sample is only one of a large number of samples that we might have drawn. As each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 10 percentage points). We excluded 16 of the sampled LEAs because they were no longer operating in the 2009-10 school year, were duplicate entries, or were not LEAs, and therefore were considered out of scope. All estimates produced from the sample and presented in this report are representative of the in-scope population and have margins of error of plus or minus 7 percentage points or less, unless otherwise noted.

To obtain specific examples of how LEAs are using Recovery Act funds, we selected LEAs to visit in California, Massachusetts, and Michigan and interviewed LEA officials there. We selected these states from among the 16 states and the District of Columbia in our review based on geographic diversity and varying state budget situations for K-12 education. Within the selected states, we identified a mix of local districts representing urban, rural, and suburban areas; LEAs among the 100 largest as well as smaller districts; and districts with different budget situations. We also obtained selected additional information from LEA officials in New York. In addition to interviewing local officials, we interviewed selected state officials. Specifically, we interviewed ESEA Title I officials in states with relatively low Recovery Act Title I drawdown rates to assess the extent to which state officials in these states are monitoring LEA obligations, and we also discussed implementation of School Improvement Grants.
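For readers interested in how such precision estimates are formed, the expression below is a simplified illustration of a 95 percent margin of error for an estimated proportion. It assumes a simple random sample with a finite population correction; our actual estimates reflect the stratified, weighted design described above, so this approximation would not exactly reproduce the reported margins.

ME_{95} \approx 1.96 \sqrt{ \frac{\hat{p}(1-\hat{p})}{n} \cdot \frac{N-n}{N-1} }

where \hat{p} is the estimated proportion, n is the number of in-scope responding LEAs, and N is the size of the in-scope LEA population.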
We also interviewed officials at the U.S. Department of Education (Education) and reviewed relevant laws, guidance, and communications to the states. Further, we obtained information from Education's Web site about the amount of funds these states have drawn down from their accounts with Education. We also reviewed data on state-level funding changes from the National Association of State Budget Officers (NASBO). To assess the reliability of the NASBO data, we (1) reviewed existing documentation related to the data sources and (2) interviewed knowledgeable agency officials about the data. We determined that the data are sufficiently reliable for the purposes of this report.

For highway infrastructure investment, we reviewed status reports and guidance to the states and discussed these with officials of the U.S. Department of Transportation and the Federal Highway Administration (FHWA). We obtained funding data for each of the 16 states and the District in our review. We also reviewed data related to contracts and economically distressed areas—submitted by states—in the FHWA Recovery Act Data System (RADS) for completeness and compliance with FHWA guidance. We also interviewed or obtained information from state department of transportation officials in Arizona, California, Florida, Illinois, Massachusetts, Mississippi, North Carolina, Pennsylvania, and Texas. Specifically, we discussed rates of deobligations in suballocated and nonsuballocated areas, the accuracy of contract data entered into RADS, and rates of spending from the regular FHWA highway program during the period of the Recovery Act.

For public transit investment, we reviewed status reports and guidance to the states and transit agencies and discussed these with U.S. Department of Transportation and Federal Transit Administration (FTA) officials as part of our review of the Transit Capital Assistance Program and the Fixed Guideway Infrastructure Investment program. We obtained funding data on the amounts transferred from FHWA to FTA and on funding used for transit operating expenses for each of our selected urbanized and nonurbanized areas. We also interviewed FTA officials about operating expense data. Finally, we interviewed or obtained information from state and transit agency officials in California, Illinois, Massachusetts, Michigan, New Jersey, and Texas regarding these issues.

For the State Energy Program (SEP), the Energy Efficiency and Conservation Block Grant program (EECBG), and the Weatherization Assistance Program, we reviewed relevant regulations and federal guidance and interviewed Department of Energy officials who administer the programs at the federal level. Specifically, for the SEP and EECBG programs, we collected information from 6 and 12 of our selected states and the District, respectively. We also conducted semistructured interviews of officials in state and local agencies that administer the programs and with local subrecipients who received Recovery Act funds. These interviews covered the respective state's and locality's use of funds, internal controls, and reporting procedures. We also collected data on the number and types of projects funded with Recovery Act money for the SEP and EECBG programs. In addition, for this report, we collected updated information from seven of our selected states and the District on their weatherization programs.
We conducted semistructured interviews of officials in the states' agencies that administer the weatherization program and with local service providers responsible for weatherization production. We interviewed officials at local service providers in the District and the seven states and reviewed local agencies' client case files for homes weatherized with Recovery Act funds. We also conducted site visits to interview local providers of weatherization and to observe weatherization activities. For all three programs, we collected data about each state's total allocation under the Recovery Act, as well as the allocation already provided to the states and the obligations and expenditures to date.

For public housing, we obtained data from the U.S. Department of Housing and Urban Development's (HUD) Electronic Line of Credit Control System on the amount of Recovery Act funds that have been obligated and expended by each housing agency in the country that received Public Housing Capital Funds. To monitor progress on how housing agencies are using these funds, we visited 12 housing agencies in nine states. For each state, we selected at least one public housing agency from a list of 47 housing agencies visited for previous Recovery Act reports to update the status of their grant projects. At the selected housing agencies, we interviewed housing agency officials and conducted site visits of Recovery Act projects. We also interviewed HUD officials to follow up on HUD's efforts in monitoring public housing agency obligations and uses of Recovery Act funds and to understand HUD's capacity to administer Recovery Act funds. Further, we interviewed HUD officials to understand their procedures for reviewing data that housing agencies reported to FederalReporting.gov.

To further assess state implementation of the Tax Credit Assistance Program (TCAP) and the Section 1602 Program, we asked managers of state housing finance agencies in all 50 states, the District, Puerto Rico, Guam, and the U.S. Virgin Islands to complete a Web survey. Our questionnaire asked about the status of program delivery, program design, safeguards and controls, expected results, and challenges to implementation. We designed and tested the self-administered questionnaire in consultation with experts, representatives of housing finance stakeholders, and state agency managers. Survey data collection took place in November and early December of 2009. We received usable responses from all 54 agencies. Because all state agencies returned questionnaires, our data are not subject to sampling error or overall questionnaire nonresponse error; however, the practical difficulties of conducting any survey may introduce other errors into our findings. We took steps to minimize errors of measurement, question-specific nonresponse, and data processing. In addition to the questionnaire development activities described above, including pretesting the questionnaire with four state agency officials before the survey, GAO analysts recontacted selected respondents to follow up on answers that were missing or that required clarification. In addition, GAO analysts resolved respondent difficulties in answering our questions during the survey. Before the survey, we also contacted each agency to determine whether our originally identified respondent was the most appropriate and knowledgeable person to answer our questions and made changes to our contact list as necessary. Finally, analysis programs and other data analyses were independently verified.
Owing to the focus on Early Head Start expansion under the Recovery Act, we visited nine Early Head Start expansion grantees in four states: Florida, Georgia, North Carolina, and Ohio. Due to time and resource considerations, we chose these states based on GAO staff expertise in Head Start. In each state, all but one of the selected Early Head Start grantees had received a grant above the median of Recovery Act expansion awards in that state, allowing us to focus our limited resources on relatively sizable grants. We also included four grantees that had been awarded expansion funds for constructing or renovating facilities. The grantees we visited included experienced Early Head Start grantees as well as grantees that had previously provided Head Start but not Early Head Start programs. For each selected grantee, we reviewed federal assistance award information, enrollment data, proposals for the use of Quality Improvement funds, and facilities under construction or renovation. We conducted structured interviews with grantee officials covering updates on the use of Recovery Act funds, challenges to spending funds within Office of Head Start (OHS) deadlines, OHS monitoring of grantees, and grantees' interpretation of enrollment and attendance requirements. We also reviewed files on enrollment, income eligibility, and health screening on-site at each grantee. The grantees we visited were purposefully chosen and are not a representative sample of all expansion grantees. The information gathered from these site visits is not generalizable to the population of Early Head Start expansion grantees.

The recipient reporting section of this report responds to the Recovery Act's mandate that we comment on the estimates of jobs created or retained by direct recipients of Recovery Act funds. For our review of the fourth submission of recipient reports, covering the period from April 1, 2010, through June 30, 2010, we built on findings from our three prior reviews of the reports, which covered the period from February 2009 through March 31, 2010. We performed edit checks and basic analyses on the fourth submission of recipient report data that became publicly available at Recovery.gov on July 30, 2010. We interviewed federal agency officials from the Department of Energy, who have responsibility for ensuring a reasonable degree of quality across their programs' recipient reports. We also interviewed representatives from a variety of state associations, such as the National Association of State Auditors, Comptrollers, and Treasurers and the National Association of State Budget Officers, to obtain their views on whether the process of recipient reporting has had an effect on intergovernmental interactions. From the fourth submission of recipient reports, we reviewed reports for two energy programs, EECBG and the Weatherization Assistance Program, to determine whether recipients had used Office of Management and Budget (OMB) guidance for calculating their full-time equivalents (FTE) funded by the Recovery Act. We interviewed 13 EECBG state-level and 19 local government recipients from our 17 selected jurisdictions about their FTE calculations for the fourth round of reporting. We also interviewed 8 state-level weatherization assistance recipients and 17 local government weatherization assistance subrecipients from our 17 selected jurisdictions about their FTE calculations.
In some instances, we reviewed supporting documentation for quarterly FTE reports and assessed whether those calculations complied with OMB guidance. Due to the limited number of recipients reviewed and the judgmental nature of the selection, GAO's FTE findings are not generalizable beyond the programs examined.

In addition, state teams interviewed government officials from our 16 selected states and the District to discuss issues that arose statewide in the fourth reporting period, specifically any difficulties they encountered during the fourth round of reporting, the development of their state Web sites, and their views on whether the recipient reporting requirements have affected intergovernmental interactions. We also asked these officials about ongoing state plans for managing, tracking, and reporting on Recovery Act funding and activities and solicited feedback regarding how states are using data generated from the recipient reporting effort and ways the recipient reporting process could be improved.

To perform our audit work, we interviewed federal officials, state auditors, and officials from the Department of Health and Human Services (HHS), the cognizant agency for audit. We examined documents related to Single Audits, including the 2010 OMB Circular No. A-133 Audits of States, Local Governments, and Non-Profit Organizations Compliance Supplement, OMB's and HHS's evaluations of the OMB Single Audit Internal Control Project, and related federal agency management decisions. We reviewed Federal Audit Clearinghouse documents, such as selected Single Audit reports. We also conducted a survey of the state auditors and state program and finance officials that participated in the OMB Single Audit Internal Control Project and analyzed and summarized the responses. We conducted our surveys in March 2010 and interviewed several state auditors, HHS officials, and officials from awarding federal agencies whose programs were selected for audit under the project. We also participated in an OMB-led discussion with the project's participants to obtain their views on the project.

To determine the status and results of oversight activities of the Recovery Accountability and Transparency Board (the Board), we met with representatives of the Board to discuss the initiatives they have taken to coordinate and monitor the oversight activities of the inspectors general, as well as the Board's initiatives to prevent and detect fraud, waste, and abuse of Recovery Act funds. We reviewed available documentation related to the Board's efforts.

To provide observations on selected states' use of competitive procedures and fixed prices in awarding contracts for Recovery Act funds, between July 2009 and March 2010, we met with state and local officials to discuss the contract award process for a sample of 208 contracts in 16 states and the District. Between March and June 2010, we met again with the officials responsible for these same contracts to discuss the extent to which there had been cost or schedule changes or contractor performance issues. The contracts we reviewed with state officials were selected based on several factors to obtain a mix of programs and dollar values that varied among the states. Our methodology for selecting these contracts does not allow the reported information to be generalized.
To assess actions taken by the state and local audit community to monitor the use of Recovery Act funds, we interviewed selected state and local auditors and state inspectors general about their ongoing and planned audit activities. We also reviewed state and local audit reports and spoke with some of the Recovery Act oversight entities created in many of the selected states, such as New Jersey's Recovery Accountability Task Force and the Recovery Task Force in California. In addition, to update the audit community on our Recovery Act work and to share information about Recovery Act issues, we are working with state and local auditors and their associations to facilitate routine telephone conference calls with a broad community of interested parties. The conference call participants include the Association of Government Accountants; the Association of Local Government Auditors; the National Association of State Auditors, Comptrollers, and Treasurers; OMB; the Board; federal inspectors general; the National Governors Association; and the National Association of State Budget Officers. To help ensure that information about allegations of fraud is shared, we are also working with state and local auditors to develop plans for routinely exchanging such information.

We continued our review of the use of Recovery Act funds for the 16 selected states, the District, and selected localities. We conducted interviews with state budget officials and reviewed proposed and enacted budgets and revenue forecasts to update our understanding of the use of Recovery Act funds in the 16 selected states and the District. To update our understanding of local governments' use of Recovery Act funds, we met with finance officials and city administrators at the selected local governments. The topics covered in our meetings included what Recovery Act funds the states and localities received, how they used the funds, and their exit strategies for the phasing out of Recovery Act funding. In the course of our discussions with officials, we explored the extent to which the receipt of Recovery Act funds has stabilized state and local government budgets. We also reviewed reports and analyses regarding the fiscal conditions of states and localities. The states and the District selected for our review contain about 65 percent of the U.S. population and are estimated to receive collectively about two-thirds of the intergovernmental grant funds available through the Recovery Act.

To select local governments for our review, we identified localities representing a range of jurisdictions (cities and counties) and variations in population sizes and economic conditions (unemployment rates greater than or less than the state's overall unemployment rate). In making our selections, we also considered proximity to our other scheduled Recovery Act work and local contacts established during prior reviews. GAO teams visited a total of 24 local governments in our 16 selected states, ranging in population from approximately 258 in Steward, Illinois, to approximately 2.5 million in Miami-Dade County, Florida. Unemployment rates in our selected localities ranged from 6.7 percent in Round Rock, Texas, to 13.4 percent in Redding, California. Due to the small number of jurisdictions visited and the judgmental nature of their selection, GAO's findings are not generalizable to all local governments.
The list of local governments selected in each state is found in appendix IV. We collected funding data from www.Recovery.gov and from federal agencies administering Recovery Act programs for the purpose of providing background information. We used funding data from www.Recovery.gov—which is overseen by the Board—because it is the official source for Recovery Act spending. Except as noted for specific analyses in other sections of this report, and based on our examination of this information thus far, we consider these data, with attribution to official sources, sufficiently reliable for the purpose of providing background information on Recovery Act funding. Our sample of states, localities, and entities was purposefully selected, and the results of our reviews are not generalizable to any population of states, localities, or entities.

We conducted this performance audit from May 27, 2010, to September 20, 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following are 31 GAO recommendations that the Departments of Transportation (DOT), Housing and Urban Development (HUD), Education, and Treasury, and the Office of Management and Budget (OMB), have implemented since we began conducting bimonthly reviews in April 2009. We have also closed two recommendations.

To ensure that the public has accurate information regarding economically distressed areas, we also recommend that the Secretary of Transportation direct FHWA to issue guidance to the states advising them to update information in the Recovery Act Data System to reflect current DOT decisions concerning the special-need criteria. Projects in areas currently lacking documentation showing that the areas meet the criteria to be designated as economically distressed should be reported as projects in noneconomically distressed areas. In July 2010, FHWA directed Arizona, California, and Illinois to revise their designations and to report these projects as being in noneconomically distressed areas. FHWA also directed all states to ensure that future Recovery Act Data System entries be coded as economically distressed only if FHWA division and headquarters offices had approved the designation.

Recipients of highway and transit Recovery Act funds, such as state departments of transportation and transit agencies, are subject to multiple reporting requirements. Both DOT and OMB have issued implementation guidance for recipient reporting. Despite these efforts, state and local highway and transit officials reported concerns about and challenges in meeting the Recovery Act reporting requirements. We recommended in our September 2009 report that the Secretary of Transportation continue the department's outreach to state departments of transportation and transit agencies to identify common problems in accurately fulfilling reporting requirements and provide additional guidance, as appropriate. In September 2009, in responding to our recommendation, DOT said that it had conducted outreach, including providing technical assistance, training, and guidance to recipients, and would continue to assess the need to provide additional information.
For example, in February 2010, FTA conducted three training webinars to provide technical assistance in complying with reporting requirements under section 1201(c) of the Recovery Act. In addition, on February 1, 2010, FTA issued guidance to transit agencies instructing them to use the same methodology for calculating jobs retained through vehicles purchased under section 1201 as they had been using for recipient reporting. This reversed previous guidance that had instructed transit agencies to use different methodologies for vehicle purchases under section 1201 and for recipient reporting.

DOT and FHWA had yet to provide clear guidance regarding how states are to implement the Recovery Act requirement that economically distressed areas (EDAs) are to receive priority in the selection of highway projects for funding. We found substantial variation both in how states identified EDAs and in how they prioritized project selection for these areas. To ensure states meet Congress's direction to give areas with the greatest need priority in project selection, we recommended in our July 2009 report that the Secretary of Transportation develop clear guidance on identifying and giving priority to EDAs that is in accordance with the requirements of the Recovery Act and the Public Works and Economic Development Act of 1965, as amended, as well as more consistent procedures for FHWA to use in reviewing and approving states' criteria. In August 2009, FHWA, in consultation with the Department of Commerce, developed guidance that addresses our recommendation. In particular, FHWA's August 2009 guidance defines "priority," directing states to give priority, over other projects, to projects that are located in an economically distressed area and can be completed within the 3-year time frame. In addition, FHWA's guidance sets out criteria that states may use to identify economically distressed areas based on "special need." The criteria align closely with the special-need criteria used by the Department of Commerce's Economic Development Administration in its own grant programs, including factors such as actual or threatened business closures (including job loss thresholds), military base closures, and natural disasters or emergencies.

To ensure housing agencies use the correct job calculation, we recommend that the Secretary of Housing and Urban Development clearly emphasize to housing agencies that they discontinue use of the outdated jobs calculator provided by HUD in the first round of recipient reporting. In response to our recommendation, HUD sent an e-mail to housing agencies on June 30, 2010, that explicitly instructed them not to use the outdated jobs-counting calculator, as it was not correctly computing the FTE calculation per updated OMB guidance. This e-mail also included a link to HUD's new online jobs-counting calculator and instructed housing agencies to use this calculator for the July and all future reporting periods.

To help clarify the recipient reporting responsibilities of housing agencies and to improve the consistency and completeness of jobs data reported by housing agencies, we recommend that the Secretary of Housing and Urban Development issue guidance that explains when FTEs attributable to subcontractors should be reported by the prime recipient.
In response to our recommendation, HUD notified housing agencies in a June 30, 2010, e-mail that it had developed additional guidance for housing agencies to use when determining whether prime recipients should report FTEs for subcontractors and provided a link to the guidance on its Web site. The guidance noted that housing agencies should include Recovery Act-funded hours that contractors and subcontractors worked as part of their FTE calculation.

To help HUD achieve Recovery Act objectives and address challenges with its continued administration of Recovery Act funds, we recommend that the Secretary of Housing and Urban Development develop a management plan to determine the adequate level of agency staff needed to administer both the Recovery Act funds and the existing Capital Fund program going forward, including identifying future resource needs and determining whether current resources could be better utilized to administer these funds. In response to our recommendation, HUD developed a management plan for administration of Recovery Act funds, including identifying the need for an additional 11 FTEs to carry out Recovery Act responsibilities. In July 2010, HUD also provided us with its management plan for the Public Housing Capital Fund program. The plan summarized the key activities HUD undertakes to monitor and facilitate the use of these funds by program area, including rule and policy development, planning, program awards, program management, technical assistance, and reporting. The plan also included the specific activities, tasks, and resources used for each of these existing program areas, identifying approximately 91 existing FTEs in its headquarters and field offices to support these activities. According to HUD's management plan, HUD's current staffing level is sufficient to manage its existing Capital Fund program, but the agency could more efficiently utilize its current resources. As a result, HUD plans to realign current staff to focus on its core missions, including Recovery Act responsibilities.

We recommended on March 3, 2010, that the Secretary of Housing and Urban Development instruct housing agencies to discontinue use of the jobs calculator provided by HUD in the first round of recipient reporting for subsequent rounds of reporting to ensure the correct job calculation is used. In a March 26, 2010, e-mail to housing agencies, HUD included instructions to discontinue use of the jobs calculator originally posted on the HUD Recovery Act Web site in October 2009. HUD reiterated these instructions in a subsequent e-mail it sent to housing agencies on March 31, 2010.

To enhance HUD's ability to prevent, detect, and correct noncompliance with the use of Recovery Act funds, we recommended in September 2009 that the Secretary of Housing and Urban Development expand the criteria for selecting housing agencies for on-site reviews to include housing agencies with open Single Audit findings that may affect the use of and reporting on Recovery Act funds. In October 2009, HUD expanded its criteria for selecting housing agencies for on-site reviews to include all housing agencies with open 2007 and 2008 Single Audit findings, as of July 7, 2009, relevant to the administration of Recovery Act funds. HUD identified 27 such housing agencies and planned to complete these on-site reviews by February 15, 2010.
To ensure that FTEs are properly accounted for over time, we recommend that the Secretary of Education clarify how LEAs and institutions of higher education (IHE) should report FTEs when additional Recovery Act funds are received in a school year and are reallocated to cover costs incurred in previous quarters, particularly when the definite term methodology is used. In response to our recommendation, Education issued clarifying guidance on August 26, 2010, that addressed how FTEs should be reported when funds are expended in one quarter to cover costs incurred in previous quarters.

To ensure that subrecipients do not underreport vendor FTEs directly paid with Recovery Act funds, we recommend that the Secretary of Education re-emphasize the responsibility of subrecipients to include hours worked by vendors in their quarterly FTE calculations to the maximum extent practicable. In response to our recommendation, Education issued clarifying guidance on August 26, 2010, that re-emphasized this responsibility.

To improve consistency in how FTEs generated using the definite term methodology are calculated, we recommend that the Secretary of Education and the Director of OMB clarify whether IHE and LEA officials using this methodology should include the cost of benefits in their calculations. In response to our recommendation, Education issued clarifying guidance on August 26, 2010, that addressed whether benefits should be included in the calculation of jobs under the OMB guidance released December 18, 2009.

To improve the consistency of FTE data collected and reported, we recommend that the Secretary of Education and the Director of OMB provide clarifying guidance to recipients on how to best calculate FTEs for education employees during quarters when school is not in session. In response to our recommendation, Education issued clarifying guidance on August 26, 2010, explaining that the length of a full-time contract (i.e., 10 or 12 months) should not affect FTE calculations.

We recommended in September 2009 that the Secretary of Education take further action, such as collecting and reviewing documentation of state monitoring plans, to ensure that states understand and fulfill their responsibility to monitor subrecipients of SFSF funds and consider providing training and technical assistance to states to help them develop and implement state monitoring plans for SFSF. In February 2010, Education instructed states to submit their plans and protocols for monitoring subrecipients of SFSF funds to Education for review. Education also issued its plans and protocols for monitoring state implementation of the SFSF program. The plan includes on-site visits to about half of the states and desk reviews of the others, to be conducted over the next year.

We recommended in November 2009 that the Secretary of Education take further action to enhance transparency by requiring states to include an explanation of changes to maintenance-of-effort levels in their SFSF funding application resubmissions. Education notified states that, if they made changes to the maintenance-of-effort data in their State Fiscal Stabilization Fund applications, they must provide a brief explanation of why the data changed.
In order to increase the likelihood that state housing finance agencies (HFAs) will comply with Treasury's requirements for recapturing funds, the Secretary of the Treasury should define what it considers appropriate actions by HFAs to recapture funds in order to avoid liability when they are unable to collect funds from project owners that do not comply. Treasury agreed with our recommendation and, in response, provided additional guidance to state HFAs to clarify what constitutes appropriate actions by HFAs to recapture funds in order to avoid liability in the event of project owner noncompliance. Specifically, in August 2010, the agency developed and issued recapture guidance for Recovery Act projects that receive Section 1602 Program funds. The guidance defines a recapture event, specifies the amount of funds owed in the event of recapture, describes an HFA's obligations and responsibilities in avoiding project owner noncompliance, sets forth the kinds of recapture actions an HFA may take in the event of noncompliance, and directs HFAs on how to report noncompliance.

We were concerned that, because Single Audit workloads will increase as Recovery Act programs become subject to Single Audits, consideration should be given to determining what funds can be used to support Single Audit efforts related to Recovery Act programs, including whether legislative changes are needed to specifically direct resources to cover incremental audit costs. We recommended that the Director of OMB develop mechanisms to help fund the additional Single Audit costs and efforts for auditing Recovery Act programs. OMB addressed our recommendation by issuing guidance to executive departments and agencies to help states use various approaches to recover administrative costs associated with the wide range of activities required to comply with the Recovery Act. Administrative costs include, but are not limited to, oversight and audit costs and the costs of performing additional Single Audits. OMB issued the guidance to clarify actions (within the existing legal framework for identifying allowable reimbursable costs) that states could take to recover administrative costs in a more timely manner. In addition to our recommendation to OMB, as we previously noted in our bimonthly reports, it is our view that, to the extent that additional audit coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits.

We reported in July 2009 that OMB was encouraging communication of weaknesses to management early in the audit process but had not added requirements for auditors to take these steps. This approach did not address our concern that internal controls over Recovery Act programs should be reviewed before significant funding is expended. Under the current Single Audit framework and reporting timelines, the auditor evaluation of internal control and related reporting will occur too late—after significant levels of federal expenditures have already occurred.
As a result of our recommendation, OMB implemented a Single Audit Internal Control Project under which a limited number of voluntarily participating auditors performing the Single Audits for states would communicate in writing internal control deficiencies noted in the Single Audit within 6 months of the 2009 fiscal year-end, rather than the 9 months required by the Single Audit Act. We recommended that the Director of OMB take steps to achieve sufficient participation and coverage in OMB's Single Audit Internal Control Project, which provides for early written communication of internal control deficiencies, to achieve the objective of more timely accountability over Recovery Act funds. OMB implemented its Single Audit Internal Control Project in October 2009. The project called for a minimum of 10 participants. OMB solicited the 50 states, the District of Columbia, Puerto Rico, and Guam, from which 16 states volunteered to participate. The volunteer states were diverse in geographic characteristics and population and included states that use auditors within state government as well as external auditors to conduct Single Audits. In addition, the volunteer states included California and Texas, which are among the top three states with the highest levels of Recovery Act obligations from the federal government. Each state selected at least two Recovery Act programs from a list of 11 high-risk Recovery Act programs for internal control testing. OMB designed the project to be voluntary, and OMB officials stated that, overall, they were satisfied with the population and geographic diversity among the states that volunteered. Although the project's coverage could be more comprehensive to provide greater assurance over Recovery Act funding, the results of the project could provide meaningful insight for making improvements to the Single Audit process.

The Single Audit Act requires that recipients submit their financial reporting packages, including the Single Audit report, to the federal government no later than 9 months after the end of the period being audited. As a result, an audited entity may not receive feedback needed to correct an identified internal control or compliance weakness until the latter part of the subsequent fiscal year. The timing problem is exacerbated by the extensions to the 9-month deadline that are routinely granted by the awarding agencies, consistent with OMB guidance. We made two recommendations in this area. First, we recommended that the Director of OMB formally advise federal cognizant agencies to adopt a policy of no longer approving extensions of the due dates of Single Audit reporting package submissions beyond the 9-month deadline. Second, we recommended that the Director of OMB widely communicate this revised policy to the state audit community and others who have responsibility for conducting Single Audits and submitting the Single Audit reporting package. On March 22, 2010, OMB addressed these two recommendations by issuing memorandum M-10-14, Updated Guidance on the American Recovery and Reinvestment Act. This guidance directed federal agencies not to grant any requests to extend the Single Audit reporting deadlines for fiscal years 2009 to 2011. OMB further stated that, to meet the criteria for a low-risk auditee in the current year, the auditee must have submitted the prior 2 years' audit reports by the required due dates.
OMB communicated this revised policy through the OMB Web site, the American Institute of Certified Public Accountants, and the National Association of State Auditors, Comptrollers, and Treasurers.

OMB should work with the Recovery Accountability and Transparency Board (the Board) and federal agencies, building on lessons learned, to establish a formal and feasible framework for review of recipient changes during the continual update period and consider providing more time for agencies to review and provide feedback to recipients before posting updated reports on Recovery.gov. In our March 3, 2010, report, we recommended that OMB work with the Board and federal agencies to establish a formal and feasible framework for review of recipient changes during the new continuous review period and consider providing more time for federal agencies to review and provide feedback to recipients before posting updated reports on Recovery.gov. On March 22, 2010, OMB issued updated guidance that highlighted the steps federal agencies must take to review the data quality of recipient reports during the continuous review period. The guidance specified that federal agencies must, at a minimum, conduct a final review of the data upon the close of the continuous corrections period. In addition, the Board now posts corrected data on Recovery.gov approximately every 2 weeks, allowing federal agencies time to review and provide feedback in the interim period.

States have been concerned about the burden imposed by new requirements, increased accounting and management workloads, and strains on information systems and staff capacity at a time when they are under severe budgetary stress. We recommended in April 2009 that the Director of OMB clarify what Recovery Act funds can be used to support state efforts to ensure accountability and oversight, especially in light of enhanced oversight and coordination requirements. On May 11, 2009, OMB released M-09-18, Payments to State Grantees for Administrative Costs of Recovery Act Activities, clarifying how state grantees could recover administrative costs of Recovery Act activities.

States and localities are expected to report quarterly on a number of measures, including the use of funds and an estimate of the number of jobs created and the number of jobs retained, as required by Section 1512 of the Recovery Act. We recommended in our July 2009 report that, to increase consistency in recipient reporting of jobs created and retained, the Director of OMB work with federal agencies to have them provide program-specific examples of the application of OMB's guidance on recipient reporting of jobs created and retained. OMB has issued clarifications and frequently asked questions (FAQs) on Recovery Act reporting requirements. During the first reporting period, OMB also deployed regional federal employees to serve as liaisons to state and local recipients in large population centers and established a call center for entities that did not have an on-site federal liaison. In addition, federal agencies issued additional guidance that builds on OMB's June 22 recipient reporting guidance for their specific programs. This guidance is in the form of FAQs, tip sheets, and more traditional guidance that builds on what was provided on June 22, 2009. Federal agencies have also taken steps to provide additional education and training opportunities for state and local program officials on recipient reporting, including Web-based seminars.
To foster timely and efficient communications, we recommended in April 2009 that the Director of OMB continue to develop and implement an approach that provides easily accessible, real-time notification to (1) prime recipients in states and localities when funds are made available for their use and (2) states—where the state is not the primary recipient of funds but has a statewide interest in this information. In response to our recommendation, OMB has made important progress in notifying recipients when Recovery Act funds are available, communicating the status of these funds at the federal level through agency Weekly Financial Activity reports, and disseminating Recovery Act guidance broadly while actively seeking public and stakeholder input. OMB has taken the additional step of requiring federal agencies to notify Recovery Act coordinators in states, the District of Columbia, commonwealths, and territories within 48 hours of an award to a grantee or contractor in their jurisdiction.

Responsibility for reporting on jobs created and retained falls to nonfederal recipients of Recovery Act funds. As such, states and localities have a critical role in determining the degree to which Recovery Act goals are achieved. Given questions raised by many state and local officials about how best to determine both direct and indirect jobs created and retained under the Recovery Act, we recommended in April 2009 that the Director of OMB continue OMB's efforts to identify appropriate methodologies that can be used to (1) assess jobs created and retained from projects funded by the Recovery Act; (2) determine the impact of Recovery Act spending when job creation is indirect; and (3) identify those types of programs, projects, or activities that in the past have demonstrated substantial job creation or are considered likely to do so in the future. We also recommended that the Director of OMB consider whether the approaches taken to estimate jobs created and retained in these cases can be replicated or adapted to other programs. On June 22, 2009, OMB issued additional implementation guidance on recipient reporting of jobs created and retained (OMB Memorandum M-09-21, Implementing Guidance for the Reports on Use of Funds Pursuant to the American Recovery and Reinvestment Act of 2009). This guidance is responsive to much of what we recommended. The June 2009 guidance provided detailed instructions on how to calculate and report jobs as FTEs. It also described in detail the data model and reporting system to be used for the required recipient reporting on jobs and clarified that the prime recipient, not the subrecipient, is responsible for reporting information on jobs created or retained. Federal agencies have issued guidance that expands on OMB's June 22 governmentwide recipient reporting guidance and have provided education and training opportunities for state and local program officials. Agency-specific guidance includes FAQs and tip sheets. Additionally, agencies are expected to provide examples of recipient reports for their programs, which is also consistent with what we recommended. In addition to the federal agency efforts, OMB has issued FAQs on Recovery Act reporting requirements. The June 22 guidance and subsequent actions by OMB are responsive to much of what we said in our recommendation.
We have noted in prior reports that in order to achieve the delicate balance between robust oversight and the smooth flow of funds to Recovery Act programs, states may need timely reimbursement for these activities. We recommended in September 2009 that, to the extent that the Director of OMB has the authority to consider mechanisms to provide additional flexibilities to support state and local officials charged with carrying out Recovery Act responsibilities, it is important to expedite consideration of alternative administrative cost reimbursement proposals. In response to this recommendation, OMB issued a memorandum on October 13, 2009, to provide guidance to address states' questions regarding specific exceptions to OMB Circular A-87, Cost Principles for State, Local and Indian Tribal Governments. In the memorandum, OMB provided clarifications for states regarding specific exceptions to OMB Circular A-87 that are necessary in order for the states to perform timely and adequate Recovery Act oversight, reporting, and auditing. We believe the October 2009 OMB guidance provides the additional clarification needed for states and localities to proceed with their plans to recoup administrative costs.

To improve the consistency of FTE data collected and reported, we recommended in November 2009 that OMB clarify the definition and standardize the period of measurement for the FTE data element in the recipient reports. After the first round of reporting by states on their use of Recovery Act funds in October 2009, OMB updated the recipient reporting guidance on December 18, 2009. According to the agency, this guidance aligns with GAO's recommendation by requiring recipients to report job estimates on a quarterly rather than a cumulative basis. As a result, recipients will no longer be required to sum various data on hours worked across multiple quarters of data when calculating job estimates. The December guidance incorporated lessons learned from the first round of recipient reporting and also addressed recommendations we made in our November 2009 report on recipient reporting. According to OMB, the December guidance is intended to help federal agencies improve the quality of data reported under Section 1512 and to simplify compliance by revising the definitions and calculations used to define and estimate the number of jobs saved.

To improve the consistency of FTE data collected and reported, we also recommended in November 2009 that OMB consider being more explicit that "jobs created or retained" are to be reported as hours worked and paid for with Recovery Act funds. In response to our recommendation, OMB issued guidance on December 18, 2009, that no longer requires recipients to make a subjective judgment of whether jobs were created or retained as a result of the Recovery Act. Instead, recipients will more easily and objectively report on jobs funded with Recovery Act dollars.

To improve the consistency of FTE data collected and reported, we also recommended in our November 2009 report that OMB continue working with federal agencies to provide or improve program-specific guidance to assist recipients, especially as it applies to the full-time equivalent calculation for individual programs. In response to our recommendation, OMB issued guidance on December 18, 2009, that required federal agencies to submit their guidance documents to OMB for review and clearance to ensure consistency between federal agency guidance and the guidance released by OMB.
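To illustrate the hours-based, quarterly approach described above, consider a simplified sketch (the specific figures and the assumed 520-hour full-time quarterly schedule, roughly 40 hours per week for 13 weeks, are illustrative assumptions rather than values taken from OMB's guidance). If Recovery Act funds pay for 1,300 hours of work during a reporting quarter, the recipient would report

\[
\text{FTE} \;=\; \frac{\text{hours worked and paid with Recovery Act funds in the quarter}}{\text{hours in a full-time quarterly schedule}} \;=\; \frac{1{,}300}{520} \;=\; 2.5 .
\]

Because each quarter's estimate is calculated from that quarter's hours alone, recipients no longer need to sum hours across prior reporting periods.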
To improve the consistency of FTE data collected and reported, we recommended in November 2009 that OMB work with the Recovery Accountability and Transparency Board and federal agencies to re-examine review and quality assurance processes, procedures, and requirements in light of experiences and identified issues with the initial round of recipient reporting and consider whether additional modifications need to be made and if additional guidance is warranted. In response to our recommendation, on December 18, 2009, OMB issued updated guidance on data quality, nonreporting recipients, and reporting of job estimates. The agency stated that the updated guidance incorporates lessons learned from the first reporting period and further addresses GAO's recommendations. The guidance also provides federal agencies with a standard methodology for effectively implementing reviews of the quality of data submitted by recipients.

In our July 2009 report, we recommended that, to strengthen the effort to track the use of funds, the Director of OMB should (1) clarify what constitutes appropriate quality control and reconciliation by prime recipients, especially for subrecipient data, and (2) specify who should best provide formal certification and approval of the data reported. Although OMB clarified that the prime recipient is responsible for FederalReporting.gov data in its June 22 guidance, no statement of assurance or certification will be required of prime recipients on the quality of subrecipient data. Moreover, federal agencies are expected to perform data quality checks, but they are not required to certify or approve data for publication. We continue to believe that there needs to be clearer accountability for the data submitted and during the subsequent federal review process. OMB agreed with the recommendation in concept but questioned the cost/benefit of data certification given the tight reporting time frames for recipients and federal agency reviewers. OMB staff stated that grant recipients are already expected to comply with data requirements appropriate to the terms and conditions of a grant. Furthermore, OMB will be monitoring the results of the quarterly recipient reports for data quality issues and would want to determine whether these issues are persistent problems before concluding that certification is needed. Through the issuance of additional guidance and clarification, we are now satisfied that OMB has implemented this recommendation.

In consultation with the Recovery Accountability and Transparency Board and states, the Director of OMB should evaluate current information and data collection requirements to determine whether sufficient, reliable, and timely information is being collected before adding further data collection requirements. As part of this evaluation, OMB should consider the cost and burden of additional reporting on states and localities against expected benefits. OMB has taken steps to ensure data quality through issuance of additional guidance. OMB has also worked with the states to minimize to the extent possible the new reporting burdens under the Recovery Act.

We recommended in our April 2009 report the addition of a master schedule for anticipated, new, or revised federal Recovery Act program guidance and a more structured, centralized approach to making this information available, such as what is provided at Recovery.gov on recipient reporting. This recommendation is closed because it is no longer applicable.
In addition to providing additional types of program-specific examples of guidance, the Director of OMB should work with federal agencies to use other channels to educate state and local program officials on reporting requirements, such as Web- or telephone-based information sessions or other forums. In addition to the federal agency efforts, OMB has issued FAQs on Recovery Act reporting requirements. The June 22 guidance and subsequent actions by OMB are responsive to much of what we said in our April 2009 report. OMB deployed regional federal employees to serve as liaisons to state and local recipients in large population centers. The objective was to provide on-site assistance and, as necessary, direct questions to appropriate federal officials in Washington, D.C. OMB established a call center for entities that do not have an on-site federal liaison. These actions by OMB, together with an overall increase in state and local program officials' knowledge of reporting requirements, have made this recommendation inapplicable.

Within the Department of Transportation, the Federal Aviation Administration's Airport Improvement Program provides formula and discretionary grants for the planning and development of public-use airports. The Recovery Act provides $1.1 billion for discretionary Grants-in-Aid for Airports under this program, with priority given to projects that can be completed within 2 years. The Recovery Act requires that the funds supplement, not supplant, planned expenditures from airport-generated revenues or from other state and local sources for airport development activities.

The Recovery Act Assistance to Rural Law Enforcement to Combat Crime and Drugs Program is administered by the Bureau of Justice Assistance (BJA), a component of the Office of Justice Programs, Department of Justice. The purpose of this program is to help rural states and rural areas prevent and combat crime, especially drug-related crime, and to provide for national support efforts, including training and technical assistance programs strategically targeted to address rural needs. The Recovery Act provides $125 million for this program, and BJA has made 212 awards.

The Recovery Act provides $100 million to the Brownfields Program, administered by the Office of Solid Waste and Emergency Response within the Environmental Protection Agency, for cleanup, revitalization, and sustainable reuse of contaminated properties. The funds will be awarded to eligible entities through job training, assessment, revolving loan fund, and cleanup grants.

The Broadband Technology Opportunities Program (BTOP), funded by the Recovery Act and administered by the Department of Commerce's National Telecommunications and Information Administration, provides grants to increase broadband infrastructure in unserved and underserved areas of the country. BTOP grants fund projects for new or improved Internet facilities in schools, libraries, hospitals, and public safety facilities; projects to establish or upgrade public computer facilities that provide broadband access to the general public or vulnerable populations; and projects that increase broadband Internet usage among populations where broadband technology has been underutilized. Projects may include training and outreach activities that will increase the use of broadband in people's everyday lives.
Build America Bonds (BAB), administered by the Internal Revenue Service within the Department of the Treasury, are taxable government bonds created by the Recovery Act that can be issued with federal subsidies for a portion of the borrowing costs, delivered either through nonrefundable tax credits provided to holders of the bonds (tax credit BAB) or as refundable tax credits paid to state and local governmental issuers of the bonds (direct payment BAB). Direct payment BABs are a new type of bond that provide state and local government issuers with a direct subsidy payment equal to 35 percent of the bond interest they pay. Tax credit BABs provide investors with a nonrefundable tax credit of 35 percent of the net bond interest payments (excluding the credit), which represents a federal subsidy to the state or local governmental issuer equal to approximately 25 percent of the total return to the investor (because the credit raises the investor's total return to 135 percent of the net interest payments, the 35 percent credit amounts to roughly one-quarter of that total). State and local governments may issue an unlimited number of BABs through December 31, 2010, and all BAB proceeds must be used for capital expenditures.

The Department of Health and Human Services' Health Resources and Services Administration has allocated $862.5 million in Recovery Act funds for Capital Improvement Program grants to health centers to support the construction, repair, and renovation of more than 1,500 health center sites nationwide, including purchasing health information technology and expanding the use of electronic health records.

Administered by the Administration for Children and Families within the Department of Health and Human Services, Child Care and Development Block Grants, one of the funding streams comprising the Child Care and Development Fund, are provided to states, according to a formula, to assist low-income families in obtaining child care so that parents can work or participate in education or training activities. The Recovery Act provides $1.9 billion in supplemental funding for these grants.

The Department of Energy's Clean Cities program, administered by the Office of Energy Efficiency and Renewable Energy, is a government-industry partnership that works to reduce America's petroleum consumption in the transportation sector. The Department of Energy is providing nearly $300 million in Recovery Act funds for projects under the Clean Cities program, which provide a range of energy-efficient and advanced vehicle technologies, such as hybrids, electric vehicles, plug-in electric hybrids, hydraulic hybrids, and compressed natural gas vehicles, helping to reduce petroleum consumption across the United States. The program also supports refueling infrastructure for various alternative fuel vehicles, as well as public education and training initiatives, to further the program's goal of reducing the national demand for petroleum.

The Recovery Act appropriated $4 billion for the Clean Water State Revolving Fund (SRF) programs and $2 billion for the Drinking Water SRF programs. These amounts are a significant increase compared with the federal funds awarded as annual appropriations to the SRF programs in recent years. From fiscal years 2000 through 2009, annual appropriations averaged about $1.1 billion for the Clean Water SRF program and about $833 million for the Drinking Water SRF program.
The Environmental Protection Agency (EPA) distributed the Recovery Act funds to the 50 states, the District of Columbia, and Puerto Rico to make loans and grants to subrecipients—local governments and other entities awarded Recovery Act funds—for eligible wastewater and drinking water infrastructure projects and "nonpoint source" pollution projects intended to protect or improve water quality by, for example, controlling runoff from city streets and agricultural areas. The Clean Water and Drinking Water SRF programs, established in 1987 and 1996, respectively, provide states and local communities with independent and permanent sources of subsidized financial assistance, such as low- or no-interest loans, for projects that protect or improve water quality and that are needed to comply with federal drinking water regulations and protect public health. In addition to providing increased funds, the Recovery Act included specific requirements for states beyond those that are part of the base Clean Water and Drinking Water SRF programs. For example, states were required to have all Recovery Act funds awarded to projects under contract within 1 year of enactment—that is, by February 17, 2010—and EPA was directed to reallocate any funds not under contract by that date. Further, states were required to use at least 50 percent of Recovery Act funds to provide assistance in the form of principal forgiveness, negative interest loans, or grants. States were also required to use at least 20 percent of funds as a "green reserve" to provide assistance for green infrastructure projects, water or energy efficiency improvements, or other environmentally innovative activities.

The Recovery Act provides $650 million to carry out evidence-based clinical and community-based prevention and wellness strategies authorized by the Public Health Service Act that deliver specific, measurable health outcomes that address chronic disease rates. In response to the act, the Department of Health and Human Services launched the Communities Putting Prevention to Work initiative on September 17, 2009. The goals of the initiative, which is to be administered by the Centers for Disease Control and Prevention, are to increase levels of physical activity; improve nutrition; decrease obesity rates; and decrease smoking prevalence, teen smoking initiation, and exposure to second-hand smoke through an emphasis on policy and environmental change at both the state and local levels. Of the $650 million appropriated for this initiative, approximately $450 million will support community approaches to chronic disease prevention and control; $120 million will support the efforts of states and territories to promote wellness, prevent chronic disease, and increase tobacco cessation; $32.5 million is allocated for state chronic disease self-management programs; and $40 million is allocated to establish a National Prevention Media Initiative and a National Organizations Initiative to encourage the development of prevention and wellness messages and advertisements.

The Community Development Block Grant (CDBG) program, administered by the Office of Community Planning and Development within the Department of Housing and Urban Development, enables state and local governments to undertake a wide range of activities intended to create suitable living environments, provide affordable housing, and create economic opportunities, primarily for persons of low and moderate income.
Most local governments use this investment to rehabilitate affordable housing and improve key public facilities. The Recovery Act includes $1 billion for the CDBG program.

Community Services Block Grants (CSBG), administered by the Administration for Children and Families within the Department of Health and Human Services, provide federal funds to states, territories, and tribes for distribution to local agencies to support a wide range of community-based activities to reduce poverty. The Recovery Act appropriated $1 billion for CSBG.

The Recovery Act provided $1 billion through the Department of Justice's (DOJ) Community Oriented Policing Services (COPS) Hiring Recovery Program (CHRP) for competitive grant funding to law enforcement agencies to create and preserve jobs and to increase community policing capacity and crime-prevention efforts. CHRP grants provide 100 percent funding for 3 years to cover approved entry-level salaries and benefits for newly hired, full-time sworn officers, including those who were hired to fill positions previously unfunded, as well as rehired officers who had been laid off. CHRP funds can also be used in the same manner to retain officers who were scheduled to be laid off as a result of local budget cuts. There is no local funding match requirement for CHRP. When the grant term expires after 3 years, grantees must retain all sworn officer positions awarded under the CHRP grant for at least 1 additional year. The DOJ COPS office selected local law enforcement agencies to receive funding based on fiscal health factors—such as changes in budgets for law enforcement, poverty, unemployment, and foreclosure rates—and reported crime and planned community policing activities. DOJ awards 50 percent of CHRP funds to local law enforcement agencies serving populations greater than 150,000 and awards the remaining 50 percent to local law enforcement agencies serving populations of less than 150,000. Awards were capped at no more than 5 percent of the applicant agency's actual sworn force strength (up to a maximum of 50 officers), and a minimum of $5 million was allocated to each state or eligible territory.

The program objective of the Diesel Emission Reduction Act Grants, administered by the Office of Air and Radiation in conjunction with the Office of Grants and Debarment within the U.S. Environmental Protection Agency (EPA), is to reduce diesel emissions. EPA will award grants to address the emissions of in-use diesel engines by promoting a variety of cost-effective emission reduction strategies, including switching to cleaner fuels, retrofitting, repowering, or replacing eligible vehicles and equipment, and idle reduction strategies. The Recovery Act appropriated $300 million for the Diesel Emission Reduction Act Grants. In addition, the funds appropriated through the Recovery Act for the program are not subject to the State Grant and Loan Program Matching Incentive provisions of the Energy Policy Act of 2005.

The Recovery Act provides $10 billion to help local educational agencies (LEA) educate disadvantaged youth by making additional funds available beyond those regularly allocated through Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA), as amended. These additional funds are distributed through states to LEAs using existing federal funding formulas, which target funds based on such factors as high concentrations of students from families living in poverty.
In using the funds, LEAs are required to comply with applicable statutory and regulatory requirements and must obligate 85 percent of the funds by September 30, 2010. The Department of Education is advising LEAs to use the funds in ways that will build the agencies' long-term capacity to serve disadvantaged youth, such as through providing professional development to teachers.

The Recovery Act also appropriated $3 billion for ESEA Title I School Improvement Grants (SIG), which provide funds to states for use in ESEA Title I schools identified for improvement in order to substantially raise the achievement of their students. These funds are awarded by formula to states, which will then make competitive grants to LEAs. State applications for the $3 billion in Recovery Act SIG funding, as well as an additional $546 million in regular fiscal year 2009 SIG funding, were due to the Department of Education on February 28, 2010. SIG regulatory requirements, effective in February 2010, prioritize the use of SIG funds in each state's persistently lowest-achieving Title I schools. To receive funds, states must identify their persistently lowest-achieving schools, and an LEA that wishes to receive SIG funds must submit an application to its state educational agency (SEA) identifying which schools it commits to serve and how it will use school improvement funds to implement one of four school intervention models: (1) the turnaround model, which includes replacing the principal and rehiring no more than 50 percent of the school's staff; (2) the restart model, in which an LEA converts the school or closes and reopens it as a charter school or under an education management organization; (3) school closure, in which an LEA closes the school and enrolls the students who attended the school in other, higher-achieving schools in the LEA; or (4) the transformation model, which addresses four specific areas intended to improve schools.

The Recovery Act provided supplemental funding for programs authorized by Parts B and C of the Individuals with Disabilities Education Act (IDEA), as amended, the major federal statute that supports early intervention and special education and related services for children and youth with disabilities. Part B, which funds programs that ensure that preschool and school-age children with disabilities have access to a free and appropriate public education, is divided into two separate grants—Part B grants to states (for school-age children) and Part B preschool grants. Part C funds programs that provide early intervention and related services for infants and toddlers with disabilities—or at risk of developing a disability—and their families.

The State Fiscal Stabilization Fund (SFSF) included approximately $48.6 billion to award to states by formula and up to $5 billion to award to states as competitive grants. The Recovery Act created the SFSF in part to help state and local governments stabilize their budgets by minimizing budgetary cuts in education and other essential government services, such as public safety. Stabilization funds for education distributed under the Recovery Act must first be used to alleviate shortfalls in state support for education to LEAs and public institutions of higher education (IHE).
States must use 81.8 percent of their SFSF formula grant funds to support education (these funds are referred to as education stabilization funds) and must use the remaining 18.2 percent for public safety and other government services, which may include education (these funds are referred to as government services funds). The SFSF funds are being provided to states in two phases. Phase 1 funds—at least 67 percent of education stabilization funds and all government services funds—were provided to each state after the Department of Education (Education) approved the state's Phase 1 application for funds. Phase 2 funds are being awarded to states as Education approves each state's Phase 2 application. The Phase 1 application required each state to provide several assurances, including that the state will meet maintenance-of-effort requirements (or will be able to comply with the relevant waiver provisions); will meet requirements for accountability, transparency, reporting, and compliance with certain federal laws and regulations; and will implement strategies to advance four core areas of education reform. The Phase 2 application requires each state to explain the information the state makes available to the public related to the four core areas of education reform or provide plans for making information related to the education reforms publicly available no later than September 30, 2011. States must use education stabilization funds to restore state funding to the greater of fiscal year 2008 or 2009 levels for state support to LEAs and public IHEs. When distributing these funds to LEAs, states must use their primary education funding formula, but they can determine how to allocate funds to public IHEs. In general, LEAs maintain broad discretion in how they can use education stabilization funds, but states have some ability to direct IHEs in how to use these funds.

The Recovery Act provided $2 billion through the Department of Justice's (DOJ) Edward Byrne Memorial Justice Assistance Grant (JAG) Program for grants to state and local governments for law enforcement and criminal justice activities. JAG funds can be used to support a range of activities in seven broad program areas: (1) law enforcement; (2) prosecution and courts; (3) crime prevention and education; (4) corrections; (5) drug treatment and enforcement; (6) program planning, evaluation, and technology improvement; and (7) crime victim and witness programs. Within these areas, JAG funds can be used for state and local initiatives, training, personnel, equipment, supplies, contractual support, research, and information systems for criminal justice. Although each state is guaranteed a minimum allocation of JAG funding, states and localities therein must apply to DOJ's Bureau of Justice Assistance (BJA) to receive their grant awards. BJA applies a statutory formula based on population and violent crime statistics to determine annual funding levels. After applying the formula, BJA distributes each state's allocation in two ways: BJA awards 60 percent directly to the state, and the state must in turn allocate a formula-based share of these funds—considered a "variable pass-through"—to its local governments; and BJA awards the remaining 40 percent directly to eligible units of local government within the state.

Administered by the Transportation Security Administration (TSA) of the Department of Homeland Security, the Electronic Baggage Screening Program provides funding to strengthen screening of checked baggage in airports.
The Recovery Act provided approximately $1 billion to invest in the procurement and installation of checked baggage explosives detection systems and checkpoint explosives detection equipment. According to TSA, it has allocated over $700 million to its Electronic Baggage Screening Program for purposes that include facility modifications; equipment purchase and installation; and programmatic, maintenance, and technological support.

The Emergency Food and Shelter Program (EFSP), which is administered by the Federal Emergency Management Agency (FEMA) within the Department of Homeland Security, was authorized in July 1987 by the Stewart B. McKinney Homeless Assistance Act to provide food, shelter, and supportive services to the homeless. The program is governed by a National Board composed of a representative from FEMA and six statutorily designated national nonprofit organizations. Since its first appropriation in fiscal year 1983, EFSP has awarded over $3.4 billion in federal aid to more than 12,000 local private, nonprofit, and government human service entities in more than 2,500 communities nationwide.

The Energy Efficiency and Conservation Block Grant (EECBG) program, administered by the Office of Energy Efficiency and Renewable Energy within the Department of Energy, provides funds through competitive and formula grants to units of local and state government and Indian tribes to develop and implement projects to improve energy efficiency and reduce energy use and fossil fuel emissions in their communities. The Recovery Act includes $3.2 billion for the EECBG program. Of that total, $400 million is to be awarded on a competitive basis to grant applicants.

Under the Recovery Act, the Green Capacity Building Grants program, administered by the Employment and Training Administration within the Department of Labor, provides funds to build the green training capacity of current Department of Labor (Labor) grantees. Grants will help individuals in targeted groups acquire the skills needed to enter and advance in green industries and occupations by building the capacity of active Labor-funded training programs. Grantees are required to give priority to targeted groups, including workers impacted by national energy and environmental policy, individuals in need of updated training related to energy-efficiency and renewable energy industries, veterans, unemployed individuals, and individuals with criminal records.

The Department of Health and Human Services' Health Information Technology Extension Program, administered by the Office of the National Coordinator for Health Information Technology, allocated $643 million to establish 60 Health Information Technology Regional Extension Centers (REC) and $50 million to establish a national Health Information Technology Research Center (HITRC). The first cycle of awards, announced February 12, 2010, provided $375 million to create 32 RECs, while the second cycle of awards, announced April 6, 2010, provided $267 million to establish 28 RECs. RECs offer technical assistance, guidance, and information on best practices for the use of Electronic Health Records (EHR) to health care providers. The HITRC supports RECs' efforts by collecting information on best practices from a wide variety of sources across the country and by acting as a virtual community for RECs to collaborate with one another and with relevant stakeholders to identify and share best practices for the use of EHRs.
The goal of the RECs and HITRC is to enable nationwide health information exchange through the adoption and meaningful use of secure EHRs.

The Head Start program, administered by the Office of Head Start of the Administration for Children and Families within the Department of Health and Human Services, provides comprehensive early childhood development services to low-income children, including educational, health, nutritional, social, and other services, intended to promote the school readiness of low-income children. Federal Head Start funds are provided directly to local grantees, rather than through states. The Recovery Act provided an additional $2.1 billion in funding for Head Start and Early Head Start programs. The Early Head Start program provides family-centered services to low-income families with very young children designed to promote the development of the children and to enable their parents to fulfill their roles as parents and to move toward self-sufficiency.

The High-Speed Intercity Passenger Rail Program (HSIPR) is administered by the Federal Railroad Administration within the Department of Transportation (DOT). The purpose of the HSIPR Program is to build an efficient, high-speed passenger rail network connecting major population centers 100 to 600 miles apart. In the near term, the program will aid in economic recovery efforts and lay the foundation for this high-speed passenger rail network through targeted investments in existing intercity passenger rail infrastructure, equipment, and intermodal connections. In addition to the $8 billion provided in the Recovery Act, the HSIPR Program included approximately $92 million in fiscal year 2009 and remaining fiscal year 2008 funds appropriated under the existing State Grant Program (formally titled Capital Assistance to States—Intercity Passenger Rail Service). The fiscal year 2010 DOT appropriation included $2.5 billion for high-speed rail and intercity passenger rail projects.

The Homelessness Prevention and Rapid Re-Housing Program, administered by the Office of Community Planning and Development within the Department of Housing and Urban Development, awards formula grants to states and localities to prevent homelessness and procure shelter for those who have become homeless. Funding for this program is being distributed based on the formula used for the Emergency Shelter Grants program. According to the Recovery Act, program funds should be used for short-term or medium-term rental assistance; housing relocation and stabilization services, including housing search, mediation or outreach to property owners, credit repair, security or utility deposits, utility payments, and rental assistance for management; or appropriate activities for homeless prevention and rapid re-housing of persons who have become homeless. The Recovery Act includes $1.5 billion for this program.

The Recovery Act provides funding to states for restoration, repair, and construction of highways and other activities allowed under the Federal Highway Administration's Federal-Aid Highway Surface Transportation Program and for other eligible surface transportation projects. The Recovery Act requires that 30 percent of these funds be suballocated, primarily based on population, for metropolitan, regional, and local use. Highway funds are apportioned to states through federal-aid highway program mechanisms, and states must follow existing program requirements.
While the maximum federal fund share of highway infrastructure investment projects under the existing federal-aid highway program is generally 80 percent, under the Recovery Act it is 100 percent. Funds appropriated for highway infrastructure spending must be used in accordance with Recovery Act requirements. States were given a 1-year deadline (March 2, 2010) to ensure that all apportioned Recovery Act funds—including suballocated funds—were obligated. The Secretary of Transportation was to withdraw and redistribute to eligible states any amount that was not obligated by that time. Additionally, the governor of each state was required to certify that the state would maintain the level of spending for the types of transportation projects funded by the Recovery Act that it had planned to spend as of the day the Recovery Act was enacted. As part of this certification, the governor of each state was required to identify the amount of funds the state planned to expend from state sources from February 17, 2009, through September 30, 2010.

On March 2, 2009, the Federal Highway Administration apportioned $799.8 million in Recovery Act funds to states for its Transportation Enhancement program. States may use program funds for qualifying surface transportation activities, such as constructing or rehabilitating off-road shared use paths for bicycles and pedestrians; conducting landscaping and other beautification projects along highways, streets, and waterfronts; and rehabilitating and operating historic transportation facilities such as historic railroad depots. The Recovery Act requires that 3 percent of Highway Infrastructure Investment funds provided to states be used for Transportation Enhancement activities. Additionally, states may decide to use additional Recovery Act Transportation Enhancement funds, beyond the 3 percent requirement, for qualifying activities such as those mentioned above. States determine the share of federal funds used for qualifying Transportation Enhancement projects, up to 100 percent of the projects' costs.

The Department of Health and Human Services' Health Resources and Services Administration (HRSA) has allocated Recovery Act funds for Increased Demand for Services (IDS) grants to health centers to increase health center staffing, extend hours of operations, and expand existing services. The Recovery Act provided $500 million for health center operations. HRSA has allocated $343 million for IDS grants to health centers.

Internet Crimes Against Children Initiatives (ICAC), administered by the Department of Justice, Office of Justice Programs' Office of Juvenile Justice and Delinquency Prevention, seek to maintain and expand state and regional ICAC task forces to address technology-facilitated child exploitation. This program provides funding to states and localities for salaries and employment costs of law enforcement officers, prosecutors, forensic analysts, and other related professionals. The Recovery Act appropriated $50 million for ICAC.

The Recovery Act provided approximately $78 million to the Lead-Based Paint Hazard Control Grant Program through the Department of Housing and Urban Development to assist states and localities in undertaking programs to identify and control lead-based paint hazards in eligible privately owned housing for rental or owner-occupants. Funds will be used to perform lead-based paint inspections, soil and paint-chip testing, risk assessments, and other activities that are in support of lead hazard abatement work.
An additional $2.6 million was provided for the Lead Hazard Reduction Demonstration Grant Program, which will assist urban areas with the greatest lead paint abatement needs to identify and control lead-based paint hazards in eligible privately owned single-family housing units and multifamily buildings occupied by low-income families.

The Recovery Act provided funding to support Local Energy Assurance Planning (LEAP) Initiatives to help communities prepare for energy emergencies and disruptions. The Department of Energy will award funds to cities and towns to develop or expand local energy assurance plans that will improve electricity reliability and energy security in their communities. LEAP aims to enhance reliability and to facilitate quicker repairs and recovery following disruptions to the energy supply.

Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state's per capita income in relation to the national average per capita income. The Centers for Medicare and Medicaid Services, within the Department of Health and Human Services, approves state Medicaid plans, and the amount of federal assistance states receive for Medicaid service expenditures is determined by the Federal Medical Assistance Percentage (FMAP). The Recovery Act's temporary increase in FMAP funding will provide all 50 states and the District of Columbia with approximately $87 billion in assistance. Federal legislation was recently enacted amending the Recovery Act to provide for an extension of increased FMAP funding through June 30, 2011, but at a lower level.

The Recovery Act provided $156 million in new funding to the National Clean Diesel Funding Assistance Program to support the implementation of verified and certified diesel emission reduction technologies. The competitive grant program funded projects that would achieve significant reductions in diesel emissions, especially from fleets operating in areas designated as having poor air quality. This is one of the Recovery Act-funded National Clean Diesel Campaign programs, which share the goal of accelerating emission reductions from older diesel engines to provide air quality benefits and improve public health.

The Recovery Act provides $50 million to be distributed in direct grants by the National Endowment for the Arts to fund arts projects and activities that preserve jobs in the nonprofit arts sector threatened by declines in philanthropic and other support during the current economic downturn.

The Neighborhood Stabilization Program (NSP), administered by the Office of Community Planning and Development within the Department of Housing and Urban Development, provides assistance for the redevelopment of abandoned and foreclosed homes and residential properties in order that such properties may be returned to productive use or made available for redevelopment purposes. The $2 billion in NSP2 funds appropriated in the Recovery Act is competitively awarded to states, local governments, and nonprofit organizations. NSP is considered to be a component of the Community Development Block Grant (CDBG) program, and basic CDBG requirements govern NSP.

The Port Security Grant Program (PSGP) provides grant funding to port areas for the protection of critical port infrastructure from terrorism.
The Recovery Act provides $150 million in stimulus funding for the PSGP, administered by the Federal Emergency Management Agency (FEMA), an agency of the Department of Homeland Security. PSGP funds are primarily intended to assist ports in enhancing maritime domain awareness and risk management capabilities to prevent, detect, respond to, and recover from attacks involving improvised explosive devices, weapons of mass destruction, and other nonconventional weapons, as well as to support training and exercises and implementation of the Transportation Worker Identification Credential. Ports compete for funds, and priority is given to cost-effective projects that can be executed expeditiously and have a significant and near-term impact on risk mitigation.

The Public Housing Capital Fund provides formula-based grant funds directly to public housing agencies to improve the physical condition of their properties; to develop, finance, and modernize public housing developments; and to improve management. Under the Recovery Act, the Office of Public and Indian Housing within the Department of Housing and Urban Development (HUD) allocated nearly $3 billion through the Public Housing Capital Fund to public housing agencies, using the same formula used for amounts made available in fiscal year 2008, and obligated these funds to housing agencies in March 2009. HUD was also required to award nearly $1 billion to public housing agencies based on competition for priority investments, including investments that leverage private sector funding or financing for renovations and energy conservation retrofitting. In September 2009, HUD awarded competitive grants for the creation of energy-efficient communities, gap financing for projects stalled due to financing issues, public housing transformation, and improvements addressing the needs of the elderly or persons with disabilities.

The Recovery Act appropriated $8.4 billion to fund public transit throughout the country through existing Federal Transit Administration (FTA) grant programs, including the Transit Capital Assistance Program and the Fixed Guideway Infrastructure Investment Program. Under the Transit Capital Assistance Program's formula grant program, Recovery Act funds were apportioned to large and medium urbanized areas—which in some cases include a metropolitan area that spans multiple states—throughout the country according to existing program formulas. Recovery Act funds were also apportioned to states for small urbanized areas and nonurbanized areas under the Transit Capital Assistance Program's formula grant programs using the program's existing formula. Transit Capital Assistance Program funds may be used for such activities as vehicle replacements, facilities renovation or construction, preventive maintenance, and paratransit services. Recovery Act funds from the Fixed Guideway Infrastructure Investment Program were apportioned by formula directly to qualifying urbanized areas, and funds may be used for any capital projects to maintain, modernize, or improve fixed guideway systems. As they work through the state and regional transportation planning process, designated recipients of the apportioned funds—typically public transit agencies and metropolitan planning organizations—develop a list of transit projects that project sponsors (typically transit agencies) submit to FTA for approval.
Funds appropriated for the Transit Capital Assistance Program and the Fixed Guideway Infrastructure Investment Program must be used in accordance with Recovery Act requirements. States were given a 1-year deadline (March 5, 2010) to ensure that all apportioned Recovery Act funds were obligated. The Secretary of Transportation was to withdraw and redistribute to eligible states or urbanized areas any amount that was not obligated within these time frames. Additionally, the governor of each state was required to certify that the state would maintain the level of spending for the types of transportation projects funded by the Recovery Act that it had planned to spend as of the day the Recovery Act was enacted. As part of this certification, the governor of each state was required to identify the amount of funds the state planned to expend from state sources from February 17, 2009, through September 30, 2010.

The Transit Investments for Greenhouse Gas and Energy Reduction (TIGGER) Grant program, administered by FTA within the Department of Transportation, is a discretionary program to support transit capital projects that result in greenhouse gas reductions or reduced energy use. The Recovery Act provides $100 million for the TIGGER program, and each submitted proposal must request a minimum of $2 million.

The Recovery Act includes up to $5 billion for the Race to the Top Fund, administered by the Office of Elementary and Secondary Education within the Department of Education (Education). According to Education, awards in Race to the Top will go to states that are leading the way with ambitious yet achievable plans for implementing coherent, compelling, and comprehensive educational reform. Through Race to the Top, Education asks states to advance reforms in four specific areas: adopting standards and assessments that prepare students to succeed in college and the workplace and to compete in the global economy; building data systems that measure student growth and success and inform teachers and principals about how they can improve instruction; recruiting, developing, rewarding, and retaining effective teachers and principals, especially where they are needed most; and turning around the lowest-achieving schools.

The Recovery Act Assistance to Firefighters Fire Station Construction Grants program, also known as fire grants or the FIRE Act grant program, is administered by the Department of Homeland Security, Federal Emergency Management Agency, Assistance to Firefighters Program Office. The program provides federal grants directly to fire departments on a competitive basis to build new or modify existing nonfederal fire stations in order for departments to enhance their response capability and protect the communities they serve from fire and fire-related hazards. The Recovery Act includes $210 million for this program and provides that no grant shall exceed $15 million.

The Child Support Enforcement (CSE) Program (Title IV-D of the Social Security Act) is a joint federal-state program administered by the Administration for Children and Families (ACF), within the Department of Health and Human Services. The program provides federal matching funds to states to carry out their child support enforcement programs, which enhance the well-being of children by, among other things, establishing paternity, establishing child support orders, and collecting child support. Furthermore, ACF makes additional incentive payments to states based in part on whether their child support enforcement programs meet certain performance goals.
States must reinvest their incentive fund payments into the CSE program or an activity to improve the CSE program; however, incentive funds reinvested in the CSE program are not eligible for federal matching funds. Funds for the federal matching payments and incentive payments are appropriated annually, and the Recovery Act does not appropriate funds for either of them. However, the Recovery Act temporarily provides for incentive payments expended by states for child support enforcement to count as state funds eligible for the federal match. This change is effective October 1, 2008, through September 30, 2010.

Recovery Zone Bonds are administered by the Internal Revenue Service within the Department of the Treasury and come in two types: Recovery Zone Economic Development Bonds (RZEDB) and Recovery Zone Facility Bonds. RZEDBs are a type of direct payment Build America Bond (BAB), created under the Recovery Act. Direct payment BABs allow issuers the option of receiving a federal payment instead of allowing a federal tax exemption on the interest payments. RZEDBs provide a 45 percent credit, rather than the 35 percent credit provided by other types of BABs, and must meet certain requirements. RZEDBs are targeted to economically distressed areas meeting certain criteria and are to be used for qualified forms of economic development. Recovery Zone Facility Bonds are exempt facility bonds that may be used to finance certain designated recovery zone property. The Recovery Act authorized up to $10 billion for RZEDBs and up to $15 billion for Recovery Zone Facility Bonds to be allocated to states, the District of Columbia, and territories, based on their employment declines in 2008.

The Renewable and Distributed Systems Integration (RDSI) program, administered by the Office of Electricity Delivery and Energy Reliability within the Department of Energy (DOE), focuses on integrating renewable and distributed energy technologies into the electric distribution and transmission system. In April 2008, DOE announced plans to invest up to $50 million over 5 years (fiscal years 2008 to 2012) in nine projects aimed at demonstrating the use of RDSI technologies to reduce peak load electricity demand by at least 15 percent at distribution feeders—the power lines delivering electricity to consumers. The program goal is to reduce peak load electricity demand by 20 percent at distribution feeders by 2015.

The Recovery Act's Retrofit Ramp-Up program will provide funding to projects to "ramp up" energy-efficiency building retrofits. The program will target community-scale retrofit projects that make significant, long-term impacts on energy use and can serve as national role models for energy-efficiency efforts. These projects should result in retrofits that lead to significant efficiency improvements to a large number of buildings in communities or neighborhoods. The retrofits must reduce the total monthly operating costs of the buildings, including any repayments of loans. The Retrofit Ramp-Up projects are the competitive portion of DOE's Energy Efficiency and Conservation Block Grant Program and are part of the Recovery Act investment in clean energy and energy efficiency.
The Senior Community Service Employment Program (SCSEP), administered by the Employment and Training Administration within the Department of Labor, is a community service and work-based training program that serves low-income persons who are 55 years or older and have poor employment prospects by placing them in part-time community service positions and by assisting them to transition to unsubsidized employment. The Recovery Act provides $120 million for SCSEP.

The Recovery Act provides $100 million to the Senior Nutrition Programs, administered by the Administration on Aging (AoA) within the Department of Health and Human Services. AoA distributed funds to 56 states and territories and 246 tribes and Native Hawaiian organizations to fund three programs at senior centers and other community sites. The Recovery Act awarded $65 million for congregate nutrition services provided at senior centers and other community sites, $32 million for home-delivered nutrition services delivered to elders at home, and $3 million for Native American nutrition programs. The Congregate Nutrition Services and Home-Delivered Nutrition Services programs specifically target vulnerable seniors, such as low-income minorities and those residing in rural areas, and aim to help elderly individuals avoid hospitalization and nursing home placement by maintaining their health through meals. The Nutrition Services for Native Americans program provides congregate and home-delivered meals and related nutrition services to American Indian, Alaskan Native, and Native Hawaiian elders.

Under the Services*Training*Officers*Prosecutors (STOP) Violence Against Women Formula Grants Program, the Office on Violence Against Women within the Department of Justice has awarded over $139 million in Recovery Act funds to promote a coordinated, multidisciplinary approach to enhance services and advocacy to victims, improve the criminal justice system's response, and promote effective law enforcement, prosecution, and judicial strategies to address domestic violence, dating violence, sexual assault, and stalking.

Under the Recovery Act, states will receive $3.4 billion to deploy and integrate advanced digital technology to modernize the electric delivery network through the Smart Grid Investment Grant Program, administered by the Office of Electricity Delivery and Energy Reliability within the Department of Energy. The program funds a broad range of projects aimed at applying smart grid technologies to existing electric system equipment, consumer products and appliances, meters, electric distribution and transmission systems, and homes, offices, and industrial facilities.

The Staffing for Adequate Fire and Emergency Response (SAFER) grants program, administered by the Federal Emergency Management Agency within the Department of Homeland Security, was created to provide funding directly to volunteer, combination, and career fire departments to help them increase staffing and enhance their emergency deployment capabilities. The goal of SAFER is to ensure departments have an adequate number of trained, frontline active firefighters capable of safely responding to and protecting their communities from fire and fire-related hazards. SAFER provides 2-year grants to fire departments to pay the salaries of newly hired firefighters or to rehire recently laid-off firefighters. Fire departments using SAFER funding to hire new firefighters commit to retaining the SAFER-funded firefighters for 1 full year after the 2-year grant has been expended.
The retention commitment does not extend to previously laid-off firefighters who have been rehired. In addition, volunteer and combination fire departments are eligible to apply for SAFER funding to pay for activities related to the recruitment and retention of volunteer firefighters.

The Recovery Act appropriated $7.2 billion to extend access to broadband throughout the United States. Of the $7.2 billion, $4.7 billion was appropriated to the Department of Commerce's National Telecommunications and Information Administration (NTIA) and $2.5 billion to the Department of Agriculture's Rural Utilities Service. Of the $4.7 billion, up to $350 million was available pursuant to the Broadband Data Improvement Act (BDIA) for the purpose of developing and maintaining a nationwide map featuring the availability of broadband service. BDIA directs the Secretary of Commerce to establish the State Broadband Data and Development Grant Program and to award grants to eligible entities to develop and implement statewide initiatives to identify and track the adoption and availability of broadband services within each state. To accomplish the joint purposes of the Recovery Act and BDIA, NTIA developed the State Broadband Data and Development projects, which collect comprehensive and accurate state-level broadband mapping data, develop state-level broadband maps, aid in the development and maintenance of a national broadband map, and fund statewide initiatives directed at broadband planning.

Under the Recovery Act, states will receive $3.1 billion for energy projects through the State Energy Program (SEP), administered by the Office of Energy Efficiency and Renewable Energy within the Department of Energy (DOE). States should prioritize the grants toward funding energy-efficiency and renewable energy programs, including expanding existing energy-efficiency programs, renewable energy projects, and joint activities between states. The SEP's 20 percent cost match is not required for grants made with Recovery Act funds. DOE estimates that SEP funding will result in annual cost savings of $256 million.

Under the Department of Health and Human Services' State Health Information Exchange (HIE) Cooperative Agreement Program, $564 million has been allocated to support states' efforts to develop the capacity among health care providers and hospitals in their jurisdiction to exchange health information across health care systems through the meaningful use of Electronic Health Records (EHR). The meaningful use of EHRs aims to improve the quality and efficiency of patient care. In order to ensure secure and effective use of HIE technology within and across state borders, grant recipients are expected to use their authority and resources to implement HIE privacy and security requirements, coordinate with Medicaid and state public health programs in using HIE technology, and enable interoperability through the creation of state-level directories and technical services and the removal of barriers. The state HIE program administers the awards through cooperative agreements, which are partnerships between the grant recipient and the federal government used when the federal government has a substantial stake in the outcomes or operation of the program. The state HIE cooperative agreements are 4-year agreements, and recipients will be required to match grant awards beginning in the second year of the award (2011).
The Statewide Longitudinal Data Systems grant program, administered by the Department of Education's Institute of Education Sciences, awards competitive grants to state educational agencies for the design, development, and implementation of statewide longitudinal data systems. These systems are intended to enhance the ability of states to efficiently and accurately manage, analyze, and use education data, including individual student records, while protecting student privacy. The first grants were awarded to 14 states in November 2005; 12 states and the District of Columbia were awarded grants in 2007; and 27 states were awarded grants in 2009. The Recovery Act appropriated $250 million for this program.

The Supplemental Nutrition Assistance Program (SNAP), administered by the Food and Nutrition Service within the Department of Agriculture, serves more than 35 million people nationwide each month. SNAP's goal, in part, is to help raise the level of nutrition and alleviate the hunger of low-income households. The Recovery Act provides for a monthly increase in benefits for the program's recipients. The increases in benefits under the Recovery Act are estimated to total $20 billion over the next 5 years.

The Tax Credit Assistance Program (TCAP), administered by the Department of Housing and Urban Development (HUD), provides gap financing to state Housing Finance Agencies (HFA), in the form of grants or loans distributed through a formula-based allocation, for capital investment in low-income housing tax credit (LIHTC) projects. HUD obligated $2.25 billion in TCAP funds to HFAs. The HFAs were to award the funds competitively according to their qualified allocation plans, which explain selection criteria and application requirements for housing tax credits (as determined by the states and in accordance with Section 42 of the Internal Revenue Code). Projects that were awarded low-income housing tax credits in fiscal years 2007, 2008, or 2009 were eligible for TCAP funding, but HFAs had to give priority to projects that were "shovel-ready" and expected to be completed by February 2012. Also, TCAP projects had to include some low-income tax credits and equity investment. HFAs must commit 75 percent of their TCAP awards by February 2010 and disburse 75 percent by February 2011. Project owners must spend all of their TCAP funds by February 2012. HUD can recapture TCAP funds from any HFA whose projects do not comply with TCAP requirements. In these cases, HFAs are responsible for recapturing funds from project owners. Furthermore, because TCAP funds are federal financial assistance, they are subject to certain federal requirements, such as the Davis-Bacon Act and the National Environmental Policy Act (NEPA). These acts, respectively, require that projects receiving federal funds pay prevailing wages and meet federal environmental requirements.

The Section 1602 Program allows HFAs to exchange returned and unused tax credits for a payment from Treasury at the rate of 85 cents for every tax credit dollar. HFAs can exchange up to 100 percent of unused 2008 credits and 40 percent of their 2009 allocation. HFAs may award Section 1602 Program funds to finance the construction or acquisition and rehabilitation of qualified low-income buildings in accordance with the HFA's Qualified Allocation Plan, which establishes criteria for selecting LIHTC projects.
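To make the exchange arithmetic concrete, the following minimal sketch (in Python, using hypothetical credit amounts) applies the terms just described: an 85-cents-per-dollar exchange rate, up to 100 percent of unused 2008 credits, and up to 40 percent of the 2009 allocation. It is illustrative only, not an actual Treasury or HFA calculation tool.

```python
# Minimal sketch (hypothetical figures) of the Section 1602 exchange arithmetic:
# HFAs may exchange up to 100 percent of unused 2008 credits and up to
# 40 percent of their 2009 allocation, at 85 cents per tax credit dollar.

EXCHANGE_RATE = 0.85   # dollars paid by Treasury per tax credit dollar
CAP_2008 = 1.00        # share of unused 2008 credits that may be exchanged
CAP_2009 = 0.40        # share of the 2009 allocation that may be exchanged

def max_exchange_payment(unused_2008_credits: float, allocation_2009: float) -> float:
    """Return the maximum Section 1602 payment for the credit amounts given."""
    exchangeable = CAP_2008 * unused_2008_credits + CAP_2009 * allocation_2009
    return EXCHANGE_RATE * exchangeable

# Hypothetical HFA with $10 million in unused 2008 credits and a $25 million
# 2009 allocation: up to $10M + $10M (40 percent of $25M) = $20M in credits
# could be exchanged, for a Treasury payment of $17 million.
print(max_exchange_payment(10_000_000, 25_000_000))  # 17000000.0
```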
Section 1602 Program funds may be committed to project owners that have not sold their LIHTC allocation to private investors, as long as the project owner has made good faith efforts to find an investor. However, some HFAs have required Section 1602 Program projects to include some tax credit equity from private investors. Section 1602 Program funds are subject to the same requirements as the standard LIHTC program and, like TCAP funds, may be recaptured if a project does not comply with the requirements. HFAs may submit applications to Treasury for Section 1602 Program funds through 2010. The last day for HFAs to commit funds to project owners is December 31, 2010, but they can continue to disburse funds for committed projects through December 31, 2011, provided that the project owners paid or incurred at least 30 percent of eligible project costs by the end of 2010. Congress appropriated "such sums as may be necessary" for the operation of the Section 1602 Program. The Joint Committee on Taxation originally estimated the budget impact of this program at $3 billion. As of the end of April 2010, however, Treasury had obligated more than $5 billion to HFAs in Section 1602 Program funds. Section 1602 Program funds are not considered by Treasury to be federal financial assistance and, therefore, the Section 1602 Program is not subject to many of the requirements placed on TCAP.

Administered by the Administration for Children and Families within the Department of Health and Human Services, the Foster Care Program helps states to provide safe and stable out-of-home care for children until the children are safely returned home, placed permanently with adoptive families, or placed in other planned arrangements for permanency. The Adoption Assistance Program provides funds to states to facilitate the timely placement of children, whose special needs or circumstances would otherwise make placement difficult, with adoptive families. Federal Title IV-E funds are paid to reimburse states for their maintenance payments using the states' respective Federal Medical Assistance Percentage (FMAP) rates. The Recovery Act temporarily increased the FMAP rate effective October 1, 2008, through December 31, 2010, resulting in an estimated additional $806 million that will be provided to states for the Adoption Assistance and Foster Care Programs.

Through a supplemental discretionary grants program administered by the Department of Transportation's Office of the Secretary, the Recovery Act provides $1.5 billion in competitive grants, generally between $20 million and $300 million, to state and local governments and transit agencies. These grants are for capital investments in surface transportation infrastructure projects that will have a significant impact on the nation, a metropolitan area, or a region. Projects eligible for funding provided under this program include, but are not limited to, highway or bridge projects, public transportation projects, passenger and freight rail transportation projects, and port infrastructure investments.

The Water and Environmental Programs, administered by the Department of Agriculture's Rural Development, provide loans, grants, and loan guarantees for drinking water, sanitary sewer, solid waste, and storm drainage facilities in rural areas and in cities and towns of 10,000 or less. The Recovery Act provided nearly $3.3 billion in Rural Water and Waste Disposal funding for these programs.
Loans, grants, and loan guarantees to rural water and waste systems will be used to construct, improve, rehabilitate, or expand existing water and waste disposal systems, including in areas initially excluded because service was not economically feasible.

The Environmental Protection Agency (EPA) awarded $39.3 million in Recovery Act funding for Water Quality Management Planning Grants to assist states in water quality management planning. Funds are used to determine the nature and extent of point and nonpoint source water pollution and to develop water quality management plans. Funded activities also include green infrastructure planning and integrated water resources planning. The grants are administered by EPA's Office of Water.

The Recovery Act appropriated $5 billion for the Weatherization Assistance Program, which the Department of Energy (DOE) is distributing to each of the states, the District of Columbia, and seven territories and Indian tribes, to be spent by March 31, 2012. The program, administered by the Office of Energy Efficiency and Renewable Energy within DOE, enables low-income families to reduce their utility bills by making long-term energy-efficiency improvements to their homes by, for example, installing insulation; sealing leaks; and modernizing heating equipment, air circulation fans, and air conditioning equipment. Over the past 33 years, the Weatherization Assistance Program has assisted more than 6.2 million low-income families. By reducing the energy bills of low-income families, the program allows these households to spend their money on other needs, according to DOE. The Recovery Act appropriation represents a significant increase for a program that has received about $225 million per year in recent years. DOE has approved the weatherization plans of the 16 states and the District of Columbia that are in our review and has provided at least half of the funds to those areas.

The Department of Agriculture's Forest Service administers Wildland Fire Management Program funding for projects on federal, state, and private land. The goals of these projects include ecosystem restoration, research, and rehabilitation; forest health and invasive species protection; and hazardous fuels reduction. The Recovery Act provided $500 million for the Wildland Fire Management program.

The Workforce Investment Act of 1998 (WIA) Youth, Adult, and Dislocated Worker Programs, administered by the Employment and Training Administration within the Department of Labor (Labor), provide job training and related services to unemployed and underemployed individuals. The Recovery Act provides an additional $2.95 billion in funding for Youth, Adult, and Dislocated Worker employment and training activities under Title I-B of WIA. These funds are allotted to states, which in turn allocate funds to local entities pursuant to formulas set out in WIA. The adult program provides training and related services to individuals ages 18 and older, the youth program provides training and related services to low-income youth ages 14 to 21, and dislocated worker funds provide training and related services to individuals who have been laid off or notified that they will be laid off.
Recovery Act funds can be used for all activities allowed under WIA, including core services, such as job search and placement assistance; intensive services, such as skill assessment and career counseling; and training services, including occupational skills training, on-the-job training, registered apprenticeship, and customized training. For the youth program, Labor encouraged states and local areas to use as much of these funds as possible to expand summer youth employment opportunities. In addition, Labor advised states that training for adults and dislocated workers should be a significant focus for Recovery Act funds, and encouraged states to establish policies to make supportive services and needs-related payments available for individuals who need these services to participate in job training. To facilitate increased training for high-demand occupations, the Recovery Act expanded the methods for providing training under WIA and allowed local workforce boards to directly enter into contracts with institutions of higher education and other training providers, if the local board determines that it would facilitate the training of multiple individuals and the contract does not limit customer choice.

Appendix IV: Entities Visited by GAO in Selected States and the District of Columbia

In total, GAO visited 167 entities, including 24 local government entities.

The following staff contributed to this report: Stanley Czerwinski, Denise Fantone, Susan Irving, and Yvonne Jones (Directors); Thomas James and Michelle Sager (Assistant Directors); Sandra Beattie (Analyst-in-Charge); and Marie Ahearn, David Alexander, Judith Ambrose, Peter Anderson, Thomas Beall, Noah Bleicher, Jessica Botsford, Anthony Bova, Richard Cambosos, Ralph Campbell Jr., Virginia Chanley, Tina Cheng, Andrew Ching, Marcus Corbin, Robert Cramer, Fran Davison, Michael Derr, Helen Desaulniers, Ruth "Eli" DeVan, Alexandra Dew, David Dornisch, Kevin Dooley, Abe Dymond, Holly Dye, Janet Eackloff, Lorraine Ettaro, James Fuquay, Alice Feldesman, Alexander Galuten, Ellen Grady, Anita Hamilton, Geoffrey Hamilton, Tracy Harris, Kristine Hassinger, Lauren Heft, David Hooper, Bert Japikse, Mitchell Karpman, Karen Keegan, John Krump, Jon Kucskar, Hannah Laufe, Jean K. Lee, Natalie Maddox, Stephanie May, Sarah M. McGrath, John McGrail, Jean McSween, Donna Miller, Kevin Milne, Marc Molino, Mimi Nguyen, Ken Patton, Anthony Pordes, Brenda Rabinowitz, Carl Ramirez, James Rebbe, Beverly Ross, Sylvia Schatz, Sidney Schwartz, Don Springman, Andrew J. Stephens, Esther Toledo, Alyssa Weir, Crystal Wesco, Craig Winslow, Elizabeth Wood, William T. Woods, and Kimberly Young.

Staff who contributed to the Federal Medical Assistance Percentage (FMAP), State Energy Program (SEP), and Energy Efficiency and Conservation Block Grant (EECBG) sections included Jennifer Alpha, Heather Chartier, Swetha Doraiswamy, Andrew Finkel, John McGrail, Marc Molino, Roberto Piñero, Carl Ramirez, Barbara Roesmann, and Mathew Scire.

Recovery Act: Further Opportunities Exist to Strengthen Oversight of Broadband Stimulus Programs. GAO-10-823. Washington, D.C.: August 4, 2010.
Recovery Act: States Could Provide More Information on Education Programs to Enhance the Public's Understanding of Fund Use. GAO-10-807. Washington, D.C.: July 30, 2010.
Recovery Act: Most DOE Cleanup Projects Appear to Be Meeting Cost and Schedule Targets, but Assessing Impact of Spending Remains a Challenge. GAO-10-784. Washington, D.C.: July 29, 2010.
Recovery Act: Contracting Approaches and Oversight Used by Selected Federal Agencies and States. GAO-10-809. Washington, D.C.: July 15, 2010.
GAO Review of LEA Controls over and Uses of Recovery Act Education Funds (Avery County Schools). GAO-10-746R. Washington, D.C.: July 9, 2010.
GAO Review of LEA Controls over and Uses of Recovery Act Education Funds (Winston-Salem/Forsyth County Schools). GAO-10-747R. Washington, D.C.: July 9, 2010.
Independent Oversight of Recovery Act Funding for Mississippi's Weatherization Assistance Program. GAO-10-796R. Washington, D.C.: June 30, 2010.
High Speed Rail: Learning From Service Start-ups, Prospects for Increased Industry Investment, and Federal Oversight Plans. GAO-10-625. Washington, D.C.: June 17, 2010.
Federal Energy Management: GSA's Recovery Act Program Is on Track, but Opportunities Exist to Improve Transparency, Performance Criteria, and Risk Management. GAO-10-630. Washington, D.C.: June 16, 2010.
GAO Proactive Testing of ARRA Tax Credits for COBRA Premium Payments. GAO-10-804R. Washington, D.C.: June 14, 2010.
Temporary Assistance for Needy Families: Implications of Recent Legislative and Economic Changes for State Programs and Work Participation Rates. GAO-10-525. Washington, D.C.: May 28, 2010.
Recovery Act: Increasing the Public's Understanding of What Funds Are Being Spent on and What Outcomes Are Expected. GAO-10-581. Washington, D.C.: May 27, 2010.
Recovery Act: Clean Water Projects Are Underway, but Procedures May Not Be in Place to Ensure Adequate Oversight. GAO-10-761T. Washington, D.C.: May 26, 2010.
Recovery Act: States' and Localities' Uses of Funds and Actions Needed to Address Implementation Challenges and Bolster Accountability. GAO-10-604. Washington, D.C.: May 26, 2010.
Recovery Act: States' and Localities' Uses of Funds and Actions Needed to Address Implementation Challenges and Bolster Accountability (Appendixes). GAO-10-605SP. Washington, D.C.: May 26, 2010.
Head Start: Undercover Testing Finds Fraud and Abuse at Selected Head Start Centers. GAO-10-733T. Washington, D.C.: May 18, 2010.
Health Coverage Tax Credit: Participation and Administrative Costs. GAO-10-521R. Washington, D.C.: April 30, 2010.
2010 Census: Plans for Census Coverage Measurement Are on Track, but Additional Steps Will Improve Its Usefulness. GAO-10-324. Washington, D.C.: April 23, 2010.
Energy Star Program: Covert Testing Shows the Energy Star Program Certification Process Is Vulnerable to Fraud and Abuse. GAO-10-470. Washington, D.C.: March 5, 2010.
Recovery Act: California's Use of Funds and Efforts to Ensure Accountability. GAO-10-467T. Washington, D.C.: March 5, 2010.
Recovery Act: Factors Affecting the Department of Energy's Program Implementation. GAO-10-497T. Washington, D.C.: March 4, 2010.
Recovery Act: One Year Later, States' and Localities' Uses of Funds and Opportunities to Strengthen Accountability. GAO-10-437. Washington, D.C.: March 3, 2010.
State and Local Governments' Fiscal Outlook March 2010 Update. GAO-10-358. Washington, D.C.: March 2, 2010.
Recovery Act: Officials' Views Vary on Impacts of Davis-Bacon Act Prevailing Wage Provision. GAO-10-421. Washington, D.C.: February 24, 2010.
Electronic Personal Health Information Exchange: Health Care Entities' Reported Disclosure Practices and Effects on Quality of Care. GAO-10-361. Washington, D.C.: February 17, 2010.
Recovery Act: Project Selection and Starts Are Influenced by Certain Federal Requirements and Other Factors. GAO-10-383. Washington, D.C.: February 10, 2010.
Recovery Act: IRS Quickly Implemented Tax Provisions, but Reporting and Enforcement Improvements Are Needed. GAO-10-349. Washington, D.C.: February 10, 2010.
Status of the Small Business Administration's Implementation of Administrative Provisions in the American Recovery and Reinvestment Act of 2009. GAO-10-298R. Washington, D.C.: January 19, 2010.
Recovery Act: States' Use of Highway and Transit Funds and Efforts to Meet the Act's Requirements. GAO-10-312T. Washington, D.C.: December 10, 2009.
Recovery Act: Status of States' and Localities' Use of Funds and Efforts to Ensure Accountability. GAO-10-231. Washington, D.C.: December 10, 2009.
Recovery Act: Status of States' and Localities' Use of Funds and Efforts to Ensure Accountability (Appendixes). GAO-10-232SP. Washington, D.C.: December 10, 2009.
Recovery Act: Planned Efforts and Challenges in Evaluating Compliance with Maintenance of Effort and Similar Provisions. GAO-10-247. Washington, D.C.: November 30, 2009.
Recovery Act: Contract Oversight Activities of the Recovery Accountability and Transparency Board and Observations on Contract Spending in Selected States. GAO-10-216R. Washington, D.C.: November 30, 2009.
Recovery Act: Recipient Reported Jobs Data Provide Some Insight into Use of Recovery Act Funding, but Data Quality and Reporting Issues Need Attention. GAO-10-223. Washington, D.C.: November 19, 2009.
Recovery Act: Recipient Reported Jobs Data Provide Some Insight into Use of Recovery Act Funding, but Data Quality and Reporting Issues Need Attention. GAO-10-224T. Washington, D.C.: November 19, 2009.
Recovery Act: Agencies Are Addressing Broadband Program Challenges, but Actions Are Needed to Improve Implementation. GAO-10-80. Washington, D.C.: November 16, 2009.
Recovery Act: Preliminary Observations on the Implementation of Broadband Programs. GAO-10-192T. Washington, D.C.: October 27, 2009.
First-Time Homebuyer Tax Credit: Taxpayers' Use of the Credit and Implementation and Compliance Challenges. GAO-10-166T. Washington, D.C.: October 22, 2009.
Federal Energy Management: Agencies Are Taking Steps to Meet High-Performance Federal Building Requirements, but Face Challenges. GAO-10-22. Washington, D.C.: October 30, 2009.
High Speed Passenger Rail: Developing Viable High Speed Rail Projects under the Recovery Act and Beyond. GAO-10-162T. Washington, D.C.: October 14, 2009.
Tax Administration: Opportunities Exist for IRS to Enhance Taxpayer Service and Enforcement for the 2010 Filing Season. GAO-09-1026. Washington, D.C.: September 23, 2009.
Recovery Act: Funds Continue to Provide Fiscal Relief to States and Localities, While Accountability and Reporting Challenges Need to Be Fully Addressed. GAO-09-1016. Washington, D.C.: September 23, 2009.
Recovery Act: Funds Continue to Provide Fiscal Relief to States and Localities, While Accountability and Reporting Challenges Need to Be Fully Addressed (Appendixes). GAO-09-1017SP. Washington, D.C.: September 23, 2009.
Recovery Act: States' and Localities' Current and Planned Uses of Funds While Facing Fiscal Stresses. GAO-09-908T. Washington, D.C.: September 10, 2009.
Recovery Act: States' Use of Highway Infrastructure Funds and Compliance with the Act's Requirements. GAO-09-926T. Washington, D.C.: July 31, 2009.
Unemployment Insurance Measures Included in the American Recovery and Reinvestment Act of 2009, as of July 2009. GAO-09-942R. Washington, D.C.: July 27, 2009.
Grants Management: Grants.gov Has Systematic Weaknesses That Require Attention. GAO-09-589. Washington, D.C.: July 15, 2009.
Recovery Act: States' and Localities' Current and Planned Uses of Funds While Facing Fiscal Stresses. GAO-09-829. Washington, D.C.: July 8, 2009.
Recovery Act: States' and Localities' Current and Planned Uses of Funds While Facing Fiscal Stresses. GAO-09-831T. Washington, D.C.: July 8, 2009.
Recovery Act: States' and Localities' Current and Planned Uses of Funds While Facing Fiscal Stresses (Appendixes). GAO-09-830SP. Washington, D.C.: July 8, 2009.
Recovery Act: The Department of Transportation Followed Key Federal Requirements in Developing Selection Criteria for Its Supplemental Discretionary Grants Program. GAO-09-785R. Washington, D.C.: June 30, 2009.
High Speed Passenger Rail: Effectively Using Recovery Act Funds for High Speed Rail Projects. GAO-09-786T. Washington, D.C.: June 23, 2009.
Recovery Act: GAO's Efforts to Work with the Accountability Community to Help Ensure Effective and Efficient Oversight. GAO-09-672T. Washington, D.C.: May 5, 2009.
Recovery Act: Consistent Policies Needed to Ensure Equal Consideration of Grant Applications. GAO-09-590R. Washington, D.C.: April 29, 2009.
Recovery Act: Initial Results on States' Use of and Accountability for Transportation Funds. GAO-09-597T. Washington, D.C.: April 29, 2009.
Recovery Act: As Initial Implementation Unfolds in States and Localities, Continued Attention to Accountability Issues Is Essential. GAO-09-580. Washington, D.C.: April 23, 2009.
Recovery Act: As Initial Implementation Unfolds in States and Localities, Continued Attention to Accountability Issues Is Essential. GAO-09-631T. Washington, D.C.: April 23, 2009.
Small Business Administration's Implementation of Administrative Provisions in the American Recovery and Reinvestment Act. GAO-09-507R. Washington, D.C.: April 16, 2009.
American Recovery and Reinvestment Act: GAO's Role in Helping to Ensure Accountability and Transparency for Science Funding. GAO-09-515T. Washington, D.C.: March 19, 2009.
American Recovery and Reinvestment Act: GAO's Role in Helping to Ensure Accountability and Transparency. GAO-09-453T. Washington, D.C.: March 5, 2009.
Estimated Adjusted Medicaid Funding Allocations Related to the Proposed American Recovery and Reinvestment Act. GAO-09-371R. Washington, D.C.: February 5, 2009.
Estimated Temporary Medicaid Funding Allocations Related to Section 5001 of the American Recovery and Reinvestment Act. GAO-09-364R. Washington, D.C.: February 4, 2009.

This report responds to two ongoing GAO mandates under the American Recovery and Reinvestment Act of 2009 (Recovery Act). It is the latest in a series of reports on the uses of and accountability for Recovery Act funds in 16 selected states, certain localities in those jurisdictions, and the District of Columbia (District). These jurisdictions are estimated to receive about two-thirds of the intergovernmental assistance available through the Recovery Act. This report also responds to GAO's mandate to comment on the jobs estimated in recipient reports. GAO collected and analyzed documents and interviewed state and local officials and other Recovery Act award recipients. GAO also analyzed federal agency guidance and interviewed federal officials. As of September 3, 2010, about $154.8 billion of the approximately $282 billion of total funds made available by the Recovery Act in 2009 for programs administered by states and localities had been paid out by the federal government.
Of that amount, over 65 percent--$101.9 billion--had been paid out since the start of federal fiscal year 2010 on October 1, 2009.

As of July 31, 2010, the 16 states and the District had drawn down $43.9 billion in increased FMAP funds. If current spending patterns continue, GAO estimates that these states and the District will draw down $56.2 billion by December 31, 2010--about 95 percent of their initial estimated allocation. Most states reported that, without the increased FMAP funds, they could not have continued to support the substantial Medicaid enrollment growth they have experienced, most of which was attributable to children. Congress recently passed legislation to extend the increased FMAP through June 2011, although at lower rates than provided by the Recovery Act.

As of August 27, 2010, the District and states covered in GAO's review had drawn down 72 percent ($18.2 billion) of their awarded State Fiscal Stabilization Fund (SFSF) education stabilization funds; 46 percent ($3.0 billion) for Elementary and Secondary Education Act, Title I, Part A; and 45 percent ($3.4 billion) for Individuals with Disabilities Education Act, Part B. In the spring of 2010, GAO surveyed a nationally representative sample of local educational agencies (LEA) and found that job retention was the primary use of education Recovery Act funds in school year 2009-2010, with an estimated 87 percent of LEAs reporting that Recovery Act funds allowed them to retain or create jobs.

Nationwide, the Federal Highway Administration (FHWA) obligated $25.6 billion in Recovery Act funds for over 12,300 highway projects, and reimbursed $11.1 billion as of August 2, 2010. The Federal Transit Administration obligated $8.76 billion of Recovery Act funds for about 1,055 grants, and reimbursed $3.6 billion as of August 5, 2010. Highway funds were used primarily for pavement improvement projects, and public transportation funds were used primarily for upgrading transit facilities and improving bus fleets.

The Energy Efficiency and Conservation Block Grant (EECBG) program provides about $3.2 billion in grants to implement projects that improve energy efficiency; of this amount, approximately $2.8 billion has been allocated directly to recipients. As of August 2010, DOE had obligated about 99 percent of the $2.8 billion in direct formula grants to recipients, who, in turn, had obligated about half to subrecipients. The majority of EECBG funds have been obligated for three purposes: energy efficiency retrofits to existing facilities, financial incentive programs, and buildings and facilities.

As of August 7, 2010, housing agencies had obligated about 46 percent of the nearly $1 billion in Recovery Act Public Housing Capital Fund competitive grants allocated to them for projects such as installing energy-efficient heating and cooling systems in housing units. HUD officials anticipate that some housing agencies may not meet the September 2010 obligation deadline, resulting in those funds being recaptured. GAO believes HUD should continue to closely monitor agencies' progress in obligating remaining funds. As of July 31, 2010, HUD had outlayed about $733 million (32.6 percent) of TCAP funds and Treasury had outlayed about $1.4 billion (25.5 percent) of Section 1602 Program funds.
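The percentages reported above can be cross-checked with simple arithmetic. The short Python sketch below recomputes several of them from the rounded dollar figures in this summary and in the program descriptions; because the inputs are approximations, results will differ slightly from the published numbers, and the third figure is a derived comparison rather than one reported in the text.

```python
# Rough cross-check (rounded figures) of the outlay and payout percentages
# discussed above. TCAP obligations of $2.25 billion come from the program
# description earlier in this appendix.

def pct(part: float, whole: float) -> float:
    """Percentage of `whole` represented by `part`, rounded to one decimal."""
    return round(100.0 * part / whole, 1)

print(pct(733e6, 2.25e9))     # TCAP outlays: ~32.6 percent of obligated funds
print(pct(101.9e9, 154.8e9))  # payouts made since October 1, 2009: ~65.8 percent of the total paid out
print(pct(43.9e9, 56.2e9))    # derived: FMAP drawn down so far vs. the projected year-end total, ~78.1 percent
```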
GAO updates the status of agencies' efforts to implement GAO's 58 previous recommendations and makes 5 new recommendations to the Departments of Transportation (DOT) and Housing and Urban Development (HUD), the Department of the Treasury, and the Office of Management and Budget (OMB) to improve management and strengthen accountability.
Wildlife are valuable to society in many ways, providing a wide range of social, ecological, and economic benefits. For example, hunting and birdwatching are important as both recreational and income-generating activities. In 1996, according to the latest national survey by the Department of the Interior's Fish and Wildlife Service, 40 million U.S. adults (16 years old and older) went fishing and/or hunting and spent over $71.9 billion on related items. Their expenditures included fishing and hunting equipment, trips, licenses and fees, and books and magazines. In the same year, nearly 63 million adults enjoyed "nonconsumptive" activities such as observing, feeding, or photographing wildlife. These wildlife-watching participants spent over $29.2 billion on related items such as equipment, trips, and books and magazines. The total $101.2 billion spent by anglers, hunters, and wildlife-watchers does not include related economic multiplier effects, or ripple effects, on the American economy. Nor does it include the household income (salaries and wages) of jobs supported by wildlife-related activities or the state or federal tax revenues generated by such activities. For example, expenditures related to hunting, fishing, and wildlife-watching activities generated about $5.2 billion in state income and sales tax revenues in 1996, according to reports based on the 1996 Fish and Wildlife Service survey.

During the last decade, wildlife seem to have become an almost universal object for concern, a symbol for environmental issues, and a focus for resource management, according to a Cornell University extension publication. However, the publication also notes that actual encounters with wildlife are frequently viewed as a nuisance or are associated with damage and unwanted costs. For example, the coyote is one of the most successful and ubiquitous predators in the United States, and coyote predation on livestock is a serious problem for U.S. producers.

In the United States, wildlife are a publicly owned resource held in trust and managed by federal and state agencies. In general, the federal government manages threatened and endangered species and migratory birds, while the states manage big game and other mammals and birds. Wildlife Services is authorized by Congress to conduct activities relating to most wildlife damage situations. The primary statutory authority for the program is the Act of March 2, 1931, as amended (7 U.S.C. 426-426c; 46 Stat. 1468), which authorized the Secretary of Agriculture to conduct activities to control injurious animals. In addition, the program operates under the provisions of numerous other laws, including the 1918 Migratory Bird Treaty Act, as amended; the 1947 Federal Insecticide, Fungicide, and Rodenticide Act, as amended; the National Environmental Policy Act of 1969, as amended; and the Endangered Species Act of 1973, as amended.

The practice of managing wildlife is not new, nor is the control of predators. For centuries, control of mammalian predators has been practiced worldwide as a means of protecting livestock and enhancing game populations. The first recorded federal involvement in wildlife damage control in the United States occurred in 1885, when a federal agency sent questionnaires to farmers about crop damage caused by birds. By 1915, the Congress was appropriating funds for federal predator control operations directed at wolves and coyotes. In 1931, the Congress passed the Act of March 2, 1931, authorizing the control of injurious animals.
Since then, federal wildlife control activities have evolved along with demographic and societal changes. In the program's early years, for example, the emphasis was on conducting general eradication campaigns that might be directed at the entire statewide population of a particular species of predator. This operating philosophy, as we reported in 1990, contributed to decimating gray wolf populations in the continental United States. As public attitudes changed, the program's focus changed as well, and it now emphasizes killing only problem animals when necessary. Appendix II summarizes key events in the program's evolution.

Today the Wildlife Services program conducts operational and research activities. The operational activities are headed by the program's eastern and western regional offices (located in Raleigh, North Carolina, and in Lakewood, Colorado), which in turn oversee 37 state offices, some of which are responsible for program activities in more than one state. Operational activities consist of technical assistance (e.g., providing advice or loaning equipment to individuals who are encountering problems with wildlife) and direct assistance (e.g., diverting, removing, or killing injurious wildlife). Generally, Wildlife Services conducts its operational activities in response to requests for assistance. The program coordinates its operational activities with other entities, such as state departments of wildlife, local agricultural extension services, and private animal removal services.

The program's research activities are headed by the National Wildlife Research Center, located in Fort Collins, Colorado. The center has three research programs: product development research, bird research, and mammal research. Whereas most of the product development research is done at the center, most of the bird and mammal research is done at field stations across the country. To augment their staff of scientists and technicians, the research programs rely on undergraduate and graduate students, post-doctoral appointments, and volunteers.

Program funds for both operations and research are provided through congressional appropriations and through cooperative agreements with clients—organizations and individuals—that seek the program's assistance. Wildlife Services' clients include other federal agencies (e.g., the Department of the Interior's Fish and Wildlife Service and Bureau of Land Management, USDA's Forest Service, and the Department of Defense); state agencies (e.g., state wildlife divisions and departments of transportation); county agencies and city organizations (e.g., parks and recreation departments, zoos, and airports); Indian tribes; associations (e.g., the Farm Bureau, livestock associations, and wool growers' associations); animal advocacy and conservation groups; private businesses (e.g., hotels and stadiums); and individuals. Some Wildlife Services activities are completely funded by clients. In fiscal year 2000, the program spent about $80.6 million in funds: about $42.3 million in congressional appropriations and about $38.3 million in funds from clients. Of the total funding, research spent about $12.2 million, or 15 percent; operations spent about $59 million, or 73 percent; and program administration spent about $9.5 million, or 12 percent. See appendix III for information on operational expenditures by state, by funding source, and by type of work.

People exist as only one element in the natural world.
Increasingly, as wildlife habitat shrinks due to human population growth and activities, clashes occur between people and wildlife. These clashes take many forms. For example, mammals and birds can damage crops and forestry resources, deplete aquaculture stock, destroy livestock, and despoil property. Further, they pose threats to human health and safety through the spread of disease (e.g., rabies and West Nile virus); through direct attacks on humans; and through collisions with passenger cars, trucks, trains, and aircraft. The effects of injurious wildlife are not limited to rural populations; suburbanites are grappling with how best to deal with growing deer, geese, and beaver populations that damage property and pose threats to human health.

Although they generate substantial economic activity, wildlife of all kinds can cause damage, and that damage can be costly. Wildlife damage to U.S. agriculture alone (including crops and livestock) has been estimated at between $600 million and $1.6 billion annually, with over half of all farmers and ranchers experiencing some type of wildlife-related damage each year. Following are examples of how wildlife can affect agricultural resources, other wildlife, and property. Appendix IV provides, by state, examples of injurious wildlife, the kinds of resources they damage, and emerging concerns.

Birds, rodents, deer, and other mammals cause significant damage to a variety of crops such as corn, rice, sunflowers, and lettuce, as well as berries and other fruits and nuts. The estimated annual losses of corn attributed to wildlife exceed $90 million, and those of apples, blueberries, and grapes exceed, in aggregate, $40 million. Deer and bears also eat forestry seedlings and a wide variety of landscape and garden plants. Fish-eating birds (e.g., cormorants, herons, egrets, and pelicans) can cause severe damage at aquaculture farms, eating catfish, crawfish, salmon, bass, trout, and ornamental fish. According to a USDA National Agricultural Statistics Service (NASS) survey of catfish producers in 15 states, 69 percent of the producers reported some wildlife-caused losses, with losses to wildlife predation totaling $12.5 million in 1996.

In aggregate, coyotes, mountain lions, bears, and wolves kill thousands of lambs and calves each year. Livestock losses attributed to predators, predominantly coyotes, are about $71 million a year, according to the most recent NASS surveys. According to a Defenders of Wildlife representative, these livestock loss estimates are inflated because they are self-reported by livestock producers. The available evidence, however, suggests otherwise, according to a Wildlife Services study. This study noted that surveys of livestock producers tend to underreport losses, because reports emphasize confirmed kills. The study also noted that NASS survey data typically report lower losses than other national estimates. Table 1 shows the losses reported by NASS.

Various forms of damage compensation programs are in effect, at the state or private level, for selected areas and selected wildlife species. For example, several states provide some payment to individuals for damage done (e.g., to property or crops) or livestock killed by certain species (e.g., elk, wolves, eagles, grizzly bears, and mountain lions). Additionally, the Defenders of Wildlife has a compensation fund for damage caused by certain species (e.g., wolves and grizzly bears) in certain areas of the country.
Generally, the programs require confirmation by state or federal officials that the damage or loss was inflicted by one of the species covered by the program. According to Wildlife Services officials, for example, before an individual can receive compensation from Defenders of Wildlife for damage caused by wolves, a Wildlife Services official must verify that a wolf caused the damage.

Coyotes are the major predator responsible for livestock losses. Of lamb losses to predators in 1999, for example, 64.3 percent were attributed to coyotes. Wildlife Services personnel showed us how, by examining the damage to a lamb carcass (e.g., a broken or missing trachea, the pattern of blood clotting, and other indicators), they can often identify the species that killed the lamb. Figure 1 illustrates the damage that coyotes can do.

Threatened and endangered species are sometimes at risk as well. Ravens kill desert tortoises; feral hogs prey on several species of endangered plants, tree snails, and forest birds; and Arctic fox prey on protected Aleutian Canada geese. In Guam, the brown tree snake (a non-native species accidentally introduced to the island by humans) has eliminated 9 of the island's 12 species of forest birds and most of the terrestrial vertebrates, killed many pets, and bitten many children.

Beavers gnaw down trees, build dams, and plug up culverts, causing flood damage to timber, roads, bridges, and other property. The monetary value of beaver-related damage is also significant. In North Carolina alone, the beaver damage management program prevented the loss of, or damage to, $8.5 million in property during fiscal year 2000, according to a program report.

In addition to their physical and economic impacts, wildlife can also threaten human health and safety. Wildlife can harbor diseases that can spread to livestock, pets, and people. Wildlife can also directly attack people, causing injuries or death. Further, wildlife—particularly deer—are sometimes the cause of automobile accidents. Collisions between aircraft and birds are of particular concern, because such accidents can result in serious and costly damage and, in some cases, injuries or death to pilots or passengers.

Birds and mammals sometimes harbor diseases—such as rabies, bubonic plague, Lyme disease, bovine tuberculosis, and West Nile virus—that can be passed along to people through direct or indirect contact. (Such diseases are referred to as zoonotic diseases.) For rabies, the areas at greatest risk are southern Texas (coyote and dog rabies), central Texas (gray fox rabies), and the northeastern United States and Ohio (raccoon rabies). West Nile virus, first documented in the United States in 1999, is now present in the District of Columbia and more than 20 states (primarily in the East and the South) and has been responsible for the deaths of at least 10 people. Birds serve as a host for the virus, which is transmitted to humans and animals through mosquito bites. Excrement poses health risks as well. The excrement of gulls or other birds that nest on rooftops can enter ventilation systems, posing the risk of histoplasmosis (a respiratory disease) to workers who breathe the contaminated air. Similarly, especially in the western states, exposure to rodent excrement poses the risk of hantavirus (a potentially deadly lung disease). The costs associated with these diseases can be substantial.
For example, the increased incidence of coyote, raccoon, and fox rabies has resulted in estimated costs of over $450 million annually for additional health care, education, vaccination, and animal control. Safety concerns are also an issue. With their populations expanding and habitats shrinking, wildlife are more likely to come into contact with humans. An attack by wildlife can result in a person’s injury or even death. In August 2001, for example, a black bear broke into a home in a mountain village in New Mexico and killed a 93-year-old woman; in 1997, a mountain lion attacked and killed a 10-year-old child in Colorado’s Rocky Mountain National Park. Bites or attacks from wildlife cause few fatalities, but many injuries. While the number of fatalities from rodent (e.g., mice and rats) bites is unknown, rodents cause about 27,000 injuries each year. Table 2 shows the estimated number of human injuries and fatalities that result each year in the United States from wildlife bites or attacks. In other instances, wildlife have collided with automobiles, trains, and planes. Each year more than a million deer-automobile collisions occur in the United States, resulting in over $1 billion in damage to vehicles, 29,000 human injuries, and 200 human fatalities. Aircraft collisions with wildlife are of particular concern, given their safety and economic consequences. In calendar year 2000, about 6,000 aircraft collisions involving wildlife, primarily birds, were reported in the United States. From 1990 through 2000, wildlife-aircraft strikes resulted in the deaths of about 140 people and the destruction of about 115 aircraft worldwide. The economic toll has been heavy as well. Wildlife-aircraft strikes cost the aviation industry more than $1 billion a year worldwide, with costs to U.S. civil aviation (commercial and private aircraft) estimated at nearly $400 million a year. For U.S. civil aviation, wildlife strikes have also resulted in nearly 500,000 hours of aircraft downtime each year. Effects on military operations are estimated at $30 million a year. A single large bird, such as a goose, can cause serious damage to an aircraft. The average aircraft is designed to withstand a direct hit from a bird weighing up to 4 pounds, whereas a Canada goose typically weighs 8 to 15 pounds. In September 1995, the U.S. Air Force lost 24 airmen and a $190 million AWACS aircraft in a strike involving Canada geese. Although most strikes take place during takeoff or landing, some occur en route. Pilots have reported strikes occurring as high as 30,000 feet. Gulls (which weigh about 2 pounds) are a particular hazard, making up nearly one-third of the reported strikes that identified the type of wildlife struck. In the Great Lakes region alone, the ring-billed gull population has increased about 20-fold over the past 40 years, according to a Wildlife Services bird research official. In an August 2000 incident, a Boeing 747 airplane engine ingested at least one Western gull just after takeoff from the Los Angeles International Airport. The pilot had to dump 83 tons of fuel over the ocean before making an emergency landing. The plane was out of service for 72 hours; the repair cost was $400,000. Figure 2 illustrates the kind of damage a single bird can cause. Even small birds can be damaging. Starlings, for example, which weigh only about 3 ounces, are referred to as “feathered bullets” because their mass is so great for their size, according to a Wildlife Services official. 
Starlings are especially dangerous, the official said, because they often travel in dense flocks of many thousands.

All wildlife-aircraft strike reports are entered into the Federal Aviation Administration's (FAA) National Wildlife Strike Database, which is managed by Wildlife Services. As of March 2001, the database contained about 34,000 strike reports for the period 1990 through 2000. However, the actual number of such strikes is probably considerably higher because only FAA-certified airports are required to report wildlife-aircraft strikes. While non-certified airports sometimes report such strikes, Wildlife Services estimates that the total number reported represents only about 20 percent of those that have occurred.

Certain unauthorized uses of airport land can increase the risk of birdstrikes. In a 1999 report, for example, we cited two instances of landfills that had been established on airport land without FAA's authorization. Landfills attract wildlife and thereby increase the risk of birdstrikes. In both of the examples we cited, the unauthorized land use had continued undetected or uncorrected for years. Citing weaknesses in FAA's compliance monitoring program, we recommended that FAA revise its compliance policy guidance to require regularly scheduled monitoring, including periodic on-site visits.

Although 97 percent of wildlife-aircraft strikes over a 10-year period involved birds, four-legged animals were also involved in some: 418 reported strikes were with deer; 71 were with coyotes; and another 73 strikes involved turtles, alligators, foxes, or woodchucks. See appendix V for excerpts from reports of wildlife-aircraft strikes.

To curb the damage done by wildlife, Wildlife Services conducts operational and research activities for the benefit of various public and private clients. Program operations and research activities are focused, in large part, on (1) protecting livestock; (2) protecting game animals, game birds, and threatened or endangered species; (3) protecting property and crops; (4) protecting the flying public; and (5) reducing and monitoring the spread of wildlife diseases. Wildlife Services' operational activities consist of technical assistance (e.g., providing information, advice, or equipment to property owners and others who are confronted with wildlife problems) and direct assistance (e.g., diverting, relocating, or killing wildlife). Wildlife Services responds to telephone inquiries from the public and has published booklets and pamphlets to help people deal with wildlife problems such as a bat in the attic, a skunk under a porch, or a bear in a hot tub. The program's research activities include both laboratory research and field experiments. Some research investigates particular species' behavior and biology; other research is aimed at improving controls, both lethal and nonlethal.

The type of assistance Wildlife Services provides to a client varies, depending on the situation, the location, and the species involved. In response to a request for assistance from a farmer or a rancher, for example, Wildlife Services officials will provide advice over the phone, mail information, or visit the site and assess the situation. As appropriate, officials will coordinate with other stakeholders, such as state wildlife departments, other federal agencies, or adjoining neighbors. After assessing the situation, officials may suggest the use of one or more controls, including fences, guard dogs, harassment, traps, or shooting.
Once a course of action has been agreed upon, it is documented in a cooperative agreement between Wildlife Services and the client. The cooperative agreement specifies the work that will be done, the methods that will be used, and the way costs will be shared. Cost-sharing arrangements vary by state, depending largely on the demand for program services and the availability and amount of cooperative funding. Cooperative funding is a critical component affecting program availability and delivery and is a key factor determining variability among Wildlife Services' state programs. For example, in some cases, a client pays half the cost of services received; in others, counties pay for part of a Wildlife Services employee's salary, and that employee serves those counties. In still other instances, an organization, such as a wool growers' association, collects fees from its members, who are then eligible to receive services for no additional charge. And in the case of threatened or endangered species, a government agency (e.g., the Fish and Wildlife Service) enters into an interagency agreement to fund control efforts to protect a certain species.

The type of action Wildlife Services takes varies. In some cases, harassment devices (such as noisemakers or bright lights) are effective in deterring the presence of injurious animals. Repellents are sometimes effective, as are devices such as gridwire (to discourage perching or nesting) and fencing. In other cases, relocation is the best option, particularly when a threatened or endangered species is causing damage. But even when suitable habitat is available, relocation is not always in an animal's best interest, as relocated animals are vulnerable in unfamiliar locations. They may fall prey to predators; they may be seen as interlopers and killed by other members of their own species; or their unfamiliarity with the new habitat and its food and water sources may result in severe stress or even death. In many cases, such as with bears, a relocated animal will immediately return to the area from which it was removed. And moving a bear is no simple task, officials explained. Not only is a bear large and heavy, it is also double-jointed and thus quite floppy. If harassment or relocation is not considered appropriate to the situation, depending on the species involved and the type and extent of damage, lethal means may be needed to halt the depredation or damage. In such cases, Wildlife Services officials strive to select the method that will kill the bird or mammal quickly, effectively, and humanely.

Working with state and local agencies, associations, and individuals, Wildlife Services conducts many wildlife control activities. Some of the program's major efforts include (1) protecting livestock from predation by coyotes and other species; (2) protecting game animals, game birds, and threatened or endangered species from predation by other wildlife; (3) protecting property and crops from damage by mammals and birds; (4) reducing the risk of aircraft striking wildlife around airport runways; and (5) reducing and monitoring the spread of wildlife diseases to livestock, pets, or humans. Both operations and research activities play a part in all of these efforts.

Protecting livestock is a major area of program emphasis. Here, the program's control activities are directed at selected animals or local populations in areas where damage has occurred.
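As a rough illustration of how a cooperative agreement, as described above, ties together the work to be done, the control methods to be used, and the cost-sharing arrangement, the Python sketch below models those elements. The field names, the example client, and the 50/50 split are hypothetical; this is not Wildlife Services' actual agreement format or terms.

```python
# Illustrative sketch of the elements a cooperative agreement is described as
# containing: the work to be done, the control methods, and the cost share.
# Field names and the 50/50 split are hypothetical, not the program's format.
from dataclasses import dataclass, field

@dataclass
class CooperativeAgreement:
    client: str
    work_description: str
    control_methods: list[str] = field(default_factory=list)
    client_cost_share: float = 0.50   # e.g., a client paying half the cost of services

    def split_costs(self, total_cost: float) -> tuple[float, float]:
        """Return (client share, program share) of an estimated total cost."""
        client_share = self.client_cost_share * total_cost
        return client_share, total_cost - client_share

agreement = CooperativeAgreement(
    client="Hypothetical sheep ranch",
    work_description="Deter coyote predation on lambs during lambing season",
    control_methods=["fencing advice", "harassment devices", "removal of problem animals"],
)
print(agreement.split_costs(10_000))  # (5000.0, 5000.0)
```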
When livestock producers find that the controls they have in place to deter predators have proven insufficient, they turn to Wildlife Services for assistance. Livestock producers generally use several nonlethal control methods, such as guard animals, exclusion fences, and scare devices. In 1999, for example, according to a recent survey by NASS, most sheep producers had in place one or more nonlethal control methods. The number and type of control methods in use varied from state to state, but certain methods were more widely used than others. For example, half or more of the producers surveyed in numerous states used fencing, “shed lambing” (confining pregnant ewes to a shed during birthing and for several days afterward), and/or night penning to protect their sheep. In three states, half or more of the producers reported using guard dogs, and in two states, a high percentage of producers (61 percent and 70 percent, respectively) reported using guard llamas. Although nonlethal methods sometimes suffice, in other instances they do not effectively deter predators or may only postpone predation. For example, shed lambing is often thought to be an effective way to keep predators (especially coyotes) from killing newborn lambs. This solution, while effective, is only temporary. Eventually, the young lambs must come out of the shed and when they do, they are at risk of predation. In the four states in which we reviewed Wildlife Services’ operations in 1995 (California, Nevada, Texas, and Wyoming), program personnel said they used lethal methods in essentially all instances to control livestock predators because livestock operators were already using nonlethal control methods but were still losing livestock. Further, nonlethal methods also pose problems. Guard dogs, for example, are helpful in protecting flocks, but they are expensive—not only to buy, but also to train and maintain, according to ranchers we interviewed. And scare devices, such as sirens or spotlights, are generally effective in deterring predators only for a limited time. Most predators—whether birds or mammals—will habituate to any scare device that follows a discernable pattern. Thus, although nonlethal methods have helped reduce losses, they have not brought them to levels that most clients believe are economically viable. For livestock producers who are already operating on a small profit margin, the addition of even a low percentage of losses could drive a business into deficit. Livestock producers we interviewed said they expect and can tolerate predation losses of 2 to 3 percent but could not continue to operate with sustained losses higher than that. For coyotes, Wildlife Services officials have found aerial shooting to be a most efficient and effective means of control. It is, though, one of the program’s most controversial activities. Funded through cooperative agreements with individual ranchers or livestock associations, Wildlife Services personnel carry out aerial shooting in the winter to kill coyotes in areas of several western states considered most vulnerable to livestock predation. Groups opposing this practice, such as Defenders of Wildlife and the Humane Society of the United States, view it as a reckless, indiscriminate killing campaign. According to representatives of these two groups, the aerial shooting program kills coyotes indiscriminately; it does not distinguish between coyotes that are known predators and those that have never preyed on livestock and might never do so. 
These representatives noted that they have no quarrel with the practice of killing coyotes or other predators that are known to have preyed on livestock, as long as killing is a last resort and is done in the most humane way possible. Wildlife Services officials, however, defend the aerial shooting program as a proven preventive method that is necessary to protect lambs. According to Wildlife Services officials, the program is conducted in areas in which predation routinely occurs and is timed to remove coyotes before or during their mating season. The intent is to reduce the number of coyotes that have pups to feed just as lambing season begins. The officials pointed out that the aerial shooting activities have been shown to be both effective and cost-efficient in preventing livestock losses, according to a 3-year study by Utah State University researchers.

Although Wildlife Services officials and farmers and ranchers we interviewed believe that the aerial shooting campaign is instrumental in preventing intolerable levels of livestock loss, representatives of Defenders of Wildlife and the Humane Society maintain that lethal control should never be a first resort; it should be used only after all nonlethal controls have been tried and found unsuccessful. Representatives of both groups expressed concern that Wildlife Services personnel in the field tend to rely on lethal methods as the first and primary means of control, without considering whether nonlethal controls might be effective in preventing or curbing damage. A major concern of both Defenders of Wildlife and the Humane Society, according to their representatives, is that livestock producers are not required to have nonlethal controls in place before requesting assistance from Wildlife Services. If livestock producers are unwilling to take reasonable nonlethal steps to prevent or control further damage, the representatives said, then those producers should not be allowed to avail themselves of Wildlife Services' assistance.

Program research has contributed much to the knowledge base about coyote ecology and behavior, adding to the effort to develop more effective nonlethal controls. For example, two recent studies sought to determine whether coyote packs containing a sterile alpha pair would kill fewer lambs than packs with a fertile alpha pair and whether sterile pairs in the wild would maintain pair-bonds and defend their territories, thereby excluding other coyotes. Study results showed that surgically sterilized coyotes were significantly less likely to prey on lambs than were coyotes with pups to feed and that they maintained their pair-bonds and territories. During a 2-year period, 9 sterile packs killed 4 lambs, while 14 packs with pups killed 33 lambs. Future research efforts will seek practical methods, other than surgery, to sterilize animals in the field.

Other research efforts include developing (1) new capture devices and restraint methods that minimize injury to captured animals; (2) new scare devices; and (3) advanced designs for live-capture cages, rather than gripping devices, to restrain predators. In addition, Wildlife Services researchers are looking at ways of using radio-activated conditioning collars (much like those used to train dogs) to modify predators' attack behavior. Researchers have developed a prototype animal-activated electronic device and system, currently being field-tested, that repels predators from livestock areas.
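For a sense of scale, the kill counts from the sterilization study cited above can be converted into per-pack rates. The short Python sketch below does that arithmetic; it is purely illustrative and uses only the figures reported in the text.

```python
# Illustrative comparison using the sterilization study figures cited above:
# over a 2-year period, 9 sterile coyote packs killed 4 lambs, while 14 packs
# with pups killed 33 lambs.

sterile_packs, sterile_kills = 9, 4
intact_packs, intact_kills = 14, 33

sterile_rate = sterile_kills / sterile_packs   # ~0.44 lambs killed per sterile pack
intact_rate = intact_kills / intact_packs      # ~2.36 lambs killed per pack with pups

print(round(sterile_rate, 2), round(intact_rate, 2))
print(round(intact_rate / sterile_rate, 1))    # packs with pups killed ~5.3 times as many lambs per pack
```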
In addition to developing new control methods, researchers also evaluate the effectiveness of nonlethal controls. For example, a study published in the fall 2000 Wildlife Society Bulletin evaluated the effectiveness of guard llamas in reducing coyote predation on domestic sheep. The study found that the llamas reduced coyote depredation on lambs during the first year of the study, but not during the second year. The authors concluded that predation may have to reach a threshold before guard llamas have a noticeable effect on losses. The study also found that producers with llamas strongly supported their use as guard animals for sheep. Based on sheep producers’ assessments, llamas appear to provide depredation protection similar to that provided by guard dogs. For example, llamas will chase coyotes and will “gather” the sheep and place themselves between the sheep and a coyote. Unlike dogs, however, llamas require little or no training or socialization period. Also, llamas pose little threat to humans, are relatively easy to handle (even without training), and may have a guarding tenure longer than 10 years, compared to an average of 2 years for guard dogs. Various game species and threatened and endangered species have also benefited from the program’s operations and research efforts. In rural areas, hunting-related revenue is sometimes critical to the local economy. Accordingly, a growing part of Wildlife Services’ activities involves the protection of game populations from predation by other wildlife. The protection of threatened and endangered species is important to ecosystems as well as individual animals and is often essential to the recovery of a species. As with game species, threatened and endangered species can benefit not only from program activities conducted specifically for their protection, but also from activities conducted for another species’ protection. Killing predators is often crucial to the survival of game species. According to a 2001 study, for example, culling of coyotes in various areas in Utah protected local populations of mule deer and pronghorn antelope fawns. When coyote predation management was implemented in one mule deer area, for example, fawn survival increased from 9 percent to 42 percent. As another example, in one population of sage grouse in Utah, annual adult mortality due to predation (primarily by non-native red fox) was 82 percent without fox control in place, but only 33 percent with fox control. Many threatened and endangered species have benefited from Wildlife Services’ operations and research. For example, for nearly a decade Wildlife Services has conducted a major effort to reduce the brown tree snake’s population on the Island of Guam and to prevent the snake’s introduction to other Pacific islands. Since it was accidentally introduced to Guam 50 years ago, the snake—which has no natural predators on the island—has eliminated 9 of the 12 species of the island’s forest birds and most of its terrestrial vertebrates. Program personnel conduct brown tree snake interdiction at Guam’s commercial and military exit ports. Since the program’s inception in 1993, Wildlife Services personnel have captured about 30,000 snakes near high-risk ports and have trained Jack Russell terriers to detect snakes in outgoing cargo shipments. Research has played a major role in the snake control effort.
After experimenting with various controls, program researchers devised an effective trap, added an alluring bait (mice), and found an effective poison—acetaminophen, which is deadly to the snake. Field tests indicated a zero-percent survival rate for snakes that ate the treated bait. Acetaminophen bait is currently used on a limited scale, under an Environmental Protection Agency (EPA) emergency use permit. Wildlife Services is pursuing a Section 3 EPA registration that would allow larger scale use of this technique on the island. Wildlife Services has also evaluated and registered methyl bromide as a cargo fumigant for use against snakes, has conducted field tests on two alternative fumigants, and is developing a delivery device for dermal toxicants that it found effective against snakes. Other threatened and endangered species have also benefited from program operations and research. In fiscal year 2000, the program actively protected 142 federal- and state-listed endangered and threatened species. For example, the program’s mongoose control in Puerto Rico has helped conserve the entire population of Puerto Rican parrots. In New Hampshire, killing ground hogs that forage on the wild lupine has helped protect the endangered Karner blue butterfly, whose reproductive cycle depends on the wild lupine. Various research efforts are related to threatened and endangered species. For example, researchers are working to develop more humane, nonlethal techniques for removing endangered wolves that are preying on livestock. These techniques include tranquilizer tab traps to reduce stress to captured animals and electronic collars to deter wolves from killing livestock. Wildlife Services conducts many activities to protect property and crops from mammals and birds. For example, a key program emphasis is eliminating beaver and their dams from areas in which they are causing damage. Particularly in the Southeast, but increasingly in other areas, beavers are responsible for millions of dollars in damage annually; in fact, the resulting dollar loss from beaver damage may be greater than that of any other wildlife species in the United States. Along with eliminating the dams, Wildlife Services personnel usually trap and eliminate the beavers as well. If the beavers are left in place, they will quickly build another dam, according to Wildlife Services biologists. And for beavers, as for other species whose populations are increasing rapidly, relocation is not often a viable option because there are not enough suitable habitats available. To control birds, Wildlife Services personnel often use harassment techniques, such as devices that emit bursts of light or loud noise, to scare birds away and discourage their roosting near fields or aquaculture farms or in urban areas. Wildlife Services research has shown that after several days of harassment birds are likely to seek an alternate roost. According to researchers at Wildlife Services’ bird research station in Ohio, recent experiments using lasers as harassment devices have shown encouraging results with certain species. Similarly, Wildlife Services’ use of low-level laser lights, in conjunction with pyrotechnic harassment techniques, has been very effective in controlling gulls and other birds that were interfering with the work of law enforcement personnel searching for evidence in the debris from the recent terrorist attack on the World Trade Center. 
The debris is being hauled to the Staten Island landfill, where it is being examined by personnel from the Federal Bureau of Investigation and the New York City Police Department. In some cases, such as to protect crops or livestock feed from consumption by birds or contamination by bird feces, Wildlife Services personnel poison birds. Program researchers have developed several effective poisons and have maintained their registrations with EPA or the Food and Drug Administration (FDA). And in still other cases, such as with Canada geese, which are protected by the Migratory Bird Treaty Act, program personnel oil or addle (shake) bird eggs to interfere with their hatching and thus discourage birds from nesting at that location. After several unsuccessful attempts at breeding in a particular location, birds will leave that location and seek another. A representative of the Humane Society said that, while the Society has no objection to egg oiling or addling, it strongly objects to Wildlife Services’ practice of rounding up and killing geese. Another key effort has been to reduce the risk of aircraft striking wildlife at airports. In 2000, Wildlife Services worked at over 418 airports—a 15-percent increase over the previous year. The airport operator (a city, county, or private company) pays 100 percent of the cost of Wildlife Services’ airport work. According to FAA regulations, a certified airport must conduct a wildlife hazard assessment if (1) an aircraft has experienced a multiple birdstrike or engine ingestion, (2) an aircraft has experienced a damaging collision with wildlife other than birds, or (3) wildlife of a size or in numbers capable of causing a strike have access to aircraft flight or movement areas. Usually, an airport hires a Wildlife Services biologist to do a wildlife hazard assessment, which is based on periodic observations of the numbers and types of wildlife on or near airport grounds and the challenges posed by the surrounding habitat. Working from the biologist’s report, an airport operator develops a wildlife hazard management plan. For example, a plan might call for using truck-mounted sirens to harass birds or for installing exclusion fences to deter coyotes or deer from wandering onto runways. In collaboration with FAA, Wildlife Services prepared a manual to aid airport personnel in developing, implementing, and evaluating wildlife hazard management plans. The manual, which FAA distributed to all certified airports in the country, includes information on the nature of wildlife strikes, wildlife management techniques, and sources of help and information. Research contributing to wildlife control at airports includes studies to determine whether birds and small mammals are more attracted to mowed or unmowed areas of vegetation. These studies found that birds were more numerous in unmowed plots. Also, the variety and abundance of small mammals were greater in unmowed plots and increased over time, while remaining constant in mowed plots. This finding is important because small mammals are a primary source of food for raptors, which pose a threat of aircraft collisions because of their large size and their habit of soaring.
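The FAA conditions described above amount to a simple decision rule: any one of the three circumstances requires a wildlife hazard assessment. The sketch below restates that logic for illustration only; the function and field names are ours and are not drawn from FAA regulations or any Wildlife Services system.

```python
from dataclasses import dataclass

@dataclass
class AirportWildlifeEvents:
    """Hypothetical record of the triggering conditions described above."""
    multiple_birdstrike_or_ingestion: bool  # aircraft experienced a multiple birdstrike or engine ingestion
    damaging_nonbird_collision: bool        # damaging collision with wildlife other than birds
    hazardous_wildlife_access: bool         # wildlife of hazardous size or numbers can reach flight or movement areas

def wildlife_hazard_assessment_required(events: AirportWildlifeEvents) -> bool:
    # Any one of the three conditions is sufficient to require an assessment.
    return (events.multiple_birdstrike_or_ingestion
            or events.damaging_nonbird_collision
            or events.hazardous_wildlife_access)

# Example: deer regularly reach the movement areas, so an assessment is required.
print(wildlife_hazard_assessment_required(AirportWildlifeEvents(False, False, True)))  # True
```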
Other research contributing to wildlife control at airports includes research on the use of mesh bags of coyote hair as a repellent for white-tailed deer, the use of amplified distress calls as a harassment technique for birds, and the use of wires installed at various heights to discourage birds from perching on top of signs and other structures near airport runways. Controlling wildlife-borne diseases is a large and growing area of emphasis. For several years Wildlife Services has worked in various parts of the country to control rabies. For example, in 1995 it began an oral vaccination campaign in south Texas to control a variant of rabies that had crossed over from domestic dogs to coyotes; in 1994, 166 cases were reported in south Texas. From 1995 to 2000, rabies campaigns—involving the delivery of an oral vaccine enclosed in a bait attractive to coyotes—resulted in the vaccination of between 75 and 90 percent of the coyotes in the area. In 2000, there were no reported cases of the canine variant in south Texas. The rabies program is continuing in 2001, but at reduced levels. In Ohio and the northeastern United States, another rabies control effort has been ongoing for several years. Raccoon rabies entered northeast Ohio in 1996; by the end of 1997, 62 cases had been reported. To halt the westward spread of raccoon rabies, Wildlife Services worked with federal and state agencies to create a vaccination immune barrier from Lake Erie to the Ohio River. Wildlife Services researchers assisted by developing the vaccine and its delivery packet. (The vaccine, encased in a small plastic pouch about the size of a fast-food ketchup pouch, is in turn encased in a bait cube made of fish meal.) Twice a year, in the spring and in the fall, Wildlife Services personnel drop the baits from a small plane equipped with a conveyer-belt-like mechanism that flings out baits at a rate of about 75 per square kilometer. In fiscal year 2000, the program baited an area in eastern Ohio covering about 2,500 square miles. For several weeks following the bait drops, Wildlife Services biologists trap raccoons for examination and subsequent release. The biologists examine each raccoon, take a blood sample to test for rabies antibodies, and pull a tooth (the first pre-molar) for tests to determine how much vaccine the raccoon ingested and when. The Ohio Wildlife Services office maintains a database on the number and health of raccoons trapped and examined. In fiscal year 2000, for example, the Ohio program trapped and examined over 450 raccoons. In addition to their rabies vaccination-related activities, Wildlife Services employees provide technical assistance. In 2000, for example, Wildlife Services biologists in Ohio responded to questions about raccoons from more than 700 people and assisted with educational and training seminars for local health departments. Figure 3 shows raccoons undergoing procedures in the rabies vaccine program. Other wildlife-borne diseases are also of concern. For example, surveillance programs for West Nile virus are active on the East Coast, and the virus appears to be spreading southward and westward. In 2000, Delaware reported that four horses had tested positive for West Nile virus. In 2001 (through October 15), Florida reported that 139 horses had tested positive for the virus. Wildlife Services personnel also assist in surveillance and control activities for wildlife-borne diseases such as hantavirus, bubonic plague, histoplasmosis, and salmonella.
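The baiting rate and treated area reported above for the Ohio effort imply a rough total number of baits distributed per campaign. The back-of-the-envelope estimate below is ours, using the standard square-mile-to-square-kilometer conversion; it is not a count reported by the program.

```python
# Rough estimate of baits distributed in one eastern Ohio baiting campaign,
# using the figures cited above (~75 baits per square kilometer over ~2,500 square miles).
baits_per_sq_km = 75
area_sq_miles = 2_500
SQ_KM_PER_SQ_MILE = 2.59  # standard conversion factor

area_sq_km = area_sq_miles * SQ_KM_PER_SQ_MILE
total_baits = baits_per_sq_km * area_sq_km
print(f"Approximately {total_baits:,.0f} baits per campaign")  # roughly 486,000
```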
We found no independent studies of Wildlife Services’ costs and benefits. The relatively few studies that have analyzed these issues were done by, or in collaboration with, Wildlife Services personnel. However, these studies were peer reviewed and adhered to standards governing their design and conduct. The most comprehensive assessment of costs and benefits, conducted as part of Wildlife Services’ 1994 program-wide final environmental impact statement, concluded that the existing program of lethal and nonlethal controls was preferable to the other four alternatives that were studied in detail because it was the most cost-effective, among other advantages. Other, more narrowly focused studies found that program benefits exceed costs, sometimes by large margins. However, there are several inherent difficulties associated with studies of this nature. For example, estimates of the economic benefits (savings) associated with program activities are based largely on predictions of the damage that would have occurred had the program’s control methods been absent. Such predictions are difficult to make with certainty and can vary considerably depending on the circumstances. A variety of organizations, including environmental and animal rights groups, have written about Wildlife Services’ activities and policies. However, we found no independent studies that rigorously assessed the costs and benefits of the Wildlife Services program; the only studies that we found were conducted by or in collaboration with Wildlife Services scientists and researchers. Nevertheless, these studies were peer reviewed and met other research standards required for publication in a professional journal. For example, to be eligible for publication in the Wildlife Society Bulletin, which has published several of the studies that assessed the costs and benefits of specific Wildlife Services activities, a study must be either peer refereed or peer reviewed. The referees and reviewers assess, among other things, whether a study has design or logic flaws that render its results invalid, biased, or questionable. Referees and reviewers recommend acceptance or rejection of a manuscript submitted for publication. Manuscripts requiring revision are returned to the author for revision and then reviewed again; sometimes a manuscript requires several iterations before a decision is made about its acceptance. Wildlife Services scientists and biologists publish their study results in other professional journals as well. In addition to following requirements that are prerequisites for publication, all Wildlife Services studies adhere to standards governing the design and conduct of the research studies themselves. Wildlife Services researchers follow the standards published by FDA and EPA. The degree to which research must adhere to the standards depends on its purpose. If research were related to the development of a new chemical product, for example, the full standards would apply. On the other hand, if the research were a field ecological study, not all of the standards’ requirements would apply. The standards include requirements governing, among other things, the protocol for and conduct of a study, the reporting of study results, the storage and retention of records, and the humane treatment of any animals used in the study. 
Of the Wildlife Services’ studies of program costs and benefits, the most comprehensive is its program-wide environmental impact statement (EIS), which was peer reviewed and issued for public comment prior to publication. An EIS assesses the biological, sociocultural, physical, and economic impacts of a federal action and alternatives to that action. The 1994 EIS concluded that, of the alternatives evaluated, the existing program was the most cost-effective, resulting in a favorable ratio of benefits to costs, and offered advantages such as economies of scale and nationwide accountability. The EIS was conducted to comply with requirements of the National Environmental Policy Act (NEPA), which created the Council on Environmental Quality. NEPA requires that federal agencies prepare an EIS for every major federal action that may significantly affect the quality of the human environment. The Council on Environmental Quality’s regulations implementing NEPA do not require a formal benefit-cost analysis to be conducted. However, they require that considerations important to a decision among alternatives be identified and analyzed so that the merits and drawbacks of the alternatives can be compared. Wildlife Services’ EIS, prepared by the Department of Agriculture’s Animal and Plant Health Inspection Service, addressed its ongoing program of wildlife damage management. Its intent was to analyze the impacts associated with the full range of wildlife damage control activities that comprise its program. In addition, the final EIS analyzed the impacts associated with several alternatives to the program. Originally, the potential impacts of three program alternatives were analyzed in detail; as a result of public comments, two additional alternatives were analyzed. The total cost of the 1994 EIS was about $3.5 million, according to a Wildlife Services official. Scoping for that EIS began in 1987, when a notice in the Federal Register sought public input on the issues and alternatives to be addressed. The final EIS, which contained summaries of and responses to the public comments received, was issued in April 1994 and revised in October 1997; it quantified benefit-cost analyses where reliable data existed. According to the EIS, the total economic effects of wildlife damage control are composed of direct and indirect effects—on individuals who sustain damage and on the public. Direct economic effects are those effects that are caused by the action and that occur at the same time and place as the action. For the current damage control program, for example, a direct economic effect on individual farmers or ranchers would be the savings realized from a reduction in livestock losses. For the public, the current program could result in direct effects such as the savings realized and the potential losses of life avoided by improving airport safety through the removal of wildlife from airport runways or flight paths. Indirect effects, on the other hand, are those effects caused by actions occurring later in time or removed in distance from the original action, but still reasonably foreseeable. For example, wildlife damage control on one farm could result in decreased livestock losses on a neighboring farm or ranch, thus benefiting additional farmers and ranchers. Further, by reducing livestock losses, controlling wildlife damage could benefit the public because it could result in lower market prices for agricultural products. 
The EIS evaluated five alternatives for controlling wildlife damage: (1) a no-action alternative, in which the current federal control program would not exist; (2) the existing program alternative, consisting of technical assistance, nonlethal controls, and lethal controls; (3) a nonlethal controls alternative, in which the program would employ only nonlethal methods; (4) a nonlethal-before-lethal controls alternative, in which the program would use lethal controls only as a last resort, after nonlethal controls had proven unsuccessful; and (5) a damage compensation program alternative, in which the program would compensate property owners monetarily for the losses they incur. The EIS assessed the cost-effectiveness of each of the program alternatives and analyzed the various economic impacts that each alternative would likely produce. Specifically, the EIS analyzed, for each of the alternatives, its direct and indirect economic impacts on affected parties and its direct and indirect economic impacts on the public. Direct impacts on affected parties. This analysis considered the impact, in terms of losses, of wildlife damage on affected parties (e.g., farmers and ranchers). The EIS concluded that the no-action alternative would offer parties at risk the least protection from direct losses, assuming that the current program would not be replaced by other federal, state, or local programs. Under this alternative, where wildlife threatens human health and safety, the affected parties would bear all potential losses, including property damages and insurance and health care costs. The existing control program, offering the widest range of choices in the application of technical assistance and direct assistance methods, could be expected to most efficiently minimize losses and risks. Two other alternatives (a nonlethal control program and a nonlethal-before-lethal control program), restricted by the methods permitted and their order of application, would likely result in higher losses. And finally, the damage compensation program alternative would partially offset agricultural losses, but unverified losses would still be borne by the affected parties and could become significant without a damage control program. Moreover, this alternative would provide monetary compensation only for agricultural damage; in regard to other threats posed by wildlife, such as risks to human health and safety, the damage compensation alternative would be the same as a no-action alternative. The EIS also considered the direct economic effects of damage control expenditures, and concluded that the alternatives compared similarly. Indirect impacts on affected parties. This analysis considered the losses and risks that would be borne by third parties. For example, program activities that prevent the spread of disease by rodents and other wildlife could have a positive effect on the costs of health insurance, even though the individuals paying the lower insurance premiums may never suffer direct losses. The EIS concluded that such indirect impacts could be positive or negative, depending on the alternative considered. For example, a lethal predator damage control program (one option under the current program) on one rancher’s property could reduce the likelihood of losses by neighbors, whereas a nonlethal control program might increase that likelihood. For many agricultural producers, the analysis noted, assistance with wildlife damage control can mean the difference between remaining in or going out of business. 
Producers might not be able to absorb either increased losses from wildlife damage or added costs of control to prevent those losses. Either or both of these outcomes could result under a no-action alternative, a nonlethal program alternative, or a nonlethal-before-lethal alternative. The continued operation of such producers contributes to the economies of their local communities. Local businesses, therefore, are indirect beneficiaries of damage control activities. For the damage compensation alternative, the affected parties would be on their own in controlling animal damage; the federal role would be one of compensation rather than control. Direct public impacts. These impacts mainly take the form of program expenditures. The EIS concluded that the current program alternative was likely to be the least costly to the public (with the possible exception of the no-action alternative), whereas the nonlethal and the nonlethal-before-lethal alternatives would be more costly, because their damage control activities would likely take longer and have lower success rates. At the other extreme, the damage compensation alternative was judged to be “prohibitively expensive,” with budgeted funds, in effect, determining expenditure levels. In addition to funds for compensation, the administrative costs of verifying losses and processing claims would be considerable. The no-action alternative would not have an impact at the national level unless damage control were undertaken through other federal programs. If state and local governmental entities were to assume animal damage control responsibilities in the absence of a federal program, though, the costs to the public could be collectively comparable to or even greater than the costs of the current program. Indirect public impacts. These impacts were considered in terms of each alternative’s effects on other governmental costs. If, for example, a public airport were held liable by passengers for injuries resulting from an aircraft collision with birds, the cost of compensation would be an indirect effect. Such incidents could be expected to occur most frequently if no governmental wildlife controls were undertaken (i.e., under the no-action alternative or the damage compensation alternative) and least frequently under the current program alternative. For the other two alternatives, nonlethal only and nonlethal-before-lethal, clients’ satisfaction would determine the ultimate impact. That is, if farmers and ranchers were dissatisfied with the approaches used under a federal program, they might demand more appropriate approaches by state, local, or other federal agencies, thus increasing the costs of other government entities. The EIS concluded that, in terms of both avoided losses (benefits) and damage control expenditures (costs), the existing damage control program was the most cost-efficient of the alternatives. The existing program offers several benefits, such as standardizing approaches to wildlife damage management and conducting and disseminating research leading to improvements in wildlife damage management. Many of these advantages could be lost through a no-action alternative. A damage compensation alternative would provide some financial relief to producers for losses due to wildlife predation, but would neglect nonmarket considerations such as the health and safety of airline passengers.
A nonlethal-only alternative could result in clients going out of business, as many types of damage could not be successfully addressed, and this would increase the costs to clients who would need to assume their own lethal control activities. A nonlethal-before-lethal program would be more time-consuming and costly to both the program and its clients. Based on its analyses, the EIS concluded that the existing program alternative offered a favorable ratio of benefits to costs, even though the benefits and costs could not be rigorously quantified. Wildlife Services studies other than the EIS have also shown that the benefits of wildlife damage control exceed its costs. These studies primarily address specific aspects of the program, often in specific areas of the country. For example, several studies concluded that the estimated benefit-to-cost ratios for livestock protection from predators (predominantly coyotes) range from 3:1 to 27:1, depending primarily on the types of costs considered. Comparing the market value of all livestock saved in 1998 with the cost of all livestock protection programs in place yielded a benefit:cost ratio of 3 to 1, according to a 2001 Wildlife Services study. In contrast, comparing total savings (including a measure that shows the potential ripple effect of predator losses on rural economies) with federal program expenditures alone would yield a benefit:cost ratio of 27:1. Studies use several measures of program costs and benefits. Estimates of the cost of livestock losses to predators, for example, vary widely, depending on whether one considers only the value of confirmed losses (market value of dead animals found, with predation confirmed by forensic examination) or also the additional costs incurred by livestock producers to reduce predation risk (e.g., the purchase, training, and maintenance of guard animals; fencing; herders; repellent devices; and contributions to private or public predation management programs such as aerial shooting campaigns). These additional costs are significant, and can equal or exceed the cost of predation. The studies discuss various benefits of managing predation. In addition to preventing agricultural losses, predation management activities can provide other substantial benefits. For example, predation management is important for the protection of game animals when their populations are reduced in relation to available habitat. Also, predation management is essential for the successful restoration of threatened and endangered species. Some benefits of the program’s operations and research activities accrue to society at large, such as activities undertaken to reduce risks to public health and safety, and are cost effective as well, according to a recent economic study. For example, the benefits of controlling the spread of raccoon rabies greatly outweigh the costs. This study analyzed the benefits and costs associated with a hypothetical rabies barrier that would stretch from Lake Erie to the Gulf of Mexico. The barrier would be a combination of natural geographic features (the Appalachian Mountains) and oral vaccination zones. The goal of the barrier would be to prevent the raccoon rabies variant from moving west into broader geographic regions of the United States. The study compared the costs of establishing and maintaining this hypothetical barrier with the benefits (avoided costs) of not having to live with raccoon rabies west of its current distribution. 
The costs of establishing and maintaining an immune barrier include expenditures for baits, distribution of baits, and program evaluation. Benefits are viewed as all costs, including direct medical and nonmedical costs, that would be avoided as a result of the proposed oral rabies vaccination program. Such costs include the costs of public education regarding raccoon rabies, pre-exposure vaccinations and post-exposure treatments, increased compliance rates for dog and cat vaccinations, increased local animal control and surveillance activities, and increased laboratory staff and supplies. The study, based on four variations of an economic model, concluded that a large-scale oral rabies vaccination program should be economically feasible, given the program costs and the avoided costs. The total discounted program cost, over a 20-year period, would be about $95.7 million, and the net benefits (avoided costs minus program costs) of the four model variations would range from $109 million to $496 million, depending on the assumptions employed (i.e., the assumed rate at which the rabies variant would travel and whether animal vaccinations were included or excluded). To test the robustness of the model (i.e., how stable its estimates were in reaction to changes in the range of data used), the study’s economists used a sampling technique known as Monte Carlo, in which they generated a random data set based on specific probability distributions for the data (e.g., barrier area, bait density, bait cost, and aerial distribution cost). The data set was then used in the model, and the resulting variation in the model’s estimates was low, indicating that the model was stable, or robust. Accordingly, the study concluded that the net economic benefits, in terms of avoided costs due to the oral rabies vaccination program, would be substantial. The type of resource, or animal, protected affects the costs and benefits of damage control. Values for threatened or endangered species have been declared “incalculable.” Nevertheless, according to the 2001 study by Bodenchuk et al., such species’ minimum values can be estimated from the funds expended for their restoration. For example, black-footed ferret populations are severely affected by coyote predation, especially following restoration efforts. In studies of restoration success in South Dakota, 30-day survival rates for ferrets averaged 31 percent in the absence of predation management, but 67 percent with predation management in place. Based on an introduction of 50 ferrets, about 18 ferrets would be saved with predation management in place, resulting in a financial benefit of about $524,000. This benefit was calculated using an average individual value of $29,000 per ferret. The individual value was, in turn, calculated by dividing the total reintroduction expenditures in one year ($2,913,220) by the estimated number of individual ferrets in the wild (100). Because of the nature of cost-benefit studies in general, their results should be viewed with some caution. Inherent difficulties bedevil any attempt to quantify the costs and benefits of a program designed to prevent damage. Key among these difficulties are (1) projecting the degree of losses that would have occurred absent the program, (2) valuing those losses, and (3) valuing the program benefits. Moreover, in some instances, the relevancy of data available for quantifying the costs and benefits associated with Wildlife Services activities may be limited by the data’s age.
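The black-footed ferret benefit cited earlier in this section can be reproduced with a few lines of arithmetic. The sketch below simply recomputes the study's own figures; the rounding to about $29,000 per ferret and about $524,000 in total benefit follows the study.

```python
# Recomputing the black-footed ferret benefit estimate from the figures cited above.
reintroduction_cost = 2_913_220      # total reintroduction expenditures in one year, dollars
ferrets_in_wild = 100                # estimated number of individual ferrets in the wild
value_per_ferret = reintroduction_cost / ferrets_in_wild   # about $29,000

ferrets_released = 50
survival_with_management = 0.67      # 30-day survival rate with predation management
survival_without_management = 0.31   # 30-day survival rate without predation management

ferrets_saved = ferrets_released * (survival_with_management - survival_without_management)  # about 18
estimated_benefit = ferrets_saved * value_per_ferret
print(f"Ferrets saved: {ferrets_saved:.0f}; estimated benefit: ${estimated_benefit:,.0f}")  # about $524,000
```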
Predictions of the degree of loss that would have occurred had Wildlife Services’ control methods not been in place are difficult to make with any certainty and vary considerably depending on the circumstances. For example, few data exist on livestock losses in the absence of controls. Livestock producers generally have not one control, but a combination of several, in place, such as guard animals, fences, herders, and repellent devices. Yet livestock are taken by predators despite these controls. So the degree of loss that producers would have suffered had they not had controls in place can only be estimated. Predictions about the degree of loss are further complicated by the difficulty in distinguishing between the relative contributions of program activities versus other factors such as weather, disease, or natural fluctuations in predator and prey populations. For example, according to a September 1999 study on coyote depredation control, various interrelated factors influence coyote depredation rates on sheep, including coyote density (e.g., pack size, territory size, and number of coyotes per territory) and the abundance of alternative prey. According to the study, research has shown that coyote predation rates on sheep are closely related to the abundance of natural prey, such as rodents and jackrabbits. A 6-year study in Idaho, for example, showed that predation rates on sheep increased in proportion to changes in the abundance of jackrabbits. When the hare population collapsed, and the coyotes had no alternative food source, their depredation on sheep escalated dramatically. The uneven distribution of damage poses a particular difficulty in predicting losses. That is, although average losses to predators are small compared to overall losses from other causes, such as weather and disease, the damages are not evenly distributed over time or over area. A small proportion of producers absorb high losses, whereas the vast majority of producers sustain less serious economic damage. Thus, using a single average statistic to infer overall program effectiveness would not accurately reflect the distributional variations. For ranchers who are already operating on a small profit margin, additional losses of even a few percentage points could drive their businesses into deficit. The value of losses is difficult to estimate for several reasons. For example, the value of livestock changes with the daily fluctuations in market values. Further, the loss of a pregnant ewe is not simply the loss of that animal, but also the loss of the unborn lamb, as well as any future offspring. Inherent difficulties also exist in the valuation of wildlife. As species, wildlife have positive value for society, but the specific individuals that cause damage and thus, economic losses, have negative value for livestock producers and others who sustain damage. Although the intrinsic value of wildlife is difficult to quantify, the economic value of wildlife can be estimated from the dollar values that wildlife management agencies place on them, according to a 2001 Wildlife Services study. For many common game species, for example, state departments of fisheries and wildlife have established economic values based on estimates of the species’ contributions to the economy. These economic values serve as the basis for civil financial penalties assessed as mitigation for illegal poaching or wildlife kills that result from environmental contamination. 
For example, according to the study, the weighted average civil penalties assessed for illegally killing wildlife ranged from $26 for an upland game bird to $1,312 for a bighorn sheep. The penalty for taking a mule deer was $350; a pronghorn, $400. Estimates of the value of benefits (avoided costs) also cannot be made with certainty. Estimating the value of controlling wildlife at airports, for example, entails making assumptions about not only the number and severity of wildlife-aircraft collisions that would occur without the program in place, but also about the cost associated with repairs, medical treatment, and loss of human life. Some groups that take issue with Wildlife Services activities suggest that its programs are not cost-effective because the money spent on livestock protection exceeds the value of the losses to ranchers and others. However, Wildlife Services officials believe that it is misleading to focus only on the value of losses that occur with a control program in place and to disregard the value of the damage that is prevented by the program. They compared this type of analysis to having a fire department that costs $10 million a year to operate and keeps fire damage in a community down to $2 million a year in losses. Rather than saying that the department is not worth its cost because losses due to fire damage were only a fraction of the cost of operating the fire department, consideration should be given to what the losses would have been without a fire department. The age of the various cost-benefit studies, and the data upon which they were based, may pose yet another limitation, in terms of both relevance and scope. The environmental impact statement, for example, was based on data that are now over a decade old and may not reflect current conditions. For example, the EIS did not include analyses of the white-tailed deer and the resident Canada goose, both of which have become increasingly problematic. Some of the other, narrower studies we reviewed were also based on old data. Wildlife Services has requested funding to update the EIS. The EIS primarily used fiscal year 1988 data to provide a comparable baseline against which to evaluate each alternative. If funded, work on the supplemental EIS would likely start next year and would be expected to take about 3 years to complete. The supplemental EIS would incorporate information on new wildlife management techniques that have been introduced since the early 1990s. Program officials also plan to study and incorporate into the supplemental EIS information on aquaculture depredation issues and on overabundant animals such as white-tailed deer, resident Canada geese, and blackbirds. Wildlife Services researchers believe that considerable potential exists for developing more effective nonlethal controls of wildlife damage through the use of new and improved technologies. In light of the controversy surrounding lethal controls, Wildlife Services devotes most of its research efforts toward this end. Past efforts to develop effective and economical nonlethal controls, however, have met with limited success. Although Wildlife Services research has developed several nonlethal controls that are used on many farms and ranches, these controls have not limited livestock losses to the point where lethal controls are no longer needed. The National Wildlife Research Center conducts research and provides information on a range of methods for managing wildlife damage.
Considerable opportunity exists for developing more effective nonlethal means of controlling predators on farms and ranches—for example, through wildlife contraceptives or through the use of scare devices triggered by motion sensors. In fiscal year 2000, about $9 million (75 percent) of the program’s total research funding was spent on efforts related to developing or improving nonlethal controls. A National Wildlife Research Center program manager noted that scientists feel considerable pressure to research and quickly develop nonlethal control methods. The manager noted that the pressure comes not only from animal advocacy groups and personal preferences, but also from a changing environment where experts in the field see the loss or diminishing acceptance of traditional control tools like guns, traps, and poisons. Nevertheless, funding levels have remained static for the past several years, hampering the center’s ability to conduct additional research projects. The center generally has about 19 projects underway in such areas as wildlife contraceptives, wildlife repellents, rodent control methods, and analytical chemistry methodology. Most of the projects are multi-year efforts of 3 to 5 years’ duration. According to the center’s product development manager, research projects dealing with reproductive controls are particularly promising. A goal of these projects is to develop and field-test economical and effective agents to control fertility in populations of mammals and birds involved in human-wildlife conflicts. Researchers are also seeking ways to improve the delivery of contraceptives to wildlife, through, for example, darts or bio-bullets. Some species, however, such as deer, live for a dozen or more years. Using contraceptives to address the problem now that the deer population has surged will mean a long delay before relief can be obtained. Consequently, a researcher stated that such species’ populations should probably first be “culled” and then treated with a reproductive inhibitor. Following are some examples of reproductive control projects recently completed or underway that may lead to the development of more effective nonlethal means of controlling predators on farms and ranches, as well as problem wildlife in urban areas: Researchers recently completed a 5-year study on reproductive intervention strategies for managing coyote predation. The goals of the study were to (1) determine whether sheep losses could be reduced by sterilizing coyotes in territories where sheep and other livestock are pastured and (2) develop and transfer information critical to the registration and/or practical application of sterilant technologies and pharmaceutical products. In addition to determining whether sterilized coyotes kill fewer sheep than do coyotes with pups to feed, researchers evaluated whether surgical sterilization changed the coyotes’ territorial or affiliate behaviors. After extensive field tests, researchers concluded that sterilization reduced, but did not eliminate, coyote predation on sheep. Center scientists are working to tailor an oral contraceptive, Nicarbazin, so that it can be given to geese. They are focusing on developing a more palatable bait to deliver the contraceptive, delaying release of the contraceptive into the bird’s body, and determining effective dose levels. Center scientists are working with a rodent immunocontraceptive, GnRH, which is a hormone vaccine. 
Officials in a California city have asked for help in controlling ground squirrels that are creating a problem on area beaches. Local laws prohibit poisoning or relocating nuisance animals such as the ground squirrels; consequently, officials are using the rodent immunocontraceptive to resolve the problem. Wildlife Services researchers received FDA approval to assist in a study trial of a single-shot delivery of PZP (porcine zona pellucida), an injection-delivered contraceptive, for use on deer. A large urban area in Ohio has requested assistance in controlling its deer population in city parks and has agreed to be the host site for the proposed PZP study trial. The research program also provides data pertaining to pesticide and drug registrations to EPA and FDA. For example, an application for new registration has been submitted to EPA for methiocarb, a bird repellent. To support the application, researchers submitted data required by EPA for future product registrations. In addition, other program specialists are engaged in projects involving the development of geographic information system (GIS) applications, statistical and monitoring methods, and electronics designs for use in wildlife damage management. Developing nonlethal control methods is a challenge that involves further research on such tools as chemical repellents and contraceptives. This challenge also involves biological and behavioral science research focusing on the differences both among species and within a single species. However, the nature of scientific research is such that while many research projects are undertaken, relatively few yield effective, marketable results. Moreover, research that looks promising at the outset often encounters problems that cannot be overcome easily. Such has been the case with nonlethal control research. Many nonlethal controls work well, but only in certain situations or locations, and some work only temporarily. According to researchers, certain chemicals show promise as nonlethal repellents. For example, the center developed methyl anthranilate—a chemical that smells like grape soda—which is repugnant to geese and is applied to ponds and grassy areas to repel geese from golf courses, airstrips, and public parks. Although the use of this chemical appears promising, it must be reapplied frequently to be effective. In other cases, chemical research that looked promising has not come to fruition. For example, in one research project, researchers laced lamb carcasses with lithium chloride, a chemical that causes coyotes to vomit. Researchers thought that this chemical showed promise in early laboratory and field tests as a means of conditioning coyotes not to kill lambs. However, while the coyotes in the field tests learned not to eat lambs, they continued to kill them. Another chemical that causes a predator to feel sick is Mesurol. This chemical has proven to be an effective deterrent to ravens, predators of bird eggs such as those of the endangered least tern. The center is working on other nonlethal chemical products such as alpha-chloralose. This chemical is an FDA-approved immobilizing agent that researchers are using to capture waterfowl so that identification bands or radio collars can be attached to the birds as part of research studies. Alpha-chloralose is also used to facilitate removal of nuisance animals such as ducks and geese that have found their way into swimming pools or city reservoirs.
The presence of geese in these areas is a serious potential health hazard because of bacteria found in goose fecal matter. In other instances, deterrence devices that appeared to be promising in the lab and during initial testing, such as the Electronic Guard predator scare device, have not received widespread acceptance for use on farms and ranches. The research center’s product development program manager stated her belief that the Electronic Guard, which emits both a bright light and a loud noise to scare coyotes, could be highly effective if used correctly. She said purchasers need to use several of them at random intervals to be effective. Unfortunately, each one is fairly expensive. However, an operations official in Utah told us the Electronic Guard is not particularly useful in his state. He pointed out that the Electronic Guard technology is outdated—utilizing a bulky 12-volt battery—and consequently the device is not very easily transported to Utah’s remote grazing locations, because it does not fit into a saddlebag. The official expressed his belief that the Electronic Guard has potential for other uses, such as deterring deer and other wildlife. Other nonlethal control methods that employ traditional “scare” devices such as pop-up scarecrows, flashing lights, pyrotechnics, and noisemakers are also useful in managing birds. Mylar tape works well, too, because light reflecting off the tape apparently frightens the birds. With most such techniques, however, the birds adapt within a relatively short period of time and the measure is no longer effective. Consequently, adjunct techniques must be used. Lasers are one of the newest scare techniques to show great promise in bird control. According to center researchers in Sandusky, Ohio, the use of lasers has proven effective in dispersing certain species of birds. For example, lasers have worked quite well in low light conditions (after sunset) with geese, double-crested cormorants, and Hawaiian stilts, which shy away from the beam of light emitted by the laser. In a test in Ohio, for instance, lasers were effective in scaring away—within 15 minutes—approximately 18,000 geese at a municipal lake. Because the laser is silent and can be selectively directed at a particular species of bird, the laser is preferable to loud devices where disturbance of people and other wildlife is a concern. Wildlife Services has developed a helium neon gas laser that costs less than $1,000. The beam can extend for a quarter of a mile. The French are marketing a similar laser for animal control for about $7,700. Bird necropsies have shown no damage from lasers, even at 1 meter. Nevertheless, some animal rights groups are protesting their use. Other nonlethal control approaches can be directed at disrupting the animals’ behavior without scaring them. One such nonlethal bird control is “pond gridding,” which involves the placement of gridwire over ponds to prevent landings by geese and other birds. Wildlife Services staff also provide advice to homeowners and commercial building owners on how to alter the structure of buildings to discourage birds from roosting on them. For example, ledges can be boxed in, and spiky steel “porcupine wire” can be placed on ledges to dissuade landings. However, birds sometimes figure out how to build nests right on top of the spiky wire, so other devices may be needed in conjunction with the wire.
Supplemental feeding is another nonlethal control directed at changing an animal’s damage-causing behavior without frightening the animal. According to researchers, this approach looks promising for bears that are coming out of hibernation when little food is available. In the Pacific Northwest, bears resort to stripping the bark from trees to eat its sweet inner surface, which kills large sections of forest. Experiments have shown that providing bears with sugar cane deters them from damaging the trees and may discourage their livestock predation as well. A successful bear feeding program for the protection of timber has existed for several years in Washington State. Unfortunately, the problem of bears’ predation on livestock is often more difficult to resolve. According to a Wildlife Services researcher, even when bears have ample alternate food supplies, they simply seem to prefer lambs and ewes. However, in Utah this past summer, the Wildlife Services state director coordinated with various federal and state organizations and with the permittee to try a bear feeder on a remote grazing allotment. The state director hopes that the use of the bear feeder, which contains molasses-sweetened dry pellets, will help deter both livestock and wildlife kills by bears. Wildlife Services has had limited success working to develop effective nonlethal controls for beavers. In addition to landowners’ concerns about beavers flooding timber and croplands, Wildlife Services receives numerous requests to help cities deal with beaver problems at their sewage treatment plants. For beavers, the main nonlethal device currently in use is a water control device, developed at Clemson University, called a Clemson Beaver Pond Leveler. The pond leveler design is intended to suppress the problem of flooding by allowing water to drain through a beaver dam or plugged road culvert, even if beavers build a dam at the mouth of the culvert. The pond leveler is a simple, low-cost device that is made largely from PVC pipe. Pond levelers work better in some geographic locations than others. In North Carolina, Wildlife Services installed seven pond levelers in 2000, with mixed results. The pond levelers’ effectiveness was temporary at best: most failed within 12 months. The beavers either thwart the pond levelers by building their dams 30 feet downstream, thereby backing up water and defeating the purpose of the devices, or they dam up the pond leveler itself. According to Wildlife Services officials, pond levelers seem to work better in locations with hillier, steeper topography than North Carolina’s. Relocation, a nonlethal control method, is rarely a viable option, for several reasons. First, some animals such as beavers, white-tailed deer, and resident Canada geese are considered to be overabundant, so finding a suitable relocation habitat is difficult. Second, relocation is not always effective or in the animal’s best interest. Some animals (e.g., bears) will just return to their original habitat; relocated animals may die in their new habitat because they are unfamiliar with the terrain and food sources or because they are killed by competitors whose territories they have invaded. Third, the risk of wildlife-borne disease sometimes makes people reluctant to accept the relocation of wildlife to areas near their residences. In fact, to help prevent the spread of disease, many states have laws against relocating wildlife. 
Most nonlethal control methods such as fencing, guard animals, and animal husbandry practices are most appropriately implemented by the livestock producers themselves, with technical assistance from Wildlife Services. According to Wildlife Services officials, by the time producers request assistance from the Wildlife Services program, they have typically already been employing a variety of nonlethal control measures and are experiencing predation on their livestock in spite of these measures. Wildlife Services must use lethal control methods in situations where nonlethal controls are ineffective, impractical, or unavailable. We provided the Department of Agriculture with a draft of our report for its review and comment. We received comments from officials of the Wildlife Services program, including the Deputy Administrator and the Associate Deputy Administrator. The officials agreed with the information presented in the report. They said that the report was thorough and unbiased, and that it competently communicated the need for and complexities associated with wildlife management. The officials acknowledged that there are many emerging wildlife damage concerns, as presented in appendix IV of this report, that the program cannot address within its current resources. In an effort to respond to these emerging needs, Wildlife Services officials said they have at times compromised the program’s infrastructure by providing services rather than upgrading equipment and facilities. The officials said they are committed to fixing the infrastructure problems while concurrently taking steps to target current and future resources toward the most critical emerging issues. The officials also provided a number of technical corrections and clarifications to the draft report, which we incorporated as appropriate. We conducted our review from March 2001 through October 2001 in accordance with generally accepted government auditing standards. Details of our scope and methodology are discussed in appendix I. We are sending a copy of this report to the Secretary of Agriculture and appropriate congressional committees. We will make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix VI. In October 2000, the Conference Committee on the Department of Agriculture’s fiscal year 2001 appropriations directed us to conduct a study of the Department’s Wildlife Services program. Specifically, we agreed to determine (1) the nature and severity of threats posed by wildlife, (2) the actions the program has taken to reduce such threats, (3) the studies Wildlife Services and others have done to assess the specific costs and benefits of program activities, and (4) the opportunities that exist for developing effective nonlethal methods of predator control on farms and ranches. To obtain information about the damage caused by injurious wildlife and the actions Wildlife Services takes to control such damage, we reviewed program documents, research studies, and surveys such as the livestock loss surveys conducted by the U.S. Department of Agriculture’s National Agricultural Statistics Service. We gathered information on both the operations and research arms of Wildlife Services. For the operations arm, we visited Wildlife Services’ western and eastern regional offices and offices in four states (two western and two eastern).
For the research arm, we visited Wildlife Services’ National Wildlife Research Center, in Fort Collins, Colorado, and two of its field research stations (one western, near Logan, Utah, and one eastern, in Sandusky, Ohio). The field research station in Utah conducts mammal research; the station in Ohio, bird research. At each of the regional and state offices we interviewed officials and reviewed records such as cooperative agreements, program evaluations, and budget and accounting documents. In selecting states to visit, we strove for geographic diversity as well as a cross-section of the program’s various operational emphases (e.g., protection of agriculture, human health and safety, natural resources, and property). In each state visited, we met with program clients (e.g., farmers, ranchers, representatives of associations such as the Farm Bureau, and federal and state wildlife management officials), and we accompanied Wildlife Services personnel in the field to observe various activities such as removing beaver dams and vaccinating raccoons. We also visited and interviewed officials of the program’s Management Information System Support Center, located in Fort Collins, Colorado, which tracks the number and types of operational activities conducted. To obtain information on the program’s costs and benefits, we conducted literature searches; reviewed economic studies conducted by program researchers, academicians, and others; and interviewed Animal and Plant Health Inspection Service economists who were involved in assessing costs and benefits for the programmatic environmental impact statement. We also discussed the costs and benefits of the Wildlife Services program with program researchers, operations personnel, and cooperators. To obtain information on nonlethal methods of controlling livestock predators, we reviewed research studies and interviewed program researchers and field operations personnel. At the program’s predation ecology and behavioral applications field station, we attended a review of current research on reproductive intervention strategies for managing coyote depredation. We also discussed nonlethal control methods with various livestock operators who were program clients, as well as with representatives of industry associations (e.g., the Farm Bureau and wool growers’ associations). Finally, we discussed nonlethal control methods and general Wildlife Services operations with representatives of the Humane Society of the United States and the Defenders of Wildlife. We conducted our review from March 2001 through October 2001 in accordance with generally accepted government auditing standards. The Wildlife Services program, including its predecessor programs, has evolved over the past century to meet the changing needs and desires of society. This appendix, drawing from the history contained in the program’s final environmental impact statement (EIS), addresses some of the key events that have shaped the program over the years. The first federal government involvement in wildlife damage control efforts occurred in 1885, when the Department of Agriculture’s Branch of Economic Ornithology sent questionnaires to farmers about damage caused by birds. The following year the branch was elevated to division status and renamed the Division of Economic Ornithology and Mammalogy. 
The Commissioner of Agriculture stated that the new division would be responsible for educating farmers about birds and mammals affecting their interests so that the destruction of useful species might be prevented. Efforts to educate farmers included conducting studies and demonstrations of wildlife damage control techniques in the western United States and testing poisons for control of the house sparrow. Between 1905 and 1907, the program, by then named the Bureau of Biological Survey, investigated and published methods for coyote and wolf control in conjunction with the Forest Service. At the same time, western livestock interests began voicing opposition to fees levied by the federal government for livestock grazing on federal lands in areas with high populations of coyotes and wolves. As agricultural interests began to speak out, more attention was focused on problems with wildlife. In 1913 direct assistance work began under a small administrative allotment of funds to control plague-bearing rodents in California national forests. During the following year, the first cooperative agreement was signed by the president of the New Mexico College of Agriculture and Mechanical Arts and the Secretary of Agriculture. In 1914 the Congress responded to the concerns of farmers and ranchers by appropriating funds for experiments and demonstrations on predator control. The first congressional appropriation for federal predator control operations came in 1915, when the Congress appropriated $125,000 to the Bureau of Biological Survey to control wolves and coyotes. The 1916 Convention between the United States and Great Britain for the Protection of Migratory Birds and its enabling legislation, the 1918 Migratory Bird Treaty Act, authorized the issuance of permits for the taking of migratory birds that were injurious to agriculture and other interests. The need for improved methods and techniques for the control of predators and rodents led to the establishment of a laboratory in Albuquerque, New Mexico, for experimentation with poisons. In 1921 this laboratory, called the Eradication Methods Laboratory, was moved to Denver. Years later, this facility would become known as the National Wildlife Research Center, located today in Fort Collins, Colorado. Although the need for wildlife damage control efforts was acknowledged by the Congress, some felt the federal program was unnecessary. In 1930 the American Society of Mammalogists issued a strong statement of opposition to the federal predator control program. This nearly caused the cancellation of the $1 million congressional appropriation for predator and rodent control. But in 1931, after full congressional hearings, a bill was passed by the Congress and signed by President Hoover giving the federal government authority to conduct wildlife damage control activities. This bill became the Act of March 2, 1931, and remains the primary statutory authority under which the current Wildlife Services program operates. In 1934, the Congress appropriated funds to buy property in Pocatello, Idaho, for a facility to produce baits for the predator and rodent control programs. The facility opened in 1936 as the Pocatello Supply Depot, which remains an integral part of the current program. In 1939, under President Franklin Roosevelt’s government reorganization plan, Agriculture’s Bureau of Biological Survey and Commerce’s Bureau of Fisheries were transferred to the Department of the Interior, forming the U.S. Fish and Wildlife Service. 
All wildlife damage control functions were transferred to Interior’s new Branch of Predator and Rodent Control. The reorganization was part of President Roosevelt’s attempt to consolidate within the Interior Department all federal activities dealing primarily with wildlife. This presented the Fish and Wildlife Service with the dual objectives of both controlling and enhancing certain wildlife species, depending on the circumstances. In 1946, the Fish and Wildlife Coordination Act of 1934 was amended to authorize the Secretary of the Interior to cooperate with other federal, state, and public or private agencies in minimizing damage caused by “overabundant” species. In 1948 the Lea Act was passed, authorizing the program to purchase or rent up to 20,000 acres in California for the management and control of migratory waterfowl. That same year, a worldwide shortage of cereal foods prompted the Congress to appropriate funds for Agriculture and Interior to become involved with rat control. The Predator and Rodent Control program conducted extensive rodent control activities that further established wildlife damage control efforts in the eastern United States. The federal animal damage control program operated in relative obscurity, with little public opposition, during the 1940s and 1950s. By then the program comprised several components, including research, technical assistance, and both lethal and nonlethal direct assistance activities. The type of assistance provided depended on the location, the local institutions, and the resource being protected. In the 1960s, however, growing environmental awareness brought the program under closer scrutiny. The use of poisons to kill predators increasingly came under criticism, even from traditionally conservative interests such as editors of national hunting and fishing magazines. In 1963, Secretary of the Interior Stewart Udall appointed a group called the Advisory Board on Wildlife Management to investigate federal wildlife damage control efforts. The Board published a report in 1964 officially entitled “Predator and Rodent Control in the United States” (Leopold et al. 1964), but the report is more commonly referred to as the Leopold report, named after A. Starker Leopold, Chairman of the Advisory Board. The report was critical of the animal damage control program in many ways, charging it with indiscriminate, nonselective, and excessive predator control. For example, the report stated that the leghold trap was nonselective, meaning it was apt to capture non-target species, resulting in unnecessary loss of wildlife. Recommendations of the Leopold report were incorporated in the 1969 Animal Damage Control program’s policy manual. For example, professionally trained personnel were added to the program, in-service training for long-time employees was instituted, nearly all predator control practices were reduced, and regulation and supervision of toxicants were tightened. Predator control continued to be the focus of public attention. In 1971, spurred by lawsuits from animal welfare groups over the program’s use of toxicants, the Secretary of the Interior and the President’s Council on Environmental Quality appointed a seven-person Advisory Committee on Predator Control. The report of that committee, like the Leopold report, took on the name of its chairman, Stanley Cain. 
The Cain report stated that the use of chemicals is likely to be inhumane and nonselective, and it recommended that landowners be trained in the use of leghold traps as a major method of predator damage control. The report was generally critical of federal predator control efforts, and outlined 15 recommendations for changes in the federal program. Among the recommendations was that immediate congressional action be sought to remove all toxic chemicals from registration and use for direct predator control. In February 1972, as a result of the Cain report’s recommendations, President Richard Nixon signed Executive Order 11643, restricting the use of toxicants for predator control by federal agencies or for use on federal lands. In compliance with the order, the Environmental Protection Agency cancelled the registrations of several chemicals: Compound 1080, strychnine, sodium cyanide, and thallium sulfate. In 1974, the program was titled the Office of Animal Damage Control. In 1975, President Nixon’s Executive Order 11643 was amended by President Gerald Ford’s Executive Order 11870, to allow the experimental use, for up to 1 year, of sodium cyanide to control coyote and other predatory mammal or bird damage to livestock on federal lands or in federal programs. Order 11643 was again amended in 1976 by Executive Order 11917 to allow the operational use of sodium cyanide for predator control on certain federal lands or in federal programs. In 1978, the Secretary of the Interior appointed an Animal Damage Control Policy Study Committee to review the federal Animal Damage Control program. This committee, too, was very critical of the program, saying it found insufficient documentation to justify the program’s existence. As a result of this report, and related public hearings, the Department of the Interior prepared a December 1978 report “Predator Damage in the West: A Study of Coyote Management Alternatives.” This report summarized all pertinent information and was developed to serve as a source document for consideration by the Secretary in making decisions about the program. The Committee’s report led to a policy statement issued by Secretary of the Interior Cecil Andrus in November 1979, which stopped the practice of denning (i.e., finding and killing coyote pups at their dens) and research on the use of the chemical Compound 1080. The policy was an attempt to emphasize the use of nonlethal control methods. Adverse reactions to Secretary Andrus’ policy were expressed in a January 1980 memo by the Western Regional Coordinating Committee, composed of 28 university research and extension personnel and various Agriculture and Interior employees. The committee members were concerned that the policy showed minimal understanding of livestock industry problems and minimal knowledge of the realities of predator losses and control. The Committee’s concerns reflected a growing opinion that the Animal Damage Control function would be better served if it were administered by the Department of Agriculture. In 1981, the Environmental Protection Agency held hearings on the predator control issues. At the same time, Secretary of the Interior James Watt rescinded former Secretary Andrus’ policy statement that banned denning. In January 1982, President Reagan signed Executive Order 12342, which revoked President Nixon’s Executive Order 11643 (banning the use of toxicants), as amended. 
In an amendment to the 1986 continuing federal budget resolution, the Congress transferred all Animal Damage Control program personnel, equipment, and funding from the Fish and Wildlife Service to the Department of Agriculture. By April 1986, transfer of all personnel and resources had been completed. Specifically, the Animal Damage Control program was placed in the Department’s Animal and Plant Health Inspection Service (APHIS). Also in 1986, the National Animal Damage Control Advisory Committee, comprised of agricultural producers, environmental and animal welfare organizations, and academic institutions, was appointed by the Secretary of Agriculture to provide advice on policies and issues of concern to the Animal Damage Control program. At the end of 1987, the Congress, in Public Law 100-202, authorized the program to conduct control activities of nuisance mammals and birds and those that are reservoirs for zoonotic diseases (i.e., diseases that can be passed to people). In 1991, the Congress authorized the Animal and Plant Health Inspection Service to undertake a pilot program to control the brown tree snake on Guam. Since 1993 Wildlife Services has conducted a brown tree snake damage management program on Guam, in cooperation with the Department of Defense, the Department of the Interior, and the governments of Guam and Hawaii. In June 1990, the draft environmental impact statement (EIS) for the Animal Damage Control program was released for public comment. The supplement to the draft EIS, which contained revisions, additional information, and analyses developed in response to comments received, was released for public comment in January 1993. Based on comments received, two additional alternatives and more information were included in the April 1994 final EIS, which provided the basis for future direction of the program. In 1997, the program’s name was changed to Wildlife Services. That same year, the program relocated its laboratory headquarters and established the National Wildlife Research Center in Fort Collins, Colorado. In 2000, the Congress amended Wildlife Services’ authority under the Act of March 2, 1931 (7 U.S.C. 426). The amendment removed specific language that, according to Wildlife Services officials, reflected outdated program goals and philosophy, such as to “. . . promulgate the best methods of eradication . . . of mountain lions, wolves, coyotes, . . .“ and to “. . . conduct campaigns for the destruction . . . of such animals.” The revised section of the act now authorizes the Secretary of Agriculture to “. . . conduct a program of wildlife services with respect to injurious animal species and take any action the Secretary considers necessary in conducting the program. The Secretary shall administer the program in a manner consistent with all of the wildlife services authorities in effect on the day before October 28, 2000.” According to the EIS, the close scrutiny the program has received over the years, together with internal reviews and strategic planning, has resulted in the program’s continual evolution. Increasing emphasis has been placed on the development and implementation of a variety of damage control methods, including multiple forms of technical assistance and direct assistance services. Also, the program has sought to increase its staff’s professionalism and training, to improve its data systems and its relationships with other wildlife management agencies, and to emphasize research and development of new control methods. 
In consideration of contemporary societal values, the program seeks an acceptable balance between human interests and wildlife needs. This appendix contains details of Wildlife Services' fiscal year 2000 expenditures for its administrative costs, operational activities, and research activities. Wildlife Services' total fiscal year 2000 expenditures were $80.6 million. Of these expenditures, about $42.3 million (including just over half a million specifically earmarked for aquaculture) was funded by Wildlife Services' appropriation; the other $38.3 million was funded by clients (i.e., by cooperative dollars). Administrative expenditures totaled about $9.5 million and included a variety of activities such as administrative support, employee development, and Management Information System (MIS) support. Table 3 shows the breakout of administrative expenditures, funded solely with federal dollars. In fiscal year 2000, the Wildlife Services program spent almost $60 million on operational activities. Of that amount, about $23 million was from Wildlife Services appropriations; the other $36 million was contributed by cooperators (program clients). Table 4 shows the program's fiscal year 2000 operational expenditures, by state and by source (i.e., Wildlife Services or cooperators). In addition to the $21,275,873 of federal funding allocated specifically for state operations, approximately $2 million was available for state operations use; according to a Wildlife Services official, this additional amount is managed at the regional office level. The program spends the majority of its operational funds on activities to protect agriculture; in fiscal year 2000, cooperators contributed about 60 percent of these funds. Figure 4 shows Wildlife Services' fiscal year 2000 operational expenditures, by category (the program's various operational emphases). Wildlife Services also tracks subcategories of operational expenditures. Within each program category are several subcategories of expenses. For example, the agriculture category includes expenditures for the protection of livestock, crops, forest/range, and aquaculture. Cooperators provide the majority (over 60 percent in fiscal year 2000) of the funds spent on livestock protection. Table 5 shows the program's fiscal year 2000 agriculture expenditures, by subcategory. Wildlife Services' expenditures for its research activities totaled $12,226,694 in fiscal year 2000. Wildlife Services covered the majority of these expenditures with $10,357,000; cooperator funding accounted for the remaining $1,869,694. Table 6 provides examples, by state, of the wildlife that pose challenges, the resources they damage, and emerging concerns about wildlife damage. For each state, only a few examples of injurious wildlife and the damage they do are given; many more problems than these exist in each state. The examples do not include the risk to human health and safety posed by birds at airports. This risk is excluded because it exists in every state, and Wildlife Services performs control activities in every state. In some states, though, particularly coastal ones, the risk to human health and safety posed by migratory birds and the risk of their colliding with aircraft are already significant and growing. Through an interagency agreement with the Federal Aviation Administration (FAA), Wildlife Services maintains a database of all reported wildlife strikes to U.S. 
civil aircraft and to foreign carriers that experience strikes within the United States. The database contains about 34,000 strike reports from 1,100 airports for the period 1990 through 2000. In 2000 alone, about 6,000 strikes were reported. Wildlife Services estimates, however, that the number of strikes reported represents only about 20 percent of those that have occurred. The following examples from the database show the serious effects that strikes by birds or other wildlife can have on aircraft. The examples are not intended to highlight or criticize individual airports; strikes have occurred at or near almost every airport in the United States. For more information on wildlife strikes, visit www.birdstrike.org. In January 1990, a Hawker Siddeley aircraft struck several white-tailed deer during takeoff from John Tune Airport in Tennessee. One deer was completely ingested by the left engine. The impact tore the engine loose, and the aircraft had to be replaced at a cost of $1.4 million. In November 1990, a Bae-3200 ingested doves in both engines during takeoff from Michiana Regional Airport in Indiana. The engines were destroyed, and the aircraft was out of service for 60 hours. The repair cost was about $1 million. In November 1991, a DC-10’s number 1 engine ingested one or more American crows during takeoff from Chicago O’Hare International Airport. Parts of the engine came out the side and damaged the number 2 engine. The aircraft made a precautionary landing. In December 1991, a Citation 550’s number 1 engine ingested one or two turkey vultures during takeoff from Angelina County Airport in Texas. The engine experienced an uncontained failure, a fire, and vibrations that caused a 100-percent loss of thrust, causing the takeoff to be aborted. The wing and fuselage received damage from engine shrapnel. The aircraft was out of service for 2 weeks, at a repair cost of $552,500. In February 1992, a Piper 28 was just about to touch down on the runway at Sandstone Airport in Minnesota when a deer ran toward the aircraft and collided with it. The pilot added power and aborted the landing, but lost power during the climb and crashed into trees and then into the ground about a quarter-mile from the airport. The pilot was seriously injured; the aircraft was destroyed. In October 1992, a Boeing 747 struck numerous herring gulls during takeoff from John F. Kennedy International Airport in New York. A gull was ingested by an engine, bending four fan blades and causing the aircraft to make a precautionary landing. The passengers departed on another aircraft the next day. The reported cost—of hotel, lost revenue, and repairs—was $750,000. In October 1993, a Boeing 757 struck about 35 cattle egrets on takeoff from Orlando International Airport. Takeoff was aborted. Three tires on the right side blew out, and the aircraft was towed to the gate. The ingestion of 10 to 12 birds damaged engine fan blades and the engine cowl. In December 1993, a Cessna 550 struck a flock of geese on its climbout from DuPage Airport in Illinois. A loud bang occurred, followed by unstable flight. The number 2 engine lost power, and the aircraft experienced a substantial fuel leak on the left side. The pilot made a safe emergency landing at Midway Airport. Both engines had to be replaced. The aircraft was out of service for 90 days; the cost of repairs was $800,000. In May 1994, a Bell BHT-47 helicopter crashed into the backyard of a residence in Oklahoma, resulting in two fatalities. 
The pilot of another helicopter, which had been traveling ahead of the one that crashed, said he had warned the other pilot about a flock of birds which he himself had avoided by banking sharply. The probable cause of the crash, according to the National Transportation Safety Board, was the pilot’s loss of control when he maneuvered abruptly to avoid colliding with the flock of birds. In July 1994, a Cessna 172 was seen flying about 200 feet above the water along a beach in Florida. A pelican collided with the windshield; the aircraft rolled upside down and hit the water. The pilot was fatally injured. In June 1995, a Concorde ingested a Canada goose into the number 3 engine upon landing at John F. Kennedy International Airport. The engine suffered an uncontained failure, causing parts to go into the number 4 engine. Both engines were destroyed. The aircraft was out of service for 5 days; repair costs were over $9 million. In an out-of-court settlement 3 years later, the Port Authority of New York and New Jersey paid Air France $5.3 million in compensation for losses. In December 1995, on approach to John F. Kennedy International Airport, a Boeing 747 broke through a cloudbank and struck a flock of snow geese, which sounded like sandbags hitting the aircraft. The impact destroyed one engine, damaged several fan blades on another, and extensively damaged the airframe. The repairs cost about $6 million. In July 1996, a Boeing 737 ingested an American kestrel into the left engine upon takeoff from Nashville International Airport, resulting in a compressor stall and an aborted takeoff. The aircraft overran the runway, and the passengers were evacuated. One passenger was seriously injured; four others received minor injuries. In October 1996, a Boeing 727 struck a flock of gulls just after takeoff from Washington National Airport. An engine ingested at least one bird, began to vibrate, and was shut down. A burning smell entered the cockpit, and an emergency was declared. The aircraft, carrying Housing and Urban Development Secretary Henry Cisneros and 52 other passengers, returned to the airport and made a safe precautionary landing. Engine blades were damaged. In January 1997, a McDonnell Douglas 80 struck over 400 blackbirds just after takeoff from Dallas-Fort Worth Airport. Nearly every part of the plane was hit. The pilot declared an emergency and returned to land uneventfully. The number 1 engine had to be replaced, and damage to the plane was substantial. The cost of repairs was about $219,000. In August 1997, a Boeing 737 struck 12 to 15 mallards after takeoff from Portland International Airport in Oregon. The pilot returned to the airport and landed safely. The radome and all engine fan blades had to be replaced, at a cost of over $100,000. In May 1998, a Boeing 727 struck several Canada geese after takeoff from Colorado Springs Metro Airport. The crew felt moderate to severe vibration after the aircraft ingested one or more birds. The aircraft lost essential electrical power, which was restored by a generator. The number 3 engine suffered an uncontained failure. Shrapnel was ejected through the engine case, severing electrical wires and puncturing the anti-ice bleed air duct. The radome, upper engine cowling, and thrust reverser were also damaged. The aircraft was out of service for 98 hours; the repair cost was $1.4 million. In November 1998, a Boeing 737 struck a buck deer on the runway when taking off from Western Nebraska Regional Airport. 
The pilot proceeded with the takeoff, but then returned to land. An engine suffered major damage. The flight was canceled; the passengers and crew were rerouted the next day. Total cost was $430,000 for repairs, lost revenue, meals and hotel rooms, and other transportation for passengers. In February 1999, a Boeing 757 encountered a flock of European starlings upon takeoff from Cincinnati/Northern Kentucky International Airport. The first officer tried to climb over the birds but struck several hundred of them. Both engines ingested birds and were damaged; the repair cost was about $500,000. More than 400 dead starlings were picked up from the runway area following the strike. In December 1999, a Boeing 747 encountered a red-tailed hawk upon takeoff from Toledo Express Airport in Ohio. The hawk struck the nose bullet, which shattered and entered the engine. A witness called the sheriff and reported hearing a large boom and seeing one of the engines on fire as the aircraft took off. The pilot dumped fuel and returned to the airport to land. Pieces of fan blades tore large holes in the nose cowling. Time out of service was 84 hours; cost of repairs was $1.3 million. In March 2000, a Boeing 767 ingested a flock of Bonaparte's gulls after takeoff from Dulles International Airport. The pilot returned to the airport and made a precautionary landing. Fan blades were damaged; the repair cost was $65,000. In August 2000, a Boeing 747 flew through a flock of about 30 Canada geese and ingested 1 or 2 in the number 1 engine after taking off from Philadelphia International Airport. The high-speed aborted takeoff resulted in nine flat tires; the aircraft was towed to the ramp. The engine was a total loss, and the aircraft was out of service for 72 hours. The cost was $3 million. In addition to the individual named above, Carol Bray, Amy Sue Bunting, Nancy Crothers, Brian Eddington, Jerilynn Hoy, Diane Lund, LuAnn Moy, Cheryl Pilatzke, Pam Tumler, and Amy Webbink made key contributions to this report. 
As we and others have reported, taxpayers often have problems obtaining the needed information from IRS to file their tax returns and resolve problems with their accounts. Not only do taxpayers have difficulty reaching IRS by telephone, but once a taxpayer reaches a CSR, that CSR does not always have easy access to the information needed to resolve the taxpayer's problems. One of TSM's major goals is quick and easy access to the data needed by CSRs and other employees to provide better customer service and improve voluntary compliance. Several systems are being developed or are planned to address IRS' critical data needs. IRS considers ICP to be one of the most important of these undertakings. Information on taxpayers and their accounts is contained in a variety of IRS databases. Until 1995, information on IRS' primary taxpayer account database used for assisting taxpayers, known as the Integrated Data Retrieval System (IDRS), was stored at the service center where the taxpayers filed their returns and could be accessed only by employees at that service center, connected district offices, and customer service sites. If a taxpayer called a service center other than the one at which the return was filed, the CSR would be unable to answer questions about the taxpayer's account. Either the taxpayer was told to call a different service center or the questions would be written down and referred to the appropriate service center for resolution. Early in 1995, IRS implemented a networking capability among the service centers, district offices, and customer service sites, so that employees could have access to IDRS data nationwide. This networking capability is referred to as Universal IDRS; however, it is only a partial solution to IRS' data accessibility problems. Although Universal IDRS gives IRS employees access to taxpayer account information nationwide, it does not always provide complete information on a taxpayer's account. Other information needed to help the taxpayer may be contained in different systems that are not linked to IDRS. Generally, the CSR must access each of the different systems independently. For example, an IRS employee using IDRS will know that a taxpayer was sent a notice of underreported income but would not have access to the actual notice. That notice is contained in IRS' Automated Underreporter System (AUR). AUR would provide additional information, such as the amount of unreported income and information from the tax return that may indicate, for example, the amount of dividend or interest reported by financial institutions but not by the taxpayer. To obtain these data, the IRS employee must be able to access the AUR database using a different computer terminal. However, the employee may not have that access. As a result, the employee would have to either (1) refer the taxpayer to another office, (2) research the problem and return the taxpayer's call, or (3) tell the taxpayer to call back later. With ICP, IRS envisions that customer service staff would have all relevant information from a number of important databases available to them to assist the taxpayer. IRS plans to use ICP to integrate and obtain access to information from each of the existing IRS functional databases that contain taxpayer information. The primary databases include IDRS, AUR, Corporate Files On Line (CFOL), and the Automated Collection System (ACS). 
ICP was intended to resolve the data accessibility problems by integrating the information from various databases used by CSRs and providing a single computer terminal to do the task. Using a taxpayer’s Social Security number to obtain case information, the ICP software is expected to automatically assemble the relevant information on a computer terminal, provide questions and prompts for CSRs, and perform calculations for updating the account. ICP, as originally envisioned, was expected to support both IRS’ customer service vision and district office compliance operations. It was to be developed and implemented in stages using a multirelease approach. Each release was to build upon the previous release, providing a related set of software, hardware, and telecommunication tools that were to provide incremental improvements in customer service. The first series of ICP releases, commonly referred to as releases 1.0/1.5, 2.0, and 2.5, were intended to meet the needs of IRS’ customer service employees. Later releases are expected to support district office compliance operations, but they have been delayed until sometime after 2000 due to IRS’ recent rescoping of the TSM program. The first release, 1.0/1.5, primarily provided computer hardware and software that eliminated the need for CSRs to use multiple workstations to access data on various databases. It was designed to allow CSRs to use one computer terminal to access the various databases that contain information on taxpayers’ accounts. For example, using an ICP workstation, CSRs could access information stored on IRS’ three major databases—IDRS, ACS, and AUR as well as some smaller databases. It also provided some features that made the existing systems easier to use, such as a summary screen of taxpayer information, menus to look up command codes, and automated forms ordering. The next ICP software release, 2.0, is designed to provide CSRs with a single view of taxpayer data. It is expected to eliminate the need for a CSR to access the separate databases. Instead, information is to be assembled from various databases onto a standard screen. Release 2.0 is expected to also provide CSRs with new tools to enhance their ability to offer taxpayers one-stop service. It is also expected to provide a call-routing feature that would route taxpayers’ calls to the next available representative who would be most skilled at addressing the taxpayer’s question or issue. Some of the tools expected from ICP 2.0 include on-line display of and adjustments to Form 1040 returns and associated schedules including automated tax, interest, and penalty computation; automated installment agreement preparation; automated payment tracer capability; automated refund inquiries; data directed routing; and enhanced history generation. Additionally, ICP 2.0 is expected to eliminate the need for CSRs to remember numerous command codes, which are needed to access and update taxpayer account information. For example, both ACS and IDRS have their own language of command codes, requiring significant training and adequate time to learn. IDRS alone has many codes, requiring two large handbooks of explanation. Not surprisingly, few IRS employees have mastered both systems. ICP 2.0 would eliminate the need for CSRs to know any ACS command codes and most of the IDRS command codes. IRS expects ICP 2.0 to provide improved service to taxpayers. 
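The single-view concept lends itself to a simple illustration. The following is a minimal sketch, not IRS code or design: the data stores, field names, and functions are hypothetical stand-ins for systems such as IDRS, ACS, and AUR, and the second function only gestures at the kind of amount-based lookup that the planned automated payment tracer capability implies.

```python
# A minimal sketch, not IRS code: it illustrates the "single view" idea behind
# ICP 2.0, in which data scattered across separate systems (stand-ins here for
# IDRS, ACS, and AUR) is assembled for one screen, keyed by the taxpayer's
# identifying number. All records, field names, and functions are hypothetical.

IDRS = {  # account balances and payment transactions
    "123-45-6789": {"name": "J. DOE", "balance_due": 450.00,
                    "payments": [{"date": "1996-04-15", "amount": 300.00}]},
}
ACS = {   # collection case status
    "123-45-6789": {"status": "installment agreement pending"},
}
AUR = {   # underreporter notices
    "123-45-6789": [{"notice": "underreported income", "amount": 1200.00}],
}

def assemble_taxpayer_summary(tin):
    """Pull the relevant pieces from each backend into a single summary,
    sparing the assistor from querying each system separately."""
    return {
        "tin": tin,
        "account": IDRS.get(tin, {}),
        "collection": ACS.get(tin, {}),
        "underreporter_notices": AUR.get(tin, []),
    }

def find_payment(summary, amount):
    """Rough analogue of an automated payment tracer: search the assembled
    payment transactions for a given amount instead of issuing separate
    search commands against each system."""
    payments = summary["account"].get("payments", [])
    return [p for p in payments if abs(p["amount"] - amount) < 0.01]

view = assemble_taxpayer_summary("123-45-6789")
print(view["collection"]["status"])   # -> installment agreement pending
print(find_payment(view, 300.00))     # -> the matching payment record
```

The point of the sketch is only that a single keyed lookup replaces a series of per-system queries and command codes; the actual ICP software sits on top of far larger and older systems than these toy dictionaries suggest.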
Currently, to answer taxpayers' questions about whether payments have been properly credited to their accounts, CSRs must access up to five separate databases, searching for payment transaction codes or payment offset codes. This procedure is known as a "payment tracer." CSRs must then locate the missing payment and manually prepare a credit transfer to move the payment to the proper account. Often the CSR is unable to complete the search while the taxpayer is on the telephone. With ICP, CSRs are expected to complete the payment tracer and resolve the taxpayer's question while the taxpayer is still on the telephone. Instead of entering five separate search commands, CSRs would simply input the amount of the payment. ICP would automatically search the databases and provide CSRs with the information needed to determine whether the taxpayer's account had been properly credited for the payment. ICP would also provide CSRs with an easier way to transfer payments between accounts. Release 2.5 is expected to provide CSRs this same level of access to information for business taxpayers. Over the past decade, we have issued several reports and testified before congressional committees on IRS' costs and difficulties in modernizing its information systems. From 1986 through fiscal year 1995, IRS estimated that it had invested about $2.5 billion in TSM. IRS projects that it will spend over $8 billion on TSM. By any measure, this is an enormous information systems development effort, much larger than the efforts most other organizations have ever undertaken. In September 1993, IRS assessed its software development capability using Carnegie Mellon University's Software Engineering Institute's (SEI) Capability Maturity Model (CMM). This model is the generally accepted standard in both industry and government for assessing an organization's ability to develop software in accordance with modern software engineering methods. This tool focuses on the maturity of certain software development processes called "key process areas" (KPA). The five KPAs are requirements management, software project planning, software project tracking and oversight, software quality assurance, and software configuration management. The model ranks organizations on a scale of 1 to 5. IRS' self-assessment placed its software development capability at the lowest level, CMM level 1, because the assessment showed significant weaknesses in all KPAs prescribed for an organization to reach a level 2 capability. Each of the CMM levels is described in appendix I. In February 1995, TSM was added to our list of high-risk areas as a critical information systems project that is vulnerable to schedule delays, cost overruns, and potential failure to meet mission goals. In July 1995, we issued a comprehensive report on the effectiveness of IRS' efforts to modernize tax processing. The report discussed pervasive management and technical weaknesses that must be corrected if TSM is to succeed and made over a dozen specific recommendations. In this regard, we reported that unless IRS improved its software development ability, it was unlikely to build TSM in a timely or economical manner, and systems were unlikely to perform as intended. Reflecting continued congressional concern with TSM, the Treasury, Postal Service, and General Government Appropriations Act of 1996 required the Secretary of the Treasury to provide a report to the House and Senate Appropriations Committees regarding the management and implementation of TSM. 
This report was provided to the Committees in May 1996. As directed by the same legislation that required the report, in June 1996, we reported on our assessment of IRS actions taken to correct its management and technical weaknesses. We found that while IRS had taken some actions, none responded to any of our recommendations in total. As a result, IRS was not in any appreciably better position to ensure Congress that the money spent on TSM would deliver the promised capability, on time, and within budget. Because IRS had not made adequate progress to correct its weaknesses, we suggested that Congress should consider limiting TSM spending to only cost-effective modernization efforts that (1) support ongoing operations and maintenance; (2) correct IRS’ pervasive management and technical weaknesses; (3) are small, represent low technical risk, and can be delivered in a relatively short time frame; and (4) involve deploying already developed systems—only if these systems have been fully tested, are not premature given the lack of a completed architecture, and produce a proven, verifiable business value. Our objectives were to (1) evaluate IRS’ assessment of ICP costs and benefits and obtain users’ perceptions on the system’s benefits, (2) analyze IRS’ testing of ICP, (3) assess IRS’ ongoing efforts to redesign its customer-service work processes to fully utilize ICP capabilities, and (4) assess IRS’ software development processes being used for ICP. To evaluate IRS’ assessment of ICP costs and benefits, we reviewed two IRS studies that were developed to assess the expected costs and benefits of ICP. The first document, known as the Unified Business Case, was developed by IRS in January 1995. During our review, IRS conducted a second analysis of ICP costs and benefits. This ICP Business Case was issued in July 1996. We reviewed both documents for completeness and compared them against IRS’ criteria for business cases, as detailed in its Business Case Handbook. To obtain user views on ICP benefits, we randomly selected and conducted structured interviews with 193 CSRs, 37 customer service managers and 11 system administrators at the Nashville, Cincinnati, and Atlanta customer service sites. We chose these three sites because (1) Nashville was the prototype site for testing ICP and new work processes and (2) Atlanta and Cincinnati were two of the initial sites to receive ICP. To analyze IRS’ testing of ICP, we reviewed the results of the initial pilot test of ICP version 1.5. We met with IRS officials at the National Office, the ICP program office, and the Customer Service Site Executive’s Office to discuss the limitations of the test that IRS identified. We also discussed with IRS officials their plans for a more thorough test of the next ICP version, 2.0, including visiting the Integrated Test and Control Center facility where ICP 2.0 was being tested. We were unable to review specific plans for the pilot test because they had not been completed during our audit work. To assess IRS’ ongoing efforts to redesign its customer service work processes to fully utilize ICP capabilities, we met with IRS officials in charge of efforts to develop new work processes for CSRs. We reviewed documents, such as the Customer Service Work System Design document, that discussed the results of the initial efforts to broaden the scope of telephone assistors’ work. We also reviewed draft reports on the results of recent studies that make further recommendations for redesigning work processes. 
To assess IRS’s software development processes used to develop ICP 2.0, our fourth objective, an SEI-trained team of GAO specialists used SEI’s Software Capability evaluation (SCE) method. The details of our scope and methodology for this objective are discussed in appendix I. We conducted our work from August 1995 through August 1996 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of IRS or her designee. On November 21, 1996, IRS officials, including the Customer Service Site Executive and the National Director, Customer Service Planning and Systems Division, provided us with oral comments. These comments were supplemented by a memorandum from the National Director, Customer Service Planning and Systems Division, and the Deputy Chief Information Officer (Systems Development) on November 26, 1996. Their comments are summarized on pages 22-24 and incorporated elsewhere in the report where appropriate. Through fiscal year 1995, IRS had invested over $150 million in ICP and, according to data provided to us after a May 6, 1996, Treasury report to the House and Senate Appropriations Committees, IRS had plans to invest about $77 million and $112 million in fiscal years 1996 and 1997. That would bring the total investment to about $340 million, or about 53 percent of the $641.1 million budgeted for ICP through 2000. However budget cuts have caused IRS to reduce planned expenditures for ICP and to reassess how to move forward to meet the needs of front-line assistors. Despite this sizable investment, ICP costs and benefits remain uncertain because the scheduled rollout of ICP and its capabilities continue to change. Since ICP began in 1993, the milestone dates for tasks have slipped, and most recently the testing of software release 2.0 has been delayed at least 3 months. Also, the capabilities of software release 2.0 may be less than originally planned. Finally, the original business case on ICP was never accepted. While a more recent business case indicates that IRS will update projections for cost and benefits as necessary, IRS has made no revisions to the business case, even though changes are expected to the rollout date and to the software capabilities for release 2.0. IRS is reassessing its plans for release 2.0 and plans to revise its business case after a proposal is made to and approved by the Investment Review Board. ICP began in late 1993, and the capability of ICP was to be rolled out incrementally in four phases and was to be completed by 1997. In March 1995, changes in the scheduled rollout date took place. The revised date for ICP being operational was extended to November 1998. As of June 1996, the first increment of ICP was partially deployed at 14 of the 23 customer service centers. There were about 2,500 ICP workstations operating at these sites. IRS was expecting to purchase additional workstations in 1996 and 1997. The latest IRS schedule calls for ICP to be fully deployed by fiscal year 2000, but this may be delayed. For example, pilot testing of release 2.0 was scheduled to begin on September 30, 1996, with initial deployment in April 1997. However, the development team has been unable to deliver the software as scheduled. As a result, the pilot test was delayed, and a risk assessment of the entire ICP project was initiated. The contractor’s interim report on the risk assessment states that the pilot test on ICP release 2.0 should be delayed at least 3 months. 
The testing and deployment of release 2.0 may be delayed longer than 3 months, because the contractor stated that the number of problems identified during software testing continue to increase and are “not likely to be fixed in near term.” IRS does not know the impact on costs of these delays, but it seems these delays, especially any long delay with release 2.0, will likely increase costs. IRS has spent about $150 million to date for ICP 1.0 /1.5 and to develop ICP 2.0, but IRS officials told us that they never projected any revenue or productivity gain for the early releases of ICP. IRS officials said that ICP activities to date have provided the foundation for development of ICP 2.0 and have put in place the hardware, telecommunications, and other infrastructure components required to implement the customer service vision; and they noted that the real benefit gains of ICP will come from ICP release 2.0. In 1995, IRS’ Information Systems Division developed a “Unified Business Case” for the systems supporting IRS’ customer service and district office operations. The costs and benefits were projected to be $3.2 billion and $5.2 billion, respectively. IRS customer service officials said that this cost and benefit analysis was never accepted by their office because, by the time the analysis was completed, the projects being evaluated were not consistent with their new business vision and no longer represented the scope of ICP. In July 1996, IRS completed another business case for Customer Service/ICP. ICP costs and benefits were estimated to be $774 million and $2.9 billion, respectively. This business case was intended to justify the costs of ICP, including the necessary physical infrastructure, such as real estate, telecommunications, computer equipment, and furniture. Most of the users we interviewed said that ICP 1.5 had provided some advantages. At the time of our review, however, IRS had not taken steps to measure the extent to which ICP has improved service to taxpayers. More than 91 percent of the employees that responded to this question said ICP improved their ability to serve taxpayers at least to some extent, when compared with what they used before development of ICP. About 89 percent of those who responded told us that ICP increased their productivity while 85 percent said it increased their ability to resolve the taxpayers’ questions on the initial contact at least to some extent. While the results of our survey of CSRs were generally positive, IRS had not attempted to measure the extent to which ICP had affected the services provided to taxpayers. Appendix II shows CSRs’ opinions on the extent to which ICP release 1.5 has allowed them to improve customer service and improved their ability to do their jobs. The testing of ICP 1.5 was too limited and did not measure ICP’s impact on business operations. Also, IRS discounted system downtime when analyzing the results of the test. IRS officials recognized the limitations of the ICP 1.5 testing and told us that testing of ICP 2.0 would be more comprehensive. IRS conducted its test of ICP 1.5 at the Nashville customer service site during a 4-week period in July and August 1995. Nashville, IRS’ prototype customer service site, had been using ICP for approximately 9 months before the test. The test was done during a nonpeak period, when IRS is not typically as busy as during the tax season months of January through April. Testing during a nonpeak period may not stress the system’s capacity. 
IRS officials said that testing was limited because ICP 1.5 was only intended to provide the data access foundation for developing ICP 2.0 and to put in place the hardware, telecommunications, and other infrastructure components required to implement the customer service vision. The National Research Council also reported that the ICP test was too limited to “yield the analytical results needed to appraise ICP in a full site-production mode.” The Council’s report also states that ICP “was tested during only one tax season, on a limited basis, before being deployed to other sites.” The purpose of a pilot test is to evaluate the performance of a system in one location before deciding whether to implement the system at other locations. IRS uses the pilot test to certify that the system is meeting its program or business objectives. IRS refers to this process as the “Business Certification.” During the pilot test, IRS was to collect data on the performance of the system and compare the data against established performance goals to certify that the system is performing as expected. To measure ICP’s impact on business operations, IRS examined six quality indicators—productivity, accuracy, timeliness, revenues, initial contact resolution, and customer satisfaction. IRS had difficulty measuring four of these six indicators, and its measure of the remaining two indicators was very limited in scope. Additionally, IRS based its measure of another indicator—quality of the workplace—on focus group discussions. Despite difficulties in measuring the impact on business, IRS officials decided to roll out the system to other sites because comments on the quality of the system from the workplace focus groups had been generally favorable. IRS discounted the results of its testing of accuracy and revenues collected because it could not isolate the impact of ICP from that of other changes in work processes. For example, during the certification test period, accuracy varied from 85 percent for questions on tax law and procedures to 36 percent for account questions. The national standard is 87 percent. The evaluation team concluded that the results of the accuracy and revenue tests were not comparable to national results because Nashville was “blending” certain collection and taxpayer service work and cross-training its employees to work both areas. IRS used very narrow measures to gauge the system’s effect on timeliness and productivity at the Nashville site. Timeliness and productivity measurements were limited to measuring gains made from a more timely process of ordering forms. According to the test results, ICP reduced the amount of time it took to order forms by 1 day and saved $118.68 per day, compared with fiscal year 1994 costs for direct labor and mail. However, ordering forms is only a small part of customer service. IRS did not measure the timeliness of handling taxpayers’ calls for other services, such as refund inquiries or the productivity of CSRs—concerning the number of calls they were able to answer. IRS had no baseline measures for customer satisfaction and initial contact resolution. This prevented IRS from measuring improvements over the status quo. The certification report gives no results for customer satisfaction and notes that surveys on customer satisfaction were not done. IRS reported the results of measures on initial contact resolution—the percentage of calls that IRS resolved in one contact. However, the rate—43 percent—is much lower than the goal of 95 percent. 
Nonetheless, IRS gave the system a "pass" mark on that indicator, stating that the "increased functionality expected in future releases of ICP should increase the overall ICR rate." Furthermore, IRS' measurement of how ICP 1.5 affected the quality of work life was limited to holding focus group discussions. Thirty-one of the 311 employees at the Nashville office participated in the focus groups. The 3 focus groups were made up of 14 experienced CSRs, 12 inexperienced CSRs, and 5 managers. According to the certification report, the experienced CSRs were "excited about the ICP system," but they expressed several concerns ranging from technological problems to lack of training. The report cautioned that "unless their concerns are addressed, the impression of the ICP system will turn into that of a curse rather than the now perceived blessing." Similarly, the managers said the system offered much promise, but they too were concerned about the technical problems associated with the system. The inexperienced CSRs were not as enthusiastic about the system as the experienced CSRs and managers. While they had concerns similar to those of the experienced CSRs, they were very concerned about the amount of system downtime. In our July 1995 report on TSM, we said that although IRS recognized the importance of testing, it had not yet developed a complete and comprehensive testing plan for TSM. We said that individual TSM systems were developing their own test plans, which IRS described as rudimentary and inadequate. If systems like ICP are not adequately tested, design and development errors may go undetected, leading to performance shortfalls. As with ICP, IRS failed to thoroughly test its Service Center Recognition Image Processing System (SCRIPS). The pilot test of SCRIPS was incomplete because it (1) did not certify all software applications that were to be used during 1995 and (2) did not test SCRIPS' ability to handle peak processing volumes. Many of the problems IRS experienced with SCRIPS, such as slow processing rates and system failures, might have been anticipated had IRS thoroughly tested the system before placing it into operation. IRS' technical certification report stated that the system was available to users more than 98 percent of the time during the 19-day test period. However, it reached that percentage by excluding 3 days in which the system was down. IRS excluded the downtime from the test results because officials said they believed they had corrected the technical problem and that it would not recur. Had the 3 days been included, the system would have been available to the users about 95 percent of the time. Downtime may be caused by problems with system elements that are related to ICP but were not directly measured in the ICP test. Most of the CSRs we interviewed in December 1995 and March 1996 said downtime was a problem at their site. Of the 185 CSRs who responded to this question, 82 percent said downtime had disrupted customer service, at least to some extent. The representatives considered downtime to be those times when they were unable to access information through their workstations, regardless of the cause. They told us that when the system went down they were unable to provide customer service. They said they either called the taxpayer back when the system was back up, or they told the taxpayer to call back later, anticipating that the system would be back in operation when the taxpayer called. 
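To make the arithmetic behind such availability figures concrete, the following minimal sketch uses purely hypothetical hours, not the actual Nashville test data; it shows only the mechanics of how excluding the outage days from the measurement window raises the computed percentage.

```python
# A minimal sketch with hypothetical figures, not the actual Nashville test
# data: it only illustrates how a reported availability percentage changes
# depending on whether the days with outages are counted in the window.

def availability(up_hours, window_hours):
    """Percentage of the measurement window during which the system was usable."""
    return 100.0 * up_hours / window_hours

SCHEDULED_HOURS_PER_DAY = 8          # assumed operating hours per test day
TEST_DAYS = 19
EXCLUDED_DAYS = 3
outage_hours = {5: 7.0, 6: 8.0, 7: 6.0, 12: 0.5}   # hypothetical: day -> hours down

full_window = TEST_DAYS * SCHEDULED_HOURS_PER_DAY
full_up = full_window - sum(outage_hours.values())
print(f"All {TEST_DAYS} days counted:   {availability(full_up, full_window):.1f}%")

# Dropping the worst days removes both their downtime and their scheduled
# hours from the window, which raises the computed figure.
worst_days = sorted(outage_hours, key=outage_hours.get, reverse=True)[:EXCLUDED_DAYS]
kept_window = (TEST_DAYS - EXCLUDED_DAYS) * SCHEDULED_HOURS_PER_DAY
kept_up = kept_window - sum(h for d, h in outage_hours.items() if d not in worst_days)
print(f"Worst {EXCLUDED_DAYS} days excluded: {availability(kept_up, kept_window):.1f}%")
```

The specific percentages printed depend entirely on the assumed schedule and outage log; the sketch is meant only to show why a figure computed over a trimmed window is not comparable to one computed over the full test period.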
IRS officials in Nashville said the downtime stemmed from power outages, telecommunications problems, and connectivity problems with the systems from which ICP pulls data. Both the Cincinnati and Atlanta customer service centers experienced similar problems with connectivity to these old systems.

IRS officials told us that they plan to conduct a more thorough pilot test of the next release of ICP. Software acceptance testing began in May 1996 and was scheduled to be completed in September 1996. ICP 2.0 was then to be subjected to 3 months of operational testing, using 40 CSRs at the Fresno customer service center beginning on September 30, 1996. It was to be expanded using about 160 CSRs in January 1997 and then rolled out to other customer service centers beginning in May 1997. However, on July 31, 1996, in a memorandum to the Commissioner and Deputy Commissioner, the Associate Commissioner for Modernization cancelled the September 30, 1996, pilot start date on the recommendation of the Business Site Executive. The Associate Commissioner noted that continued slippage in milestone dates for software programming and testing had jeopardized the pilot start date.

To address concerns about the project, IRS hired a contractor to perform a risk assessment of the entire project. We believe that this decision is a positive indication of IRS’ desire to ensure that the system is sound before it is tested in production. The final results of this risk assessment were submitted in October 1996. IRS has prepared a draft evaluation plan that it believes will enable it to make a sound business decision about further investment in ICP. In an interim report dated August 21, 1996, the contractor recommended that the pilot test be delayed at least 3 months. The contractor cited various reasons why the test should be delayed. For example, during software testing, ICP failed to recognize certain data or taxpayer issues when such data or issues existed, and it failed to shut down when data were entered into certain accounts that were supposed to be protected from additional data entry. The contractor also cited problems with (1) the accuracy of data and with the updating of taxpayers’ IDRS accounts; (2) the definition of the user requirements for ICP 2.0; and (3) hardware differences among development, test, and production sites. The contractor stated that the problems identified during software testing continued to increase in number and were not likely to be fixed in the near term.

According to IRS’ Customer Service Vision, ICP was expected to be the vehicle to provide CSRs access to the information they would need to answer all types of calls coming from taxpayers. Also, IRS planned to combine a phase of the collection process with customer service. However, according to IRS officials, after experimentation at the Nashville customer service center prototype, IRS is now reconsidering the extent to which CSRs will be able to answer the broad range of taxpayer questions anticipated if IRS reduces the current level of employee specialization and combines the customer service and some of the collection functions. Modifications to ICP and/or subsequent investments in information technology may be required as the roles and responsibilities of a CSR continue to evolve. IRS hired a contractor and formed a team in February 1996 to examine the customer-service work processes and duties of CSRs.
IRS acknowledges that many questions still need to be resolved on future job scope and structure of the customer service position. The contractor has been focusing on the redesign of current operations, systems, and organizations and the design of the CSR’s position. The contractor’s task is to develop a quality oriented, workable customer service system that furthers IRS’ objectives, enhances employee and customer satisfaction, and maximizes efficient use of resources. The draft design report was issued to IRS for review and comment on August 30, 1996. IRS has traditionally operated its telephone activities along functional lines, with employees specializing in specific areas. As such, an IRS telephone representative does not handle a broad range of inquiries. For example, if a business taxpayer called IRS regarding a balance due, the taxpayer would be routed to an IRS employee who specialized in handling business accounts and who handled only business account calls. As originally envisioned, ICP was to allow CSRs to perform a wide range of tasks, rather than have specific areas of expertise. ICP was to consolidate data from multiple databases and eliminate the complex command codes that IRS employees are now required to know in order to access and update taxpayer account information. Also, IRS planned to combine its initial efforts to collect taxes owed with its traditional customer service work. Essentially, with this blending of work, a CSR would be expected to answer all types of taxpayer calls. For example, a representative could receive a call from an individual taxpayer inquiring about a refund, and the next call could be an income tax preparer asking questions about IRS procedures or tax law. The CSR described in the vision would require a far broader knowledge base and much more extensive training than under the traditional telephone operations. IRS is reconsidering the extent to which CSRs will be able to answer the broad range of questions. IRS officials said they tested the blending concept at the Nashville prototype site and concluded that blending all the duties into one position was not feasible. The work systems design team is expected to decide how the work will be performed and define the duties of CSRs. Senior executives say they are committed to merging the taxpayer service and compliance functions. They acknowledge that certain issues must be resolved, such as how much tax knowledge a CSR needs to have, the proper skill level, and what authority the position should have to make certain decisions about a taxpayer account. CSRs that we talked with had mixed views on the extent to which blending has improved IRS’ ability to serve taxpayers. Twenty-two percent of the CSRs said blending improved their ability to assist taxpayers to little or no extent while another 22 percent said it improved their ability to assist taxpayers to a great extent. Some CSRs said blending allows them to provide one-stop service to the taxpayer without transferring them to other CSRs, while others said that one CSR cannot be responsible for performing multiple jobs. Some CSRs also said blending causes inaccuracy and requires more time per call because CSRs are less proficient when performing multiple jobs. IRS officials said that ICP 2.0 has been designed to provide CSRs with the most basic capabilities that both traditional telephone assistors and collection staff would find useful. 
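The contrast between IRS’ traditional functional routing and the blended CSR concept described above can be sketched as a difference in how calls are matched to assistor pools. The call categories and queue names below are invented for illustration; the report does not describe ICP’s actual routing rules.

```python
# Illustrative contrast between functional (specialized) routing and the
# blended-CSR concept: under functional routing each call type goes to a
# dedicated specialist queue, while a blended CSR pool handles any call type.
# Call categories and queue names are hypothetical.

SPECIALIZED_QUEUES = {
    "business_balance_due": "business_accounts",
    "individual_refund": "individual_accounts",
    "tax_law_question": "tax_law",
    "collection_notice": "collection",
}

def route_functional(call_type):
    """Traditional model: route to the queue that specializes in this call type."""
    return SPECIALIZED_QUEUES.get(call_type, "general_inquiries")

def route_blended(call_type):
    """Blended model: any CSR in the pool is expected to handle the call."""
    return "blended_csr_pool"

for call in ("business_balance_due", "individual_refund", "tax_law_question"):
    print(call, "->", route_functional(call), "|", route_blended(call))
```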
Indeed, many of the capabilities expected from ICP 2.0 would provide clear advantages over IRS’ existing systems. However, until the role of the CSR is defined, it is unlikely that IRS will be able to provide information technology solutions that maximize productivity and customer service. As we have previously reported, organizations that successfully develop systems and achieve significant operational improvements do so only after analyzing and redesigning critical business processes. At the time of our review, the process of designing, testing, and implementing the role and processes surrounding the CSR was still not complete. Until IRS completes its work systems design effort, the information technology requirements to support CSRs will not be fully understood. The information CSRs need and the presentation of data might change from IRS’ initial vision and current ICP requirements because of the results of the work systems design effort. Therefore, CSRs may not require the same capabilities from ICP, as previously envisioned, in order to provide customer service.

At the time of our visits, some sites were not using all of ICP’s capabilities because current duties did not require those capabilities. For example, CSRs at the Atlanta and Cincinnati customer service centers were using ICP to access IDRS when responding to inquiries from taxpayers who had received collection notices from IRS. They did not use ICP’s capabilities to access additional databases such as ACS and AUR. Customer service center officials reported that they were not servicing the kinds of calls that require access to either ACS or AUR.

None of the ICP software development projects reviewed fully satisfy any of the KPAs that the SEI’s CMM requires for a CMM level 2 rating, or “a repeatable software development process.” In this regard, we found that three IRS organizations developing ICP software are extremely weak in the following KPAs: requirements management, software project planning, software project tracking and oversight, software quality assurance, and software configuration management. As a result, successful delivery of ICP 2.0 software is unlikely. Each of the five KPAs, along with examples of how the software development organizations compare to the KPA goals, is summarized below. Appendix III details how well each of the three organizations performed the KPA goals.

Requirements Management - The purpose of requirements management is to establish a common understanding and agreement between the customer and the software project management on the customer’s requirements that are to be addressed through the software. One of the two goals of this KPA states that, “software plans, products, and activities are kept consistent with the system requirements allocated to software.” While IRS produces a number of documents that contain varying levels of detail on customer requirements—for example, (1) the configuration item list; (2) the administrative request for information services; (3) the system architectural description; and (4) the concept of operations—IRS does not update these documents, as requirements change, to ensure that they are complete, consistent, and current. As a result, IRS has no assurance that the code being written and tested is traceable to customer requirements.

Software Project Planning - The purpose of software project planning is to establish reasonable plans for performing the software engineering and for managing the software project.
One of the three goals within this KPA states that, “software project activities and commitments are planned and documented.” IRS does not have a defined process governing software project planning. Moreover, the ICP software projects do not have documented software plans. Without these plans, IRS cannot effectively measure and monitor software development progress and take appropriate action when needed. Software Project Tracking and Oversight - The software project tracking and oversight process provides insight into actual project progress so that management can take effective actions when the software project’s performance deviates significantly from the software plans. One of the three goals within this KPA is that “actual results and performances are tracked against the software plans.” As noted above, IRS does not have ICP software development plans, and while it tracks the software project against schedules, these schedules are not derived using generally accepted government or industry software engineering methods. As a result, management cannot tell when actual progress warrants corrective action. Software Quality Assurance - The purpose of software quality assurance is to enable management to assess the quality of the process being used by the software project and of the products being built. Two of the four goals within this KPA emphasize that (1) “software quality assurance activities are planned” and (2) “adherence of software products and activities to applicable standards, procedures, and requirements is verified objectively.” The ICP software projects do not have software quality assurance plans. In addition, a software quality assurance group does not participate in certain required software quality assurance functions, such as the preparation, review, and audit of projects’ software development plans, standards, and procedures. As a result, IRS has no assurance that the ICP software is being developed in a quality fashion and will perform as intended. Software Configuration Management - The purpose of software configuration management is to establish and maintain the integrity of products of the software project throughout the project’s software life cycle. Two of the four goals of the configuration management KPA require that (1) “software configuration management activities be planned” and (2) “software work products be identified, controlled, and available.” The ICP software projects reviewed do not have software configuration management plans. In addition, although IRS controls changes to source code using a tool called Source Code Control System, the requirements within this KPA require change control to all software products created within the entire software life cycle. Specifically, IRS has not identified software work products—other than source code—such as requirements documentation, design specifications, test plans and results that need to be placed under configuration management. As a result, IRS does not know whether all its software products are complete, consistent, and current. Modernizing IRS’ systems is critical to IRS reaching its Customer Service Vision. As envisioned, ICP is planned to offer some clear advantages over IRS’ existing information systems and could improve taxpayer services. 
However, the success of ICP may be at risk because IRS has made substantial investments in the system without having (1) validated the costs and benefits by thoroughly testing the ICP system, (2) finalized the redesign of work processes that ICP will support, and (3) achieved the software development maturity needed to successfully build the envisioned capabilities within planned cost and milestones. Some of these problems are evident in the recent slippage in milestone dates for software programming and testing that forced the cancellation of the September 30, 1996, pilot start date for ICP release 2.0. The contractor’s interim report assessing the risks associated with ICP development also supports delaying the pilot test. IRS has invested millions of dollars in ICP without having the cost and benefit data needed to fully assess the program, including analyzing program risks and making the most appropriate investment decisions. Furthermore, IRS’ testing of ICP 1.5 was limited and lacked baseline measures to gauge the success of this and future releases. Until IRS settles outstanding issues with its work processes, such as the scope of the duties of CSRs, it will not be in a position to adequately project whether ICP will provide the necessary capabilities or be the best system for customer service. The views of CSRs were generally supportive of an early version of ICP. However, continuing with plans to develop and deploy ICP when its benefits cannot be measured is risky. In this regard, until IRS implements a way to measure benefits, the extent to which ICP is likely to improve customer service and provide a positive return on investment cannot be determined. IRS is unnecessarily risking hundreds of millions of dollars by attempting to develop ICP software without having the requisite processes for doing so.

Concurrent with the risk assessment being performed by the contractor, we recommend that the IRS Commissioner immediately limit deployment of ICP workstations to those already purchased until (1) projected costs and benefits are better known and can be validated by testing the system in a realistic operational environment, using baseline performance measures, and (2) decisions are made on work processes, including the blending of collection and service work and specific duties of CSRs. We also recommend that expedient steps be taken to better position IRS to develop its software successfully and to protect its software investments. Specifically, we recommend that the IRS Commissioner take the following actions: (1) develop and implement an action plan to ensure that ICP software is developed by an organization(s) with at least a level 2 CMM rating and (2) delay any major investment in ICP software until the action plan is implemented.

We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designated representative. Responsible IRS officials, including the Customer Service Site Executive and the National Director, Customer Service Planning and Systems Division, provided IRS’ comments in a November 21, 1996, meeting. These comments were supplemented by a November 26, 1996, memorandum from the National Director, Customer Service Planning and Systems Division, and the Deputy Chief Information Officer (Systems Development) that addressed our recommendations and clarified remarks made during our discussion. We considered IRS’ comments and modified this report where appropriate.
IRS officials agreed with our recommendation to limit further deployment of ICP workstations to those already purchased. They are currently considering several alternatives for reevaluating ICP. They said these alternatives, along with a recommendation, will be presented to IRS’ Investment Review Board in the near future. Additionally, IRS officials generally agreed with our assessment of ICP software development processes and agreed with our recommendation that they need at least a CMM level 2 capability to develop ICP software. The officials added that future ICP development is to be done using CMM level 2 processes and that, as we recommended, major investments in ICP will be delayed until this level of capability is achieved. They also added that their plan for achieving this level of capability involved two options—software development process improvements and heavy reliance on software development contractors. With respect to the former, the officials cited examples of improvement initiatives under way and planned, such as use of a requirements traceability matrix and software quality assurance program. We believe IRS’ two proposed actions to improve software development capabilities are not totally responsive to our recommendation. First, while the software process improvements cited are a step in the right direction, these actions should be part of a complete and comprehensive action plan for process improvement, as we recommended, which is rooted in SEI’s CMM level 2 KPA requirements. Second, to effectively acquire software using development contractors, IRS must have at least SEI defined CMM level 2 software acquisition processes. Moreover, it must ensure that its development contractors have at least level 2 development capabilities. Accordingly, IRS’ action plan for ICP should specify how this goal will be accomplished before it relies on contractors to develop ICP. IRS officials also stated that some ICP software had been developed using nationally recognized standards. For example, they cited software for computer screens, developed by IRS for use on multiple systems, including ICP. However, as stated in the objectives, scope, and methodology section of this report, our software capability assessment addressed those IRS organizations responsible for developing ICP applications software. IRS officials raised concerns about the amount of money cited in our report as spent on ICP through fiscal year 1995. Rather than $150 million, they now believe the investment in ICP through 1995 is about $73 million. Throughout our review, we had difficulty determining the amount spent on ICP. At one point, IRS officials told us that $171.5 million had been spent on ICP through fiscal year 1995, as reported in the May 6, 1996, Treasury report to the House and Senate Appropriations Committees. They later told us they found errors in that estimate, and the actual investment in ICP through 1995 was about $150 million. Now, they believe the $150 million cost projection was overstated because it included costs for the Aspect Automated Call Distributor System, which are not directly attributable to ICP. While we agree that some equipment costs are included in the $150 million figure, we are uncertain how much is attributable to the Aspect system because we did not validate the accuracy of IRS’ estimates. Accordingly, the $150 million was retained in this report. 
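Among the improvement initiatives IRS cited is a requirements traceability matrix. As a rough illustration of the kind of check such a matrix supports, the sketch below flags requirements with no associated work product and work products that cite undefined requirements; all identifiers are hypothetical and do not represent IRS’ actual matrix or tools.

```python
# A minimal requirements-traceability check of the kind such a matrix supports:
# every requirement should map to at least one work product (code module, test
# case), and every work product should trace back to a defined requirement.
# All identifiers below are hypothetical.

requirements = {"REQ-001", "REQ-002", "REQ-003"}

# work product -> requirements it claims to satisfy (hypothetical)
trace_matrix = {
    "case_view_module": {"REQ-001"},
    "account_update_module": {"REQ-002"},
    "account_update_tests": {"REQ-002", "REQ-999"},  # REQ-999 is undefined
}

def check_traceability(requirements, trace_matrix):
    traced = set().union(*trace_matrix.values()) if trace_matrix else set()
    untraced_requirements = requirements - traced
    unknown_references = traced - requirements
    return untraced_requirements, unknown_references

untraced, unknown = check_traceability(requirements, trace_matrix)
print("requirements with no work product:", sorted(untraced))
print("work products cite undefined requirements:", sorted(unknown))
```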
We are sending copies of this report to the Ranking Minority Member of your Subcommittee, the Chairman and Ranking Minority Member of the Senate Committee on Finance and other appropriate congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. Major contributors to this report are listed in appendix IV. If you or your staff have any questions concerning this report, please call me at (202) 512-8633.

This section describes the methodology we used to evaluate the software development capabilities of the organizations that are developing ICP software. The Software Capability Evaluation (SCE) is a method for evaluating agencies’ and contractors’ software development processes against the Software Engineering Institute’s (SEI) five-level software Capability Maturity Model (CMM), as shown in table I.1. These levels, the key process areas (KPA) described within each level, and the goals within each KPA, define an organization’s ability to develop software and can be used to guide software development process improvement activities. The findings generated from an SCE identify (1) process strengths that mitigate risks, (2) process weaknesses that increase risks, and (3) improvement activities that indicate potential mitigation of risks. Table I.1 describes the five maturity levels:

Level 5 (Optimizing): Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

Level 4 (Managed): Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.

Level 3 (Defined): The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization’s standard software process for developing and maintaining software.

Level 2 (Repeatable): Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

Level 1 (Initial): The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort.

In our July 1995 report, we reported that IRS was a CMM level 1 software development organization and that unless IRS improved its software development capability, it was unlikely to build Tax Systems Modernization (TSM) systems timely or economically. In June 1996, we reported that IRS had begun to act on our recommendations in this area; however, none of the actions were complete or institutionalized. At that time, IRS’ Chief Information Officer agreed that IRS was not yet institutionally a CMM level 2, but stated that some CMM level 2 processes were being used to develop Integrated Case Processing (ICP). Therefore, we evaluated ICP software development organizations that were said to be using CMM level 2 requirements. Specifically, we evaluated two ICP version 2 subsystems that are being developed in three locations—Dallas, Texas; Austin, Texas; and Fresno, California. We evaluated the software development processes used on these projects, focusing on KPAs necessary to achieve a “repeatable” capability or CMM level 2. According to SEI, organizations that have a repeatable software development process have been able to significantly improve their productivity and return on investment.
In contrast, organizations that have not developed the process discipline necessary to better manage and control their projects at the repeatable level incur greater risk of schedule delay, cost overruns, and poor quality software. These organizations rely solely upon the variable capabilities of individuals, rather than on institutionalized processes considered basic to software development. According to SEI, KPAs for a repeatable capability are considered the most basic in establishing discipline and control in software development and are crucial steps for any project to mitigate risks associated with cost, schedule, and quality. These KPAs are identified and described in table I.2:

Requirements management: defining, validating, and prioritizing requirements, such as functions, performance, and delivery dates.

Software project planning: developing estimates for the work to be performed, establishing the necessary commitments, and defining the plan to perform the work.

Software project tracking and oversight: tracking and reviewing software accomplishments and results against documented estimates, commitments, and plans and adjusting these based on the actual accomplishments and results.

Software subcontract management: selecting qualified contractors and managing them effectively.

Software quality assurance: reviewing and auditing the software products and activities to ensure that they comply with the applicable processes, standards, and procedures and providing the staff and managers with the results of their reviews and audits.

Software configuration management: selecting project baseline items, such as specifications; systematically controlling these items and changes to them; and recording and reporting status and change activity for these items.

Appendix II presents two tables summarizing CSRs’ views: table II.1, CSRs’ Views on the Extent That ICP Has Allowed Them to Improve Customer Service in Selected Areas (as a Percentage of All Comments), and table II.2, CSRs’ Views on the Extent That Various ICP Capabilities Have Improved Their Ability to Do Their Jobs (as a Percentage of All Comments). The Servicewide Electronic Research Project (SERP) is IRS’ automated system for researching IRS publications.

Table III.1 summarizes our detailed findings from our software capability evaluation at three of IRS’ ICP development centers. As mentioned in appendix I, we evaluated the software development processes used on ICP software development projects at three centers, focusing on the key process areas (KPA) necessary to achieve a Capability Maturity Model (CMM) level 2 rating. CMM level 2 is achieved by satisfying all of the five KPAs under this level. To satisfy a given KPA, all of that area’s goals must be satisfied. Satisfying a goal, in turn, requires effectively meeting all of the activities associated with that goal. Table III.1 identifies whether each of the IRS development centers satisfied the KPAs, the associated goals, and activities. In accordance with the Software Engineering Institute’s (SEI) CMM assessment methodology, the activities within the respective goals are characterized as (1) a “strength” if IRS’ implementation of the activity was effective, (2) a “weakness” if IRS’ implementation of the CMM activity was ineffective, or IRS failed to implement an acceptable alternative, and (3) “not applicable” if the activity does not apply to the center’s software development environment.
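These activity ratings roll up into the goal and KPA classifications reported in table III.1; the rule, stated formally in the next paragraph, can be sketched as follows. The ratings shown are hypothetical and are not the actual findings for the three IRS development centers.

```python
# Sketch of the roll-up rule used for table III.1: an activity is rated
# "strength", "weakness", or "not applicable"; a goal is satisfied only if
# none of its activities is a weakness; and a KPA is satisfied only if all of
# its goals are satisfied. The ratings below are hypothetical.

kpa_ratings = {
    "requirements management": {
        "goal 1": ["strength", "strength"],
        "goal 2": ["strength", "weakness"],
    },
    "software configuration management": {
        "goal 1": ["weakness"],
        "goal 2": ["strength", "not applicable"],
    },
}

def goal_satisfied(activity_ratings):
    return "weakness" not in activity_ratings

def kpa_satisfied(goals):
    return all(goal_satisfied(acts) for acts in goals.values())

for kpa, goals in kpa_ratings.items():
    status = "satisfied" if kpa_satisfied(goals) else "not satisfied"
    print(f"{kpa}: {status}")
```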
Therefore, in table III.1, a goal is classified as “not satisfied” when any associated activity is classified as a “weakness,” and a KPA is classified as “not satisfied” when any associated goal is classified as “not satisfied.” The KPAs, goals, and evaluated activities covered by table III.1 are summarized below.

Requirements management: to establish a common understanding between the customer and the software project of the customer’s requirements that will be addressed by the software project. Goal 1: System requirements allocated to software are controlled to establish a baseline for software engineering and management use. Goal 2: Software plans, products, and activities are kept consistent with the system requirements allocated to software. The activities evaluated include the following: the software engineering group reviews the allocated requirements before they are incorporated into the software project; the software engineering group uses the allocated requirements as a basis for software plans, work products, and activities; and changes to the allocated requirements are reviewed and incorporated into the software project.

Software configuration management: to establish and maintain the integrity of products of the software project throughout the project’s software life cycle. Goal 1: Software configuration management activities are planned. Goal 2: Selected software work products are identified, controlled, and available. Goal 3: Changes to identified software work products are controlled. Goal 4: Affected groups and individuals are informed of the status and content of software baselines. The activities evaluated include the following: a software configuration management plan is prepared for each software project according to a documented procedure; a documented and approved software configuration management plan is used as a basis for performing software configuration management activities; a configuration management library system is established as a repository for the software baselines; the software work products to be placed under configuration management are identified; products from the software baseline library are created and their release is controlled according to a documented procedure; change requests and problem reports for all configuration items/units are initiated, recorded, approved, and tracked according to a documented procedure; changes to baselines are controlled according to a documented procedure; the status of configuration items/units is recorded according to a documented procedure; standard reports documenting the software configuration management activities and the contents of the software baseline are developed and made available to affected groups and individuals; and software baseline audits are conducted according to documented procedures.

Software quality assurance: to provide management with appropriate visibility into the process being used by the software project and of the products being built. Goal 1: Software quality assurance activities are planned. Goal 2: Adherence of software products and activities to the applicable standards, procedures, and requirements is verified objectively. Goal 3: Affected groups and individuals are informed of software quality assurance activities and results. Goal 4: Noncompliance issues that cannot be resolved within the software project are addressed by senior management. The activities evaluated include the following: a software quality assurance plan is prepared for the software project according to a documented procedure; the software quality assurance group’s activities are performed in accordance with the software quality assurance plan; the software quality assurance group participates in the preparation and review of the project’s software development plan, standards, and procedures; the software quality assurance group reviews the software engineering activities to verify compliance; the software quality assurance group audits designated software work products to verify compliance; the software quality assurance group periodically reports the results of its activities to the software engineering group; deviations identified in the software activities and software work products are documented and handled according to a documented procedure; and the software quality assurance group conducts periodic reviews of its activities and findings with the customer’s software quality assurance personnel, as appropriate.

Software project planning: to establish reasonable plans for performing the software engineering and for managing the software project. Goal 1: Software estimates are documented for use in planning and tracking the software project. Goal 2: Software project activities and commitments are planned and documented. Goal 3: Affected groups and individuals agree to their commitments related to the software project. The activities evaluated include the following: estimates for the size of software work products (or changes to the size of the software work products) are derived according to a documented procedure; estimates for the software project’s effort and cost are derived according to a documented procedure; estimates for the project’s critical computer resources are derived according to a documented procedure; the project’s software schedule is derived according to a documented procedure; software project planning is initiated in the early stages of, and in parallel with, the overall project planning; a software life cycle with predefined stages of manageable size is identified or defined; the project’s software development plan is developed according to a documented procedure; the plan for the software project is documented; software work products that are needed to establish and maintain control of the software project are identified; the software risks associated with the cost, resource, schedule, and technical aspects of the project are identified, assessed, and documented; plans for the project’s software engineering facilities and support tools are prepared; the software engineering group participates on the project proposal team; the software engineering group participates with other affected groups in the overall project planning throughout the project life cycle; and software project commitments made to individuals and groups external to the organization are reviewed with senior management according to a documented procedure.

Software project tracking and oversight: to provide adequate visibility into actual progress so that management can take effective actions when the software project’s performance deviates significantly from the software plans. Goal 1: Actual results and performances are tracked against the software plans. Goal 2: Corrective actions are taken and managed to closure when actual results and performance deviate significantly from the software plans. Goal 3: Changes to software commitments are agreed to by the affected groups and individuals. The activities evaluated include the following: a documented software development plan is used for tracking the software activities and communicating status; the size of the software work products (or size of the changes to the software work products) are tracked, and corrective actions are taken as necessary; the project’s software effort and costs are tracked, and corrective actions are taken as necessary; the project’s critical computer resources are tracked, and corrective actions are taken as necessary; the project’s software schedule is tracked, and corrective actions are taken as necessary; the software engineering technical activities are tracked, and corrective actions are taken as necessary; the software risks associated with cost, resource, schedule, and technical aspects of the project are tracked; actual measurement and replanning data for the software project are recorded; the software engineering group conducts periodic internal reviews to track technical progress, plans, performance, and issues; formal reviews to address the accomplishments and results of the software project are conducted at selected project milestones according to a documented procedure; the project’s software development plan is revised according to a documented procedure; software project commitments and changes to commitments made to individuals and groups external to the organization are reviewed with senior management according to a documented procedure; and approved changes to commitments that affect the software project are communicated to the members of the software engineering group and other software-related groups.

Major contributors to this report (appendix IV): Leonard Baptiste, Jr., Senior Assistant Director; Kelly A. Wolslayer, Senior Information Systems Analyst; Madhav S. Panwar, SCE Team Leader; David Chao, SCE Team Member; Nancy M. Donnellan, Information Systems Analyst; Leonard J. Latham, SCE Team Member; K. Alan Merrill, SCE Team Member; Paul Silverman, SCE Team Member.
Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) Integrated Case Processing (ICP) systems development effort, focusing on: (1) IRS' assessment of ICP costs and benefits and users' perceptions of the system's benefits; (2) IRS' testing of ICP; (3) IRS' ongoing efforts to redesign its customer service work processes to fully use ICP capabilities; and (4) the software development processes being used for ICP. GAO found that: (1) improving service to taxpayers is an important goal that IRS' Customer Service Vision shows promise in addressing, but the promise anticipated by the vision is unlikely to be fulfilled unless changes are made in the development and deployment of ICP; (2) IRS estimated that about $150 million was spent on ICP from 1993 to 1995 and that an additional $77 million will be spent through 1996; (3) overall, IRS plans to spend about $641 million on ICP through fiscal year 2000; (4) despite this sizable investment, costs and benefits remain uncertain because: (a) the scheduled rollout of ICP workstations continues to change; (b) the ICP capabilities have not been finalized; (c) certain benefits are still to be determined; and (d) the software is still being developed; (5) IRS planned that certain ICP capabilities being developed would be pilot tested beginning on September 30, 1996, but in a memorandum dated July 31, 1996, the Associate Commissioner for Modernization postponed the pilot test indefinitely; (6) IRS developed and initiated a limited deployment of the initial ICP version; (7) the test results provided little insight on the potential benefits of the system, because IRS did not adequately measure ICP's impact on business operations; (8) IRS officials recognized the limitations of the testing and told GAO that testing of the next software release would be more comprehensive; (9) it is unclear how this and future versions will support new work processes that are being designed; (10) according to IRS' Customer Service Vision, ICP was expected to be the vehicle to provide customer service representatives with access to information that would enable IRS to combine a phase of the tax collection process with customer service; (11) IRS is now reconsidering the extent to which the collection process can be combined with customer service and is reconsidering the range of tasks a customer service representative can be expected to perform; and (12) the software development processes in place at IRS organizations responsible for developing ICP software are extremely weak, making the likelihood of their producing quality ICP software on time and within budget very low.
The SNRA is the largest of the 38 national recreation areas within the United States. National recreation areas are areas within the National Forest System that have outstanding combinations of outdoor recreation opportunities, scenery, and proximity to potential users. They may also have cultural, historic, and other values contributing to public enjoyment. The SNRA comprises 754,000 acres in four Idaho counties—Custer, Blaine, Elmore, and Boise. Within the SNRA’s boundaries lie portions of five mountain ranges, the headwaters of five major rivers, and over 1,000 lakes. The SNRA’s forests, valleys, and barren ridges are home for a variety of wildlife species, including mountain goat, bighorn sheep, elk, moose, and deer. The lakes and streams provide important habitat for native fish populations as well as spawning grounds for chinook and sockeye salmon, which are listed and protected as threatened or endangered species under the Endangered Species Act of 1973. Visitors to the SNRA take trips on horseback and float-boats, camp, ski, and enjoy a variety of other recreational activities. (See app. I for a map of the Recreation Area.) Forest Service policy calls for its special recreation areas to be managed as showcases to demonstrate national forest management standards for programs, services, and facilities. While the policy does not define the term “showcase,” Forest Service officials interpret it to mean that these areas should be developed and managed to a noticeably higher standard than other Forest Service recreation units. The legislation creating the SNRA provides that the management, utilization, and disposal of natural resources on federal lands—such as timber, grazing, and mineral resources—could continue insofar as their utilization does not substantially impair the purposes for which the Recreation Area was created. Funding levels for the SNRA for the period from 1993 through 1997 were developed through a two-part process. First, Forest Service budgets were developed through a largely bottom-up process. Based on guidance derived from plans and the funds likely to be available, the lowest-level units, such as ranger districts and recreation areas, developed and presented their funding requests to successively higher units—forests, regions, the Washington headquarters—where the requests were consolidated and prioritized. (Fig. II.1 shows the Forest Service’s organization to include regions, forests, and recreation areas.) These requests provided the basis for the total Forest Service budget requests that were presented to the House and Senate Committees on Appropriations for their review, revision, and approval. Once the budgets were enacted, Forest Service headquarters allocated funds to each of its nine regions. The regions, in turn, allocated funds to individual forests, and the forests then allocated funds to the ranger districts and recreation areas. In 1996, the Forest Service revised its budget development and allocation process. The Forest Service now uses a more top-down approach through which it allocates funds to the regions based on a series of national criteria. In 1998, the region responsible for the SNRA began using similar criteria to allocate funds within the region. The Forest Service made the change because it believes that using the allocation criteria improves the objectivity and rationality of the budget as a process for establishing policy and program priorities. 
The criteria establish a visible and rational basis for allocating resources by identifying differences among units in terms of resource conditions, workload, production capabilities, and other elements. The SNRA is one of five units within the Sawtooth National Forest. The other four units are ranger districts. Special recreation areas, such as the SNRA, are designated by specific legislative acts, and typically, the legislation directs the Secretary of Agriculture to manage these areas in a manner that best provides for public outdoor recreation benefits and the conservation of scenic, scientific, historic, and other values. In contrast, ranger districts are administratively established units responsible for the broad range of activities covered by the Forest Service’s multiple-use mission, which includes managing timber harvesting, mining, and grazing as well as recreation. The Sawtooth National Forest is, in turn, part of the Forest Service’s Intermountain Region, which contains 16 forests. The yearly allocations to the SNRA contain a variety of funding categories. Three accounts are particularly important in terms of their impact on the SNRA. First, the National Forest System account, which is by far the largest, provides funding for a number of different functions such as recreation management, protection of threatened and endangered species, and rangeland management. The two other accounts are construction and land acquisition; the sizes of these two accounts vary substantially from year to year, depending on whether construction projects or land acquisitions are approved. The SNRA’s budget decreased by 54 percent in constant dollars from 1993 through 1997. However, special circumstances make the decrease misleadingly large. The SNRA’s budget for 1993 was part of a 2-year funding peak brought about by the funding of several special projects. After we controlled for these special circumstances, the SNRA’s budget decrease for the 5-year period was about 26 percent. The SNRA’s decrease compares with about an 18-percent decrease for the Sawtooth National Forest, a 1.6-percent decrease for the region that includes the SNRA, and about a 12-percent decrease for the Forest Service as a whole. The Forest Service’s budget records show that the SNRA’s total budget decreased from $4.88 million in 1993 to $2.25 million in 1997 in constant dollars. The SNRA’s staff also decreased. The number of full-time staff decreased from 31 to 29, and the number of seasonal staff providing visitor services and maintenance decreased from 75 to 26 during the 1993-97 period. Although the SNRA’s budget and staffing decreased during this period, SNRA officials said that recreation use ranged from about 1.2 million to 1.3 million visitor days per year between 1993 and 1997. Because of the magnitude of the decrease in funding over the 1993-97 period, we reviewed the area’s yearly budgets over a somewhat longer period to identify the reasons for the decrease. The funding trend for the SNRA for the 1991-97 period is shown in figure 1. As figure 1 shows, fiscal years 1992 and 1993 were peak funding years for the SNRA. Three factors account for the funding peak and help to explain a large part of the 54-percent decrease from 1993 through 1997. In 1993, the SNRA received a variety of special funds, including $365,000 for recreation management and $446,000 for the construction of recreation facilities as part of a special multiyear initiative aimed at improving recreation programs throughout the Forest Service. 
In addition, the SNRA received $659,000 for land acquisition. The availability of these special funds decreased dramatically from 1993 to 1994 when the recreation initiative ended. The 1997 budget contained no funds for special projects, construction of recreation facilities, or land acquisition. However, the fiscal year 1997 allocation included $260,000 for recreation road construction. Our analysis shows that if these types of funds are consistently excluded from the 1993 and 1997 budgets, the decrease in the SNRA’s budget from 1993 through 1997 was about 26 percent. A comparison of the trend in the SNRA’s budget with budget trends for the forest, the region, and the Forest Service shows that the SNRA’s budget decrease was part of a general funding decrease that affected all levels. The Sawtooth National Forest determines, within the limits of its own budget, how much the SNRA receives each year. The forest’s budget decreased by 18 percent over the 1993-97 period. The budget allocation for the Intermountain Region, which allocates funds to the Sawtooth National Forest and all other forests within the region, decreased by 1.6 percent during the same period. Forest Service officials told us that the Sawtooth National Forest’s budget had a larger decrease than the region’s because the priority of the Sawtooth Forest compared with that of the other forests in the region decreased over the period. The changes in the region’s, the forest’s, and the SNRA’s budgets have occurred within the context of the Forest Service’s annual national budgets, which decreased from $3.2 billion to $2.8 billion, or about 12 percent, over the 5-year period. With the fiscal year 1996 budget, the Forest Service changed how it allocates funds to its regions to a process called criteria-based budgeting. The Forest Service changed this process to sharpen the focus on objectives and to establish a rational basis for allocating resources. In fiscal year 1998, the Intermountain Region began using a similar process for allocating funds to forests. According to regional office officials, the new criteria-based process further reduced the SNRA’s budget. Two of the new criteria used to allocate recreation management funds in the 1998 budget process were the number of visitors and the developed site capacity—that is, the capacity of the area to handle visitors. These two criteria provided the basis for over 60 percent of the SNRA’s 1998 recreation management budget allocation. According to agency officials, recreation areas such as the SNRA do not compete well in the new criteria-based budget process because they have fewer recreation visitors and less capacity than other Forest Service units such as ranger districts located near urban areas. To prevent the continued decline in the SNRA’s budget, the regional office has ensured that for fiscal year 1999, the SNRA will be funded at no less than fiscal year 1997 levels for the following accounts: (1) National Forest System, which funds functions such as recreation management; (2) construction; and (3) fire management. (App. III provides additional information on the budgets for the SNRA and other Forest Service units.) Furthermore, starting with the fiscal year 2000 budget, the region has modified the criteria to provide the recreation areas with a larger portion of the region’s recreation management funds. In 1997, enhancing recreation, preserving conservation values, and managing commodity programs accounted for 63 percent of the SNRA’s total spending. 
Our analysis of agency data shows that the SNRA’s spending for these programs for fiscal years 1993 through 1997 decreased—57 percent for enhancing recreation, 21 percent for preserving conservation values, and 44 percent for managing commodity programs. As with the SNRA’s overall budget, the change in spending for these areas was misleadingly large because of the special funds allocated to the SNRA in 1993. SNRA officials said that the Recreation Area had significant accomplishments in the recreation, conservation, and commodity programs since 1993. However, the officials also said that each program still has unmet needs. Figure 2 shows the trend in expenditures for each of these areas. The SNRA’s spending for enhancing recreation decreased from $1.8 million in 1993 to $783,100 in 1997—a decrease of about 57 percent. Funding for enhancing recreation includes funds for recreation management and the construction of recreation facilities and trails. (See table IV.1 for more detailed information.) In support of its recreation program, the SNRA is responsible for maintaining over 75 developed recreational facilities that include campsites, picnic areas, boat launches, scenic overlooks, and trailheads with 750 miles of trails. It also provides dispersed recreation activities that include camping, hunting, facilities for off-road vehicles, customer service patrols, interpretive programs, and visitor centers. A large portion of the 57-percent decrease in spending occurred because funds from the multiyear recreation initiative program, which ended in 1993, were no longer appropriated. If the funds for that initiative and other special projects are excluded from the analysis, we estimate that spending in 1993 would have been about $970,000. Thus, the change from 1993 to 1997 would have shown a decrease of about $187,000, or about 19 percent. While spending for recreation decreased, visitation to the SNRA over the 5-year period remained relatively constant, ranging from about 1.2 million to 1.3 million visitor days. Despite these reductions in spending for enhancing recreation and the reductions in seasonal staff previously mentioned, SNRA officials told us they had made significant accomplishments since 1993. They said that in addition to providing for day-to-day operations, they had been able to continue providing the same recreational services to the public (such as camping), to renovate some camping facilities, and to expand the availability of trails through partnership projects with state, local, and private donors that constructed new trails and maintained existing trails. In 1996, the SNRA completed the renovation of a major campground at Redfish Lake and, in 1997, awarded a contract to construct the Harriman Trail that will run through an 18-mile stretch in the Recreation Area. SNRA officials also said that the recreation program still has unmet needs in a wide range of areas, such as in its dispersed recreation programs, customer service patrols, interpretive programs, and visitor centers. They said they have been unable to construct an additional 663 needed campsites; to reconstruct and rehabilitate deteriorating facilities, such as visitor centers, trailheads, and camp or day-use facilities; to maintain about one-third of the trails; or to inspect recreational residences (summer homes) and other facilities to ensure that they comply with the requirements in their permits. 
They said the SNRA has also been unable to keep visitor information desks open to adequately serve visitors, maintain adequate customer service staff in the field, or provide interpretive programs that enhance the public’s enjoyment of the SNRA. In addition, officials said that other factors besides changes in the funds available for this program affect what can and cannot be accomplished in the recreation program. In particular, reductions in funds for other programs such as land acquisition have required the recreation program to cover a higher percentage of costs for receptionist, payroll, and public affairs personnel in the SNRA. According to these officials, the impacts of these reductions have had a direct effect on accomplishments and have resulted in more unmet needs in the recreation program. (See app. V for more information on specific accomplishments and unmet needs.) The SNRA’s spending to preserve conservation values decreased 21 percent, from $586,200 in 1993 to $461,200 in 1997. If the special funding contained in the 1993 budget is excluded from the analysis, however, we estimate that spending in 1993 would have been about $505,000. Thus the change in spending from 1993 to 1997 would have been a reduction of about $44,000, or about 9 percent. Preserving conservation values is a pervasive effort at the SNRA. Every resource activity at the SNRA includes major provisions for preserving conservation values, according to the SNRA’s Acting Area Ranger. The main focus of the conservation efforts has been on protecting threatened and endangered species and protecting key wilderness values, such as protecting the primitive nature of wilderness and maintaining the opportunity for solitude, in the 217,000-acre Sawtooth Wilderness. The Snake River chinook and sockeye salmon, which spawn in the SNRA, and the Snake River steelhead trout are protected under provisions of the Endangered Species Act of 1973. In addition, the Columbia River bull trout is currently proposed for listing as a threatened species. As a result of these endangered species listings, the SNRA must do what it can to improve the habitat for these species and ensure that its actions do not have an adverse effect on the fish. The Sawtooth Wilderness Area was created by the 1972 act that established the SNRA and must be managed to meet the requirements set out in the Wilderness Act of 1964. The Wilderness Act requires that wilderness areas remain undeveloped to retain their primeval character and be managed to preserve their natural conditions. Wilderness areas should also provide opportunities for solitude or for a primitive and unconfined type of recreation. As the use of the Sawtooth Wilderness for recreational purposes increases, the SNRA must, among other things, limit and distribute use to conserve the wilderness values. According to SNRA officials, their major accomplishments since 1993 in preserving conservation values include (1) completing an environmental impact statement aimed at improving riparian ecosystems and protecting salmon habitat throughout the Salmon River Corridor; (2) eliminating barriers to fish migration; and (3) issuing the SNRA Wilderness Management Plan. SNRA officials also said that the conservation program still has unmet needs. They said they are unable to perform a number of activities that would contribute to improving the habitat for, and thus the health and diversity of, the species within the SNRA. 
The SNRA also has been unable to maintain trails in the wilderness and fulfill its 1993 Wilderness Implementation Schedule, which it created to enable the Recreation Area to meet the management practices, standards, and guidelines outlined in the Forest Plan for the Sawtooth Wilderness for 1994 to 1997. In addition, as in the recreation program, officials said that the reduction in funds for other programs has resulted in the conservation program’s covering a higher percentage of the cost for some SNRA personnel. This has an impact on the accomplishments and contributes to more unmet needs in the conservation program. (See app. V for more information on specific accomplishments and unmet needs.) The SNRA’s spending for commodity programs decreased 44 percent, from $253,200 in 1993 to $141,600 in 1997. However, if the 1993 allocation for projects totaling $120,000 is excluded from the analysis, this area would have experienced a 4-percent increase in spending from 1993 through 1997. The commodity programs include programs for grazing, timber, and mining. In 1997, the grazing program had 26 range allotments—19 for cattle and 7 for sheep—covering 400,500 acres, more than half of the SNRA’s land area. The allotments provided that 2,500 cows and 4,470 sheep could graze for a total of 21,000 animal unit months. The timber program primarily provides firewood and fence posts to meet the needs of local users. Commercial timber sales, which are also part of the timber program, are limited to small-scale operations for posts and firewood because the area contains only minor amounts of commercial sawtimber. The mining program in the SNRA is very small. The act creating the SNRA provided that new mining claims could no longer be filed. From 1972 through 1997, active mining claims decreased from approximately 6,000 to approximately 170. Currently, there are no commercial mining operations on the SNRA’s lands. According to SNRA officials, one of the major accomplishments for the commodity programs is that grazing continues within the SNRA despite the requirements placed on grazing to protect the habitat of the threatened or endangered salmon. Other accomplishments include improvements to riparian areas and the reclamation of abandoned mines. SNRA officials also identified unmet needs. For example, the Recreation Area was unable to complete, on schedule, plans for managing grazing on Sawtooth lands. As with the recreation and conservation programs, officials said that the reduction in funds for other programs has an impact on the accomplishments and unmet needs in the commodity programs. (See app. V for more information on specific accomplishments and unmet needs.) We identified several instances in which funds were held at the Intermountain Region and at the Sawtooth National Forest rather than being allocated to subordinate units, including the SNRA. These funds were held for a variety of region- and forestwide projects. The combined annual impact on the SNRA of withholding these funds was equivalent to less than 4 percent of the SNRA’s budget. We found no instances in which funds were allocated to the SNRA and subsequently taken back by the forest or region for other projects. According to agency officials, it is a common practice for regional and forest offices to retain funds for region- and forestwide projects. The funding for regional projects comes off the top of the regional budget before allocations are made to the forests and subsequently to the ranger districts and recreation areas. 
As a result, all of the forests, ranger districts, and recreation areas within the region are affected by the funding of these projects. The region responsible for the SNRA had a total of 28 regionwide projects during the 1993-97 period. These projects included improving the geographic information system and computers throughout the region and conducting the Interior Columbia Basin Ecosystem Management Project. The funds withheld by the region for regionwide projects for the 5-year period totaled about $37.5 million. Also during the 1993-97 period, the Sawtooth National Forest withheld funds totaling $305,000 from the ranger districts and the SNRA for two forestwide projects. Of this amount, $185,000 was for improving the geographic information system and $120,000 was for radio replacements. Both of these projects benefited all of the forest’s units, including the SNRA. (App. VI lists the totals for region- and forestwide projects for fiscal years 1993-97.) Funds withheld by the region and the forest reduced the SNRA’s budget by less than 4 percent. According to an estimate by the Forest Service Intermountain regional office, if there had been no regionwide projects and the Forest Service had allocated these funds to the SNRA, the Recreation Area would have received, on average, about $109,000 per year. This represents about 3.4 percent of the area’s annual budget for the 1993-97 period. Officials from the forest office told us withholding money for forestwide projects had reduced the SNRA’s budget by $7,400 per year, or about 0.23 percent, on average. Combined, the overall reduction in funds available to the SNRA was $116,400 per year, or about 3.6 percent. Although the SNRA’s budget decreased as a result of these projects, the SNRA also benefited directly from some of the projects, such as those for improving the geographic information system and upgrading radios. On February 16, 1998, a field hearing was held in Twin Falls, Idaho, to examine the management of the SNRA. Areas of particular interest at the hearing included the impact of both (1) laws, regulations, and the SNRA’s related actions and (2) the SNRA’s funding levels on individuals and businesses that are economically dependent on the SNRA. Overall, representatives of the communities that are affected by the SNRA’s management said that the Forest Service had done a good job of preserving the values that the SNRA was created to preserve. However, these representatives and various businesses and individuals identified a number of adverse impacts on various activities on the SNRA’s lands. Businesses and individuals involved in recreational or grazing activities were the most affected by changes in the SNRA’s policies and funding. In terms of recreation, outfitters that take visitors on trips through the area on float-boats or on horseback were the most affected; while in terms of grazing, cattle ranchers with grazing allotments on SNRA lands were the most affected. The outfitters and ranchers were affected primarily by restrictions resulting from laws and regulations, such as those protecting the threatened or endangered salmon. Funding levels seemed to also affect individuals and businesses but to a lesser extent than the SNRA’s actions to preserve conservation values. Local public officials and community leaders said that, overall, the SNRA was doing a good job preserving the values it was set up to preserve, such as natural, scenic, historic, and pastoral values, and fish and wildlife. 
Elected officials from Blaine County, Idaho, and the Director of the Blaine County Recreation District said that the SNRA’s management was doing a good job and had maintained good relations with the county. In addition, the Executive Director of the Sawtooth Society, a nonprofit, nonpartisan, independent organization, said that most people believed that the Forest Service was doing well, overall, in preserving the natural and scenic values it was created to protect. According to the river and horseback outfitters we talked with, the SNRA has taken specific actions that are adversely affecting recreation by curtailing (1) float-boating on a portion of the Upper Main Salmon River(see app. I for a map of the Recreation Area), and (2) stock-supported camping using horses and mules in the Sawtooth Wilderness, which comprises most of the western half of the SNRA. In considering the SNRA’s actions and their impacts, it is important to be aware of the changing circumstances affecting the Forest Service’s nationwide decision-making. In our 1997 report on the Forest Service’s decision-making, we noted that during the last 10 years, the agency has increasingly shifted its emphasis from consumption (primarily producing timber) to conservation (primarily sustaining fish and wildlife). This shift is taking place in response to requirements in planning and environmental laws and the judicial interpretations of these laws. In particular, section 7 of the Endangered Species Act of 1973 represents a congressional design to give greater priority to the protection of endangered species than to the current primary missions of the Forest Service and other federal agencies, such as timber harvesting, rangeland, and outdoor recreation. In 1991, the National Marine Fisheries Service listed Snake River sockeye salmon as endangered under the Endangered Species Act. In 1992, Snake River spring/summer chinook salmon were listed as threatened. The Salmon River and its tributaries, including those portions running through the SNRA, are designated as part of the critical habitat for threatened spring/summer chinook salmon migration and spawning and for endangered sockeye salmon that migrate up the Salmon River to spawn in Redfish Lake. From 1992 until 1996, the SNRA and the float-boat outfitters worked to develop plans to minimize the impact of float-boating on salmon spawning, which occurs annually from mid-August to late September, in various parts of the Salmon River. In spring 1996, the SNRA completed the final environmental impact statement for the Salmon River Corridor within the SNRA. At that time, it implemented rules to protect spawning salmon. The rules required that on August 21 each year, or earlier if spawning has commenced, the float-boaters must portage (carry) their rafts around a section of the river called Indian Riffles—a traditional spawning area for chinook salmon—and terminate their trips west of Torrey’s Hole, another spawning area for chinook salmon. These restrictions required outfitters to provide a truck to portage the rafts, to eliminate the lunch sites three of the outfitters used, and to shorten the trip for all. We met with three of the four float-boat outfitters operating out of the town of Stanley—the largest town in the SNRA. They asserted that the restrictions on float-boating reduced their revenue and increased their costs of doing business. 
Because of the many factors that influenced the number of rafters—such as the water level and the weather—the impact of the SNRA’s restrictions on the outfitters’ revenues is not clear. However, our analysis showed that outfitter revenues for 1996 and 1997, the first years that the new restrictions were in place, were higher than 2 of the 3 prior years. Figures the outfitters provided to the SNRA for the float-boating seasons from 1993 through 1997 indicated that the number of rafters and total revenues generated in 1996 and 1997 were better than in 1993 and 1994 but less than in 1995 when rafting conditions were considered exceptionally good. Table 1 shows the total number of rafters and revenues for 1993 through 1997. We conducted a more detailed analysis for August, the month when the portage rules go into effect. We looked at the figures for the 2-week period before the portage rules were in effect and the 2-week period after portaging began. This analysis showed that for 1996 and 1997, the total revenues and the number of rafters decreased with the imposition of the portage rule. However, we noted that the total revenues and number of rafters also decreased in the latter part of August 1993 when no restrictions were in place. Because revenues appear to decrease whether or not there are restrictions, the impact of the SNRA’s restrictions on outfitter revenue is unclear. The outfitters also told us that their costs were increased by the restrictions that require, among other things, that they monitor the river to identify spawning activity and provide a truck and staff to portage their rafts. We did not determine the amount of these costs. As a result, despite the strong revenue picture for recent years, the increased costs resulting from the restrictions may have had a negative impact on the outfitters. Horse-pack outfitters on the SNRA also have stated that the Recreation Area’s actions are hurting their operations economically and reducing the number of people they can serve. In summer 1998, the SNRA amended the Sawtooth National Forest Plan to change the management direction for the Sawtooth Wilderness. The purpose of the changes was to clearly define management objectives, standards, and guidelines to protect and preserve the Sawtooth Wilderness as required by the Wilderness Act of 1964. Rules contained in the revised plan reduced the number of people who can be part of a horse-pack excursion in the wilderness from 20 to 12 and also reduced the number of horses that can be used on each trip. The SNRA reduced the size of horse-pack groups to meet specific objectives of the Wilderness Act that provide that wilderness lands are to be managed so that they generally appear to have been affected primarily by the forces of nature and that they provide outstanding opportunities for solitude. Hikers complained that their wilderness experience—the opportunity to experience solitude—was degraded when they were confronted with large groups of horseback riders. SNRA officials also said they imposed the restrictions to reduce adverse impacts on trails and campsites. Most trails and campsites in the wilderness area could not withstand the damage caused by large horse-pack groups. We met with the owner of the largest horse-pack operation in the SNRA. He said that reducing the size of horseback groups has adversely affected his business. 
However, these adverse effects may be largely mitigated because the reductions are being phased in over a 3-year period and because the SNRA is working with the outfitters to identify a number of campsites that can accommodate up to 20 people. These sites will be dedicated to the outfitters’ use, and the outfitters will be able to pre-position supplies at these sites, thus lessening the impact of the requirement that the outfitters use fewer horses on their excursions, a requirement that limits the supplies horse-packing groups can carry with them. Although the outfitters’ costs may be somewhat higher for smaller groups, the outfitters will be able to continue accommodating larger groups by breaking them into smaller groups on the trail and reuniting them at their campsites. We also spoke with SNRA and local officials, ranchers, and a mining representative about the adverse effects of the Recreation Area’s actions on commodity programs—grazing, timber harvesting, and mining. Our discussions suggest that grazing is the commodity program that the SNRA’s actions have most affected. Grazing has been reduced around the riparian areas providing the watershed for the Salmon River to protect salmon habitat and to prevent adverse impacts on spawning salmon. Grazing has also been curtailed because cattle and their impacts in campgrounds, in road corridors, and along heavily used recreational trails are unacceptable to many recreational users, according to SNRA officials. Allowing cattle in campgrounds raises water quality issues as well as physical safety issues for campers. In addition, on a national level, the Forest Service has put a high priority on improving the condition of its rangelands so that range conditions meet the legal requirements set out in various laws, regulations, and planning documents. Specifically, the National Forest Management Act of 1976 requires that management plans be prepared for the forests. The plan for the Sawtooth National Forest contains management goals that set the desired future condition of allotments. Forest officials told us they believed that the range conditions established in the forest’s plan were not being met. To meet the plan’s objectives, the SNRA reduced grazing on allotments on its lands. These reductions have closed grazing areas, shortened grazing seasons, and increased ranchers’ grazing costs. Overall, the SNRA has reduced grazing on allotments by almost one-third. However, some individual allotment reductions have been larger. For example, the ranchers we met with who graze cattle on the Stanley Basin allotment had grazing on their allotments cut by almost 60 percent. These ranchers told us that they had experienced a number of adverse economic impacts as a result. They said that they had decided to sell a portion of their herd and then went into debt to purchase private land for grazing. As a result, the value of their ranches had decreased because the value of a ranch is based partly on the number of cattle that the ranch can support with both its own grazing lands and its federal grazing privileges. SNRA documents note that in the future, grazing may be further reduced when the SNRA completes the required upgrading of its grazing management plans for 10 of the 25 allotments that do not yet have complete management plans. According to the SNRA, local government, and other individuals we talked with, the SNRA’s declining budgets have also reduced recreational opportunities. 
Several of them said that as a result of budget restrictions, needed new campsites that have long been in the SNRA’s area plan are not being built. As noted previously, the Recreation Area has been unable to construct an additional 663 campsites that were included in its 1975 area plan. Many campers must be turned away during peak periods because there are not enough campsites equipped with facilities such as water, toilets, picnic tables, and grills. Those we spoke with also said that trails and some campsites are poorly maintained and that other trails are not maintained at all. All these factors either directly limit recreation or discourage recreation by making the experience less enjoyable, according to those we spoke with. We provided a draft of this report to the Forest Service for its review and comment. The Forest Service agreed with the overall findings and also provided detailed technical comments clarifying certain aspects of the report. We considered these comments and revised the report as appropriate. In addition, the Forest Service also commented that we did not fully disclose the Sawtooth National Recreation Area’s accomplishments and unmet needs in three areas discussed in the report—recreation, conservation, and commodities. The Forest Service pointed out a number of specific accomplishments and unmet needs that we should add to the report. We added these comments as appropriate. The Forest Service’s letter also said it was unclear how the special funds for recreation construction were handled throughout the report in terms of any accomplishments stemming from these funds. The Forest Service said that since we did not include these construction funds when calculating changes in the Recreation Area’s budget because they represented special funding, we should also not have included accomplishments made possible by those funds. We do not believe that our analysis should exclude discussing these accomplishments. As the report points out, the Recreation Area’s overall budget decreased by 54 percent and its budget for enhancing recreation decreased by 57 percent from 1993 through 1997. In computing these decreases, we included funds for recreation construction. The report also points out that the rather large decreases of 54 and 57 percent are somewhat misleading because they contain large amounts for special funds, such as construction funds, in certain years. Therefore, to get a truer reading on the decreases in the Recreation Area’s budget for the 1993-97 period, we provide an analysis that excluded these types of funds. The accomplishments that we included in the report were those Forest Service officials identified, including some which were paid for with special funds. The Forest Service also said we should not include the $260,000 for recreation road construction that the Recreation Area received in 1997 as part of its budget, but instead should treat road construction funds similarly to the other construction funds that were excluded. We agree. As a result of removing this amount from our analysis, the corrected overall budget decrease for the Recreation Area was about 26 percent over the 1993-97 period. (Had we included the funds, the decrease would have been 16 percent.) In addition, the Forest Service raised questions about whether funds provided through cooperative projects with outside sources, such as state and private organizations, should be included in the total. 
In designing this study, we worked with Forest Service officials at the region, forest, and Recreation Area to determine the budget fund codes to include. It was agreed at that time that funds for cooperative projects should be included in the Recreation Area’s total budget allocation for each year. We further discussed this subject with officials at the Sawtooth National Forest, and they recommended that we not make any revisions in the numbers included in the report. The Forest Service’s letter is in appendix VII. We conducted our work at the Forest Service headquarters in Washington, D.C.; the Forest Service Intermountain Regional Office in Ogden, Utah; the Sawtooth National Forest Office in Twin Falls, Idaho; and the Sawtooth National Recreation Area and the surrounding communities of Stanley and Blaine County. To determine how much was allocated to the SNRA during fiscal years 1993 through 1997, we reviewed the initial and final planning and budget advice documents for the Forest Service, the regional office, the Sawtooth National Forest, and the SNRA. We also reviewed an agency document prepared for our use by the Intermountain Region and the Sawtooth National Forest offices that included all the appropriate budget fund codes. This document was prepared to ensure that we had data that the participating Forest Service units agreed on and considered to be accurate and consistent. We interviewed Forest Service budget officials at the SNRA, the forest office, the region, and at headquarters; the SNRA Area Ranger; and the Sawtooth National Forest Supervisor. To determine the trend in funding from 1993 through 1997 and to put the trend in its historical perspective, we also reviewed the SNRA’s budgets for 1991 and 1992 and converted all numbers to constant 1997 dollars. To help explain the large allocation decreases, we identified and removed allocations for 1993 that made that year’s budget unusually high and removed funds for construction and land acquisition because funds for these activities vary substantially from year to year depending on whether construction projects or land acquisitions are approved. To determine whether regional or forest funds had not been allocated to the SNRA or whether funds had been allocated and then taken back during fiscal years 1993 through 1997 and what was done with those funds, we obtained and compared final budget allocations with expenditures. We also reviewed the allocation change documents for the SNRA and interviewed Forest Service budget officials at the SNRA, the forest office, and the region. We did not independently verify the reliability of the financial data provided nor did we trace the data to the systems from which they came. These systems were, in some cases, subject to audit procedures by the Department of Agriculture’s Office of the Inspector General in connection with the agency’s financial statement audits. For fiscal years 1995, 1996, 1997, and previous years, the Office of the Inspector General reported that because of significant internal control weaknesses in various accounting subsystems, the Forest Service’s accounting data were not reliable. Despite these weaknesses, we used the data because they were the only data available and are the data that the agency uses to manage its programs. Because of the portion of the SNRA’s budget that enhancing recreation, preserving conservation values, and managing commodity programs account for, we agreed with your office to focus our review of the SNRA’s expenditures on these areas. 
We further agreed that conservation values would include such areas as fish and wildlife, threatened and endangered species, and wilderness management. Similarly, commodity programs would include timber harvesting, grazing, and mining. To determine how much the Recreation Area spent to enhance recreation, to preserve conservation values, and to manage commodity programs, we obtained and reviewed documents showing expenditures for the SNRA. To obtain trends over the 5-year period, we converted all expenditures to constant 1997 dollars, removed special funding allocations from the 1993 budget, and estimated the reduction in expenditures that resulted from the reduced allocations. To identify accomplishments and unmet needs, we relied on information provided to us by the Area Ranger and other officials responsible for the program areas. We also inspected some of the most significant accomplishments. To identify examples of potentially adverse effects of the SNRA’s actions on individuals, companies, and communities economically dependent on the area, we reviewed and analyzed the testimony presented at the February 16, 1998, hearing on the SNRA before the Subcommittee on Forests and Public Land Management of the Senate Committee on Energy and Natural Resources. The purpose of the hearing was to examine the management of the SNRA. On the basis of the data presented in the testimony and our interviews with community groups, local elected officials, and Forest Service officials, we identified the most significantly affected areas in the SNRA. We then interviewed individuals, company representatives, and community officials to obtain their views and related documents on the SNRA’s actions and their impacts. Specifically, we met with the following local officials: the mayor of the city of Stanley, the chair of the Stanley Chamber of Commerce, the chairman of the Blaine County Board of Commissioners, and a commissioner from Blaine County. We also met with the following business owners and individuals: float-boat operators, a horse-pack outfitter, ranchers, the director of the Sawtooth Society, and private landowners. Lastly, we discussed the actions and their impacts with SNRA officials and analyzed various data the SNRA provided. We performed our work from June 1998 through December 1998 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report for 10 days after the date of this letter. We will then send copies to the Secretary of Agriculture; the Chief, Forest Service; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. If you have any questions about this report, please contact me at (202) 512-3841. Major contributors to this report are listed in appendix VIII. This organization chart highlights the position of the Sawtooth National Forest and the Sawtooth National Recreation Area within the Forest Service. This appendix presents information on budgets for the Sawtooth National Recreation Area (SNRA) and for other Forest Service units. Table III.1 provides the SNRA’s budget allocations for fiscal years 1991 through 1997, with and without special funds. Special funds include funds from the multiyear recreation initiative and funds for construction and land acquisition.
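As the methodology above notes, all budget and expenditure figures in this report were converted to constant 1997 dollars before trends were computed. The following is a minimal sketch of that conversion in Python; the deflator values and the nominal allocation shown here are illustrative placeholders rather than the actual price index and figures used for this report.

```python
# Placeholder fiscal-year price indexes (1997 = 1.0); the actual deflator series is not reproduced here.
DEFLATOR = {1991: 0.86, 1992: 0.89, 1993: 0.91, 1994: 0.93, 1995: 0.95, 1996: 0.97, 1997: 1.00}

def to_constant_1997_dollars(nominal_amount: float, fiscal_year: int) -> float:
    """Scale a nominal-dollar amount to constant 1997 dollars using the year's deflator."""
    return nominal_amount * (DEFLATOR[1997] / DEFLATOR[fiscal_year])

# Example with a hypothetical $4.45 million nominal allocation for fiscal year 1993.
print(round(to_constant_1997_dollars(4_450_000, 1993)))  # roughly 4.89 million in 1997 dollars
```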
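The comparisons with and without special funds reported in the letter and in the appendix tables then reduce to a percentage-change calculation on those constant-dollar amounts. The sketch below uses the recreation spending figures reported earlier; the $1.8 million starting amount is as rounded in the text, and the helper function is only an illustration of the arithmetic.

```python
def pct_change(base: float, final: float) -> float:
    """Percentage change from a base-year amount to a final-year amount."""
    return (final - base) / base * 100

# Recreation spending in constant 1997 dollars, from the figures reported in the text.
rec_1993_total = 1_800_000     # 1993 spending, including the recreation initiative and other special projects
rec_1993_adjusted = 970_000    # estimated 1993 spending with special funds excluded
rec_1997 = 783_100

print(round(pct_change(rec_1993_total, rec_1997), 1))     # about -56.5, reported as roughly a 57 percent decrease
print(round(pct_change(rec_1993_adjusted, rec_1997), 1))  # about -19.3, reported as roughly a 19 percent decrease
print(rec_1993_adjusted - rec_1997)                       # 186,900, reported as a decrease of about $187,000
```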
Table III.2 provides the final budget allocations for the Sawtooth National Forest, the Forest Service Region 4 (the Intermountain Region), and the Forest Service nationwide for fiscal years 1993 through 1997. (Tables III.1 and III.2 present these amounts in millions, in constant 1997 dollars, with and without special funds, together with the percentage change for 1993-97.) Table IV.1 shows the funds the SNRA spent to enhance recreation, preserve conservation values, and manage commodity programs for fiscal years 1993 through 1997. The figures shown in the table include those from funds allocated for special projects, such as the funds for the multiyear recreation initiative and for the construction of recreation facilities. A large portion of the decrease in the SNRA’s spending from 1993 through 1997 may be explained by the loss of special funding or the creation of additional budget line items that shifted funds from line item to line item. For example, spending for enhancing recreation decreased about 57 percent from 1993 through 1997. A significant portion of the decrease occurred because the recreation initiative, which provided additional funding in 1992 and 1993, ended in 1993, and because the SNRA had recreation facilities construction funds in 1993 but did not have such construction funds to spend in 1997. If these items are excluded from the analysis, the decrease in spending for enhancing recreation is about 19 percent rather than 57 percent. (Table IV.1 also shows the percentage change and the estimated percentage change without special funds for 1993-97.) For fiscal years 1993 and 1994, we have added $79,300 and $93,400 to recreation management and $34,100 and $39,800 to the wilderness program, respectively. These funds are from the trail maintenance budget line item that was eliminated in fiscal 1995; the funds were distributed between the recreation management and wilderness management programs. We distributed the funds based on a Forest Service headquarters formula under which 70 percent went to recreation management and the remaining 30 percent went to wilderness management. In fiscal year 1995, the Forest Service created the ecosystem planning, inventory, and monitoring budget line item and funded it by shifting funds from other budget and extended budget line items such as recreation management. This budget line item is currently called land management planning, inventory, and monitoring, and it funds the ecosystem management program. Officials at the SNRA identified a number of accomplishments and unmet needs in their programs for enhancing recreation, preserving conservation values, and managing commodity programs. The following are examples of specific accomplishments and unmet needs for each area. In 1994, the SNRA transferred the management of its recreational facilities to a concessionaire. SNRA officials said this transfer allows them to provide the same recreational services to the public at one-third the cost to the SNRA and with one-third the SNRA staff. The SNRA completed the renovation of a major campground at Redfish Lake (a scenic and popular camping area) in 1996 through a funding partnership in which the state of Idaho paid $285,000 of the project’s $777,000 cost. The major renovations included larger paved campsites to accommodate the larger recreational vehicles now in use, upgraded restrooms, and numerous erosion and vegetation improvements. This campground experiences nearly 100-percent occupancy during much of the high-use season.
In 1997, the SNRA awarded a contract to construct a trail that will run through an 18-mile stretch in the SNRA. When completed, this trail will be used for hiking, horseback riding, mountain biking, and cross-country skiing. Large portions of the cost of this project are being covered by the state and by private contributions. The SNRA has not been able to construct an additional 663 campsites that have been in its area plan since 1975. According to Forest Service officials, the SNRA is unable to meet the public’s demand for camping accommodations or to rotate campsites to allow for them to recover from heavy use. The SNRA has been unable to maintain about one-third of its 750 miles of trails. The trails require regular maintenance to remove fallen trees, cut back undergrowth, and repair erosion and use damage. Unmaintained and closed trails reduce the recreational experience of visitors to the SNRA and increase costs to horse-pack outfitters who must use some of their own resources to make trails safe and usable for their trips. The SNRA has 130 recreational residences (summer homes), 8 organization camps, and 4 resorts authorized under special uses. Prior to 1997, nearly all of these facilities were inspected annually to ensure compliance with the special use permits. In 1997, when funds for fire protection were moved from the SNRA to the Ketchum Ranger District, the SNRA also lost the position for the person who had done the inspections. As a result, only about one-third of the residences are now inspected annually. In addition, the SNRA is unable to process requests for permit modifications because it does not have the staff to perform the required analysis of the impacts these modifications might have. Therefore, deviations from the permits can occur and go undetected, and some of these deviations could have significant adverse impacts on area resource values such as endangered species habitat. The SNRA has not been able to keep its visitor information desks open to provide adequate service for its more than 1 million visitors each year. The days and hours of operation have been reduced, and in 1996, the Redfish Lake Visitor Center was closed, causing a loss of service to the 18,000 people who visited the center during its average 2-1/2 month season. The SNRA decreased customer service crews by approximately one-third between 1993 and 1997. These staff are the SNRA’s primary contact in the field with the visiting public, and they are responsible for visitor assistance, education and interpretation, compliance, and search and rescue assistance. The SNRA has cut back interpretive programs at all locations because of a lack of staff and program supplies and equipment. Inadequate funding has prevented the SNRA from doing any meaningful travel management or dispersed recreation management to deal with the increasing conflicts between many of the dispersed recreation uses, such as skiers and snowmobilers, and motorized and nonmotorized trail users. Construction funds have been unavailable since 1993 to improve aging interpretive exhibits at the Headquarters and Redfish Lake visitor centers. The Redfish Lake Visitor Center needs extensive repair work to keep it functionally sound and requires considerable remodeling to make it accessible. In 1996, the SNRA completed the final environmental impact statement for the Salmon River Corridor—a 30-mile length of the Salmon River from its source around the community of Smiley Creek to the eastern border of the SNRA. 
The statement identified direct measures to protect salmon, primarily by improving habitat, maintaining or improving water quality, restricting float-boating activities to prevent the disturbance of spawning, and curtailing other water-related recreation activities during the critical spawning season for both sockeye and chinook salmon. The SNRA reestablished the migration route for salmon trying to return to Alturas Lake to spawn by increasing the water flow along Alturas Lake Creek and removing an irrigation diversion that was blocking the migration. The work was done through a partnership project in which the SNRA paid for the removal of the irrigation diversion from Alturas Lake Creek with $13,000 provided by the Bonneville Power Administration in 1997. The Idaho Fish and Game Department provided about $50,000 in 1998 to develop a well source to provide replacement water for the landowner who had been using water from the creek for irrigation. The landowner installed a replacement irrigation system at an estimated cost of $50,000. The Forest Service’s funding for the project was limited to its staff’s salaries for participation and oversight of the negotiations. The SNRA developed and issued a wilderness management plan in which it updated the standards and guidelines in the Forest Plan and clearly defined the management objectives for the Sawtooth Wilderness to address the impacts from increasing numbers of visitors and more intensive use. The SNRA also issued a prescribed natural fire plan for the wilderness in 1997. The plan provides criteria under which fires started by natural causes, such as lightning, can be allowed to burn, the objectives to be achieved with such burns, and the requirements for monitoring and controlling such fires. The SNRA completed Endangered Species Act consultations for all ongoing activities after chinook and sockeye salmon and steelhead and bull trout were listed under the act. From 1995 to 1997, the SNRA was unable to conduct broad-level surveys of habitat condition or species distribution. Consequently, without updated field surveys, the SNRA is not able to assess the effects of forest management activities on species or habitat in order to effectively manage, protect, or recover listed species. The routine maintenance of trails in the Sawtooth Wilderness has not been performed as needed. In 1993, 195 miles (82 percent) of trails in the Sawtooth Wilderness were maintained, while in 1997, only 122 miles (51 percent) of the trails were maintained. The reduction of maintenance directly correlates to the decrease in trail crew size from 15 people in 1993 to 5 people in 1997. The reduction in trail maintenance has led to (1) trail damage and increased impacts due to soil erosion, (2) decreased visitor satisfaction, and (3) increased hazards because visitors are forced to find other routes around fallen trees. The SNRA has been prevented by inadequate funding from completing conservation agreements that allow for proactive management for species prior to the species’ being listed under the Endangered Species Act. The SNRA reported that almost all of the formerly degraded riparian habitat on allotments has been allowed to recover to levels that benefit various fish populations. The recovery occurred because of rigorous compliance with standards, changes in grazing management, and new fencing to prevent cows and sheep from grazing in riparian areas.
The SNRA contracted in 1995-96 with the Idaho Geological Survey to complete abandoned mine surveys for chemical and physical hazards at 44 sites and, between 1995 and 1997, reclaimed three abandoned mine sites and one mining road. Because of funding shortages, the SNRA did not have staff to consult with the National Marine Fisheries Service on the appropriateness of using newly acquired land for grazing. Completing the consultation process might have resulted in additional lands being opened to grazing, according to an SNRA official. The new lands would have compensated for reductions in grazing elsewhere. The SNRA has been unable to meet the schedule mandated in the 1995 Rescissions Bill for the Forest Service to meet requirements set out in the National Environmental Policy Act of 1969. All forests are required to have updated allotment management plans that set out how grazing allotments will be brought up to and maintained at conditions that meet the direction, standards, and guidelines in the forests’ land and resources management plans. The SNRA has been unable to contain infestations of noxious weeds. Consequently, the weeds are spreading more rapidly, with a resulting loss in land productivity for grazing. Because the limited funding for range programs must be spent primarily on monitoring compliance with the terms and conditions of Biological Opinions under the Endangered Species Act, the needs for completing or revising allotment management plans and for interacting with permittees cannot be fully met. The Forest Service’s Region 4 and the Sawtooth National Forest withheld funds from the SNRA for region- and forestwide projects. Table VI.1 lists some of the regionwide projects and the total dollar amounts for fiscal years 1993 through 1997. Similarly, table VI.2 lists the forestwide projects and dollar amounts for the same period. The figure for fiscal year 1993 is for the Snake River Adjudication project. The figure for fiscal year 1995 is for a vegetation inventory. The figures for fiscal years 1996 and 1997 are for an integrated ecosystem inventory. Major contributors to this report: Robert E. Cronin, José Alfredo Gómez, Chester F. Janik, and Victor S. Rezendes.
| Pursuant to a congressional request, GAO provided information on the Sawtooth National Recreation Area's (SNRA) funding, its accomplishments and unmet needs, and on agency actions that have adverse impacts on the area, focusing on: (1) the funds allocated to the SNRA for fiscal years 1993 through 1997; (2) spending for fiscal years 1993 through 1997 to enhance recreation, to preserve conservation values, such as fish and wildlife, and to manage commodity programs, such as grazing, and the accomplishments and unmet needs in these areas; (3) the funds not allocated or the funds allocated and then taken back from the Recreation Area for fiscal years 1993 through 1997 and what was done with those funds; and (4) some examples of potentially adverse effects of how the Recreation Area is managed on individuals, companies, and communities economically dependent on the area. GAO noted that: (1) SNRA's overall annual budget allocation decreased in constant dollars from about $4.88 million in 1993 to about $2.25 million in 1997--a decrease of about 54 percent; (2) this rather large decrease is somewhat misleading--1993 was the second year of a 2-year funding peak in the Recreation Area's budgets; (3) these peak budgets contained: (a) special funding for recreation management that dramatically decreased in the 1994 budget; (b) funds for the construction of recreation facilities; and (c) funds for land aquisition; (4) SNRA reduced spending between fiscal years 1993 and 1997 for activities such as enhancing recreation, preserving conservation values, and managing commodity programs; (5) spending for enhancing recreation decreased about 57 percent from 1993 through 1997; (6) similarly, spending to preserve conservation values decreased 21 percent from 1993 through 1997; (7) spending for commodities programs decreased 44 percent from 1993 through 1997; (8) however, the size of these decreases is somewhat misleading because of the funds included for special projects and construction in the 1993 budget; (9) if the funds for these special projects and construction are excluded, spending decreased 19 percent for enhancing recreation, and expenditures for preserving conservation values decreased by 9 percent; (10) SNRA officials stated that because of funding shortages, a number of needs were still unmet; (11) GAO did not identify any instances in which funds that were allocated to the Recreation Area were subsequently taken back for use by other Forest Service units; (12) overall, representatives from local governments affected by how the SNRA is managed said that the Forest Service has done a good job of preserving the values that the Recreation Area was created to preserve; (13) however, they said that the declining budgets for the Recreation Area have reduced its ability to meet recreational needs; (14) the Recreation Area placed restrictions and increased requirements on river outfitters, limited the size of groups that horse-pack outfitters can take on wilderness trips, and reduced the amount of grazing allowed on its lands; (15) GAO found that the actions taken by the Recreation Area that affected these groups were taken to protect endangered salmon, rangelands, and wilderness--rather than because of funding reductions; and (16) except for the effects on some ranchers who had large reductions in their grazing privileges, from a monetary standpoint, the adverse impacts on these groups appeared to be small. |
In the past, to help reduce and restructure their workforces, federal agencies have paid buyouts to employees to voluntarily leave federal service. DOD has had buyout authority since January 1993. Most non-DOD executive branch agencies have had two buyout opportunities. The first, under the Federal Workforce Restructuring Act (FWRA) in 1994, provided these agencies the authority to offer buyouts of as much as $25,000 to employees to voluntarily leave federal service, thereby eliminating the need for involuntary staff reductions. Nearly 40 percent of the buyouts were paid to employees in overhead positions such as personnel, budget, procurement, and accounting. About 70 percent went to employees in mid- to upper-level positions in their organizations. The second major buyout opportunity was authorized by section 663 of the Treasury, Postal Service, and General Government Appropriations Act of 1997. According to Administration officials, the buyouts have had three distinct purposes. Initially, they were used to help ease reductions in the DOD civilian workforce following the end of the Cold War. Later, as part of the National Performance Review—the Clinton administration’s initiative to reinvent government—buyouts were used to reduce what the administration called “management control” positions. These positions included those held by managers and supervisors and employees in personnel, budget, procurement, and accounting occupations. Lastly, the buyouts were used to help save money by reducing the federal workforce as the Congress and the President agreed to pursue a balanced budget. During the first buyout time frame, the Congress, as well as our own reports, began expressing concern over agencies’ lack of adequate planning prior to the implementation of their buyout programs and workforce reduction initiatives. The Congress considered this concern in a series of hearings and addressed the issue when it passed Public Law 104-208 in September 1996, which directed agencies to prepare strategic buyout plans for congressional review. OMB required agencies to first submit their plans for its review prior to submitting them to the Congress. In a June 1997 report on effective buyout practices, we found that, overall, the fiscal year 1997 buyout programs at six agencies we examined at that time appeared to have been better managed than was generally the case governmentwide during the 1994 and 1995 buyout window. This was due in large part to statutory and OMB requirements, as well as OPM guidance. Together, the requirements and guidance resulted in more structured programs in which agencies indicated they used buyouts to accomplish specific objectives and reportedly would save millions of dollars in the years ahead. In response to this success, the Congress streamlined the approval process for agency programs by eliminating congressional and OMB review of agency plans, although OPM must still consult with OMB on buyout plans. The CHCO Act of 2002 provided executive branch agencies, with OPM approval, the authority to offer buyouts and early outs to certain qualifying employees, for the purposes of workforce reshaping, not just downsizing. DOD has separate legislative authority to offer buyouts and early outs and does not require approval of OPM for its programs.
The Act expanded the buyout and early-out authority to give agencies the flexibility required to reorganize their workforces should agencies need to substantially reduce their number of organizational layers, transfer functions, or make other substantial workforce changes. As such, the Act’s provisions allow agencies to reduce managerial or supervisory positions, correct skill imbalances, and reduce operating costs without the loss of full-time positions. Congress specified that each executive agency requesting buyouts, early outs, or both must submit a plan to OPM outlining its intended use of these authorities. OPM also issued regulations to the heads of executive agencies with information on how to prepare these plans. OPM review officials said they approve or disapprove agency requests for the authorities based on the quality of the plans. For example, they ensure that all legislative requirements are met. OPM reviews each agency’s plan and, for buyouts only, consults with OMB before final approval is granted. OMB assures that the agency has funding for the buyout program and meets the legislative requirements. Agencies must have OPM approval before using buyout and early-out authorities and are required to provide OPM with interim and final reports on their use. Agency plans are to describe how they will use buyout and early-out offers as tools to facilitate agencies’ reshaping goals. For agencies requesting buyout or buyout and early-out authority, the plans are also to include identification of the agency or specified component(s) within the agency for which the authority is being requested; identification of the specific functions to be reduced or eliminated; a description of the categories of employees who will be offered these options identified by organizational unit, geographic location, occupational series, or grade level; and the time period during which buyouts may be paid or early outs offered. Agencies may request and offer buyout and early-out programs simultaneously for each of their components and organizational units. Since fiscal year 2003, the number of agencies using buyouts and early outs to reshape their workforces has increased, and to date, about half of the executive branch agencies have requested and used these authorities. In addition, agencies are more frequently offering buyouts and early outs together as an additional incentive for employees. Eight agencies are the major users of the authorities and represent approximately 70 percent of all requests. Agencies’ decisions to use buyouts and early outs are based on specific workforce planning needs. In some cases, technological advances that necessitated a different skill mix primarily drove agency-reshaping efforts. In other cases, agencies’ reshaping efforts were driven by a more diverse set of needs such as consolidation of functions or budgetary restrictions. Officials responsible for use of these authorities at our six selected agencies believe that buyouts and early outs have been successful tools to help reshape their respective workforces. Of the approximately 110 executive branch agencies under the CHCO Act, a total of 51 agencies have been granted authority to offer their employees programs of buyouts, early outs, or both at least once. This has increased from 28 agencies that were granted these authorities in fiscal year 2003. 
As shown in figure 2, OPM approved, and the agencies used, the authorities to offer programs a total of 136 times, 176 times, and 179 times, respectively, to help reshape their workforce. According to OPM data, at least 22,600 employees have separated from federal service under a buyout, early out, or both. Among the employees who separated and received a buyout, 59 percent were employees who separated by retiring under the government’s standard retirement qualifications and did not need an early out, 36 percent were employees who separated using an early-out program, and 5 percent were employees who separated through a resignation. In fiscal year 2003, 40 percent of all agency programs offered employees the opportunities to have a buyout and take an early retirement at the same time. In fiscal year 2004 this increased to 55 percent. In fiscal year 2005 this further increased to 70 percent. DOI, DOE, and Treasury were agencies more likely to offer programs with combined buyout and early-out offers. For example, in fiscal years 2004 and 2005, DOI combined buyouts and early outs in 39 of its 42 programs, DOE combined them in 38 of its 45 programs, and Treasury combined them in 14 of its 23 programs. According to several agency officials responsible for use of these authorities, employees are more likely to accept early-out offers when they are combined with buyouts because the buyout’s monetary payment would help offset the employee’s loss in income and reduced annuity payments. Over the 3-year period, OPM granted a total of 491 requests from the 51 agencies for buyout, early-out, or both authorities. As shown in figure 3, USDA, DOC, DOE, HHS, DOI, DOT, Treasury, and VA represent approximately 70 percent of granted buyout and early-out programs. The six agencies in our review based their decisions to use buyouts and early outs on specific reshaping objectives identified in their particular agency’s workforce planning process. For example, HHS consolidated its human resources (HR) offices from 40 to 5 using the buyout authority to manage the workforce transition from 1,167 employees to approximately 860. According to the HHS agency official who led the agency’s HR consolidation effort, the agency successfully used the buyout authority to manage the reduction of higher-graded positions and to strategically adjust its workforce size and skills mix. The official stated that over 125 of the estimated 350 staff affected by the agency’s consolidation of its HR offices accepted buyout offers. He explained that initially they were able to redeploy displaced staff to mission-critical occupations and functions; however as consolidations continued and the balance shifted more and more away from administrative positions and toward mission-critical occupations, their ability to absorb and redeploy staff to other administrative or mission-support jobs diminished. According to the official, the buyout program allowed HHS to meet its targets without requiring a RIF. In fiscal year 2004, NIST experienced a budget shortfall, new responsibilities, and changing national priorities for its research and development. Because of its budget shortfall, NIST officials anticipated conducting several RIFs during fiscal year 2004. NIST’s reshaping goals were to (1) phase out a major program, (2) reduce funding of another program, (3) streamline institutional and administrative support, and (4) shift the focus of some laboratories. 
NIST officials reported that they used a combined buyout and early-out program, reassignments, and RIFs to help restructure its workforce, achieve a more optimal skills mix, and reduce its staff from 2,744 employees to 2,556. NIST officials reported that while they were unable to totally avoid involuntary reductions, the use of the buyout and early-out program enabled them to limit the RIFs to a minimal number of employees. All of the agencies in our selected sample believe OPM’s feedback during the review process improved their use of the buyout and early-out authorities. OPM approved all agencies’ buyout and early-out requests that met the statutory and regulatory requirements. However, according to program officials in three agencies, OPM’s review process can at times be quicker so as to avoid possible delays in the approval process that could limit the success of the programs. OPM is taking steps to reduce the amount of time it takes to review agency submissions and is further planning to streamline its review process. OPM plays the central role in the oversight and implementation of the buyout and early-out authorities. OPM regulations, issued under the authority of the CHCO Act, require agencies to develop plans that provide information on agencies’ intended buyout and early-out usage. OPM, in turn, reviews and approves agency plans for using the authorities. The Act and OPM’s regulations also state that OPM will consult with OMB regarding agency buyout plans. These plans are to include the following: Identification of the specific positions to be reduced or eliminated, identified by organizational unit, geographical location, occupational series, grade level, and any other factors related to the positions. A description of the categories of employees who will be offered incentives, identified by organizational unit, geographic location, occupational series, grade level, and any other factors, such as skills, knowledge, or retirement eligibility. A description of how the agency will operate without the eliminated or restructured positions and functions. The time period during which incentives may be paid. The number of and maximum amounts of Voluntary Incentive Payments to be offered. Agencies’ early-out plans are to include the following: Identification of the agency or specified component(s) for which the authority is being requested. Reasons why the agency needs voluntary early retirement authority. The date on which the agency expects to effect the workforce reshaping. The time period during which the agency plans to offer voluntary early retirement. According to an OPM reviewing official, the OPM human capital officer assigned to review a particular agency provides the first level of examination. The review process then includes additional management reviews and the final approval by the OPM Director. Through this process, OPM checks to see whether all legislative and regulatory requirements are met, and that these criteria are being consistently applied across all agencies’ requests for buyout and early-out authority. In some cases, OPM had agencies revise their plans until they met OPM’s standards, according to an OPM review official. For example, OPM officials reported they received an agency request that involved four different components or work units. The justifications submitted for each component were in various states of readiness and covered different time periods. 
After consultation with OPM reviewing officials, the agency elected to withdraw the request and resubmit it as separate, more-targeted requests. In its first resubmission, the agency provided a request dealing with reshaping, business process reengineering, and downsizing of one of the four components, thus improving the linkage to its business strategy. This request received OMB concurrence and was approved by OPM. Figure 4 illustrates OPM's review process for agency buyout and early-out requests. Program officials in each of the agencies in our review believed OPM's feedback on their plans was helpful, and several reported that they modified and improved their plans as a result. Managers at DOE stated that the agency has worked closely with OPM to resolve any questions in an effort to streamline the process. USDA officials responsible for use of these authorities stated that OPM has been responsive to their requests and subsequent inquiries. OPM program officials and nearly all of the agency officials responsible for use of these authorities in our review believe the approval process could at times be accomplished more quickly and with fewer steps. Agencies want to avoid delays to the timelines for offering a buyout and early-out program because shortening the time frame employees have for deciding whether to accept or decline the buyout or early out can reduce the number of employees taking the offers and thus the success of the program. Both the quality of the agency request and the number of OPM reviews, as well as OMB's review of requests for buyout authority, can affect the cycle time for final approval. One agency official stated that he believes a contributing factor is that too many people have to review each request, a point that OPM is addressing in its planned streamlining of the review process. When we looked at a sample of 28 programs between fiscal years 2003 and 2005 for our six selected agencies, we found that approval times ranged from 7 to 88 days. On average, OPM took 36 days to review the programs in our sample. According to OPM and officials from the selected agencies, a number of variables can affect approval times, including the experience of the officers (OPM, OMB, and agency) assigned to the request, the completeness of the agency request, the complexity of the request, and whether the request is based on a competitive sourcing situation. According to OPM program managers, their calculations show that, overall, their reviews take on average 34 days, but they have set a goal to reduce this to 21 days. They established this target by analyzing the results of previous reviews and by surveying agency program officials. To help speed up its review process, OPM issued an application checklist for agencies to help ensure their plans comply with regulatory requirements prior to submission to OPM. According to OPM officials, OPM has expedited its review process by establishing more stringent timeframes and standards for reviewing agency submissions and is developing Web site guidance on both authorities. In addition, to further streamline its review process, OPM plans to reduce the number of reviewers within each review step. If such actions are effectively implemented, OPM's steps to improve cycle time should help address agency concerns about shortened windows for offering buyout and early-out programs.
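To illustrate how review cycle times like those cited above can be tracked against OPM's 21-day goal, the following sketch computes review durations from submission and approval dates. This is an illustration only: the case names and dates are hypothetical placeholders, not data from OPM's files; the 7-, 36-, and 88-day durations simply echo the figures discussed above.

```python
from datetime import date

# Illustrative sketch only: submission and approval dates are hypothetical
# placeholders, not actual OPM case file data.
GOAL_DAYS = 21  # OPM's stated review-time target

cases = [
    ("agency_A_buyout",   date(2004, 3, 1),  date(2004, 3, 8)),   # 7 days
    ("agency_B_earlyout", date(2004, 6, 10), date(2004, 7, 16)),  # 36 days
    ("agency_C_combined", date(2005, 1, 5),  date(2005, 4, 3)),   # 88 days
]

# Elapsed days between submission and approval for each case
review_days = [(name, (approved - submitted).days) for name, submitted, approved in cases]
average = sum(days for _, days in review_days) / len(review_days)

print(f"average review time: {average:.1f} days")
for name, days in review_days:
    flag = "over goal" if days > GOAL_DAYS else "within goal"
    print(f"{name}: {days} days ({flag})")
```

A tabulation along these lines would let an oversight office see at a glance which requests exceeded the target and by how much, rather than relying on an overall average alone.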
On the basis of our review of the literature on buyouts and early outs, we identified seven practices that are associated with effective buyout and early-out programs. All of the agency officials responsible for use of these authorities in our review have used most of these practices to implement their programs and believe that the practices have been helpful. Of the identified practices, five are generally reinforced in the provisions of the CHCO Act and OPM's regulations for implementing it. The two practices not reflected in statutory and OPM documents are (1) consider and adopt ways to maximize cost savings and (2) establish an evaluation system to identify and report relevant data on buyout and early-out recipients (see table 2). Agency officials responsible for use of these authorities reported the following: 1. Identifying their reshaping goals up front helped agencies use these tools to achieve them. Officials at each of the agencies in our review reported that a first step in developing buyout and early-out programs is identifying the reshaping goals of the agency. They explained that the reshaping goals must align with organizational strategic goals. For example, DOC's Economics and Statistics Administration had an organizational strategic goal to enhance the quality and effectiveness of its economic policy support function by consolidating two subunits. The agency in turn developed a reshaping goal to streamline the offices and eliminate unneeded positions. In addition, agency program officials determined there was a decreased need for traditional clerical skills and decided to also target those positions and employees for buyouts and early outs. According to an agency official responsible for use of these authorities, the agency met its buyout and early-out projections and achieved a number of reshaping goals. It formed a new organizational structure with increased quality and efficiency in policy and administrative support functions, reduced staffing, and created a more desirable supervisor-to-employee ratio. 2. Considering not only buyouts and early outs, but also a range of alternative methods to meet reshaping goals helped agencies ensure success. Agencies in our review stressed the importance of developing workforce strategies that consider a range of methods for meeting reshaping goals, including buyouts and early outs, and said that they routinely do so. They explained that some alternatives work better than others in certain situations. For example, one agency official stated that in some areas the agency may need to strengthen its student programs and build that particular pipeline of talent, while in other skill areas it may need to redeploy employees to programs in which their skills are better utilized. USDA's Agricultural Marketing Service considered alternative methods such as placement opportunities, hiring freezes, redeployment, retraining, delaying capital purchases, and suspension of bonuses. For example, the agency provided opportunities for some employees to accept a lower-graded position or take on additional duties and responsibilities, which required on-the-job training. In addition to offering buyouts or early outs, several of the agencies considered retraining as part of their reshaping strategies. However, one agency official explained that retraining was not always a viable option.
For example, at USDA's Agricultural Research Service, the new skills required a specialized academic background, but the vast majority of the affected employees were in manual trade positions and did not have the background necessary to successfully complete a retraining program for scientific duties. Thus, the agency could only assign a few of the affected employees to other positions and used the buyout and early-out programs for the remaining affected employees. At Treasury's Office of the Comptroller of the Currency, agency officials reported that they established a working group that conducted extensive cost analyses to help design the agency's reshaping options. The group compared the costs of the agency's current field locations and identified (1) imbalances in manager-to-employee ratios, (2) a declining volume of work in some areas, and (3) overstaffing in some district offices, particularly among support positions and within some information technology units. The working group made some assumptions about the number of staff positions needed for each option and recognized that each would have an effect on employees. The group developed net savings projections for the various realignment configurations. The final decisions on the agency's district structure resulted in the closure of three district offices and the establishment of one district office in a new location. As a result, the agency recognized that some employees had needed skills but were in the wrong locations. In addition to being offered buyouts (under the agency's own authority) and early outs, a number of surplus employees were given the opportunity to transfer with their function to the new office. 3. Designing buyout and early-out programs that demonstrate a clear relationship to the agency's workforce reshaping goals helped agencies achieve those goals. Our review of agency buyout and early-out plans submitted to OPM shows that the plans identified the agencies' workforce reshaping goals and specified how using the authorities would help meet those goals. For example, following an analysis of Treasury's Bureau of the Public Debt's Information Technology (IT) programs, management set the goal of consolidating most of the bureau's IT functions into one existing organizational unit that would require fewer employees to perform the IT work. The bureau established an IT consolidation team, made up of members of management as well as human resources representatives, to develop the workforce reshaping strategy. Team members, for the most part, were those who would be directly affected by the consolidation. According to Bureau of the Public Debt officials involved with the consolidation effort, the bureau used a combined buyout and early-out program to help reshape its workforce to achieve a more optimal skills mix and to eliminate the need to use RIFs to cut excess IT positions. Prior to offering the buyout and early-out program, the agency's HR division surveyed all eligible employees. From the results of the survey, the division was able to estimate the number of interested employees and also identify individuals likely to accept an offer. The agency also decided to make offers to non-IT employees working in organizations affected by the changes in order to create open positions for surplus IT employees. Agency officials reported that they met their goal to consolidate into one existing organizational unit and also met their targeted number of employees accepting program offers, which eliminated the need for involuntary separations. 4.
Designing buyout and early-out programs that considered employees' needs helped employees cope with the changes. Agency officials responsible for use of these authorities in the agencies we reviewed pointed out that they consider employees' needs when designing and implementing programs. They believed that buyout and early-out programs should generally provide career guidance, counseling, and outplacement assistance to employees who may be displaced. Treasury's Office of the Comptroller of the Currency, when offering a buyout and early-out program, made a wide variety of other services available to employees. For example, in the offices to be closed, Treasury solicited the affected employees' relocation preferences and tried to accommodate them to the extent possible. Employees who were interested in other positions and locations could visit the new locations at the agency's expense. In addition, the agency paid relocation bonuses to some affected employees and instituted a "safe landing" program that consisted of a support network matching affected employees with "buddies" who provided encouragement and served as sounding boards; training and discussion forums that focused on career and stress management; and technical training to prepare employees for other job opportunities. Program officials in these agencies also reported that their programs are routinely reviewed internally at many levels, including agency general counsel offices, to ensure that the programs not only conform to applicable laws, union agreements, and regulations, but also are equitable from the employees' point of view. Officials at USDA's Agricultural Research Service reported that in addition to reviewing the appropriate contracts and regulations, they consulted continuously with the employee union and the agency's human capital office throughout the buyout and early-out program to ensure fairness. Additionally, the agency ensured that information packets regarding the proposed programs were provided to all eligible employees before the date the offers took effect, in an effort to give employees ample time for full consideration. Program officials at USDA's Natural Resources Conservation Service reported that their Civil Rights Division conducted a Civil Rights Impact Analysis to determine whether there were any adverse effects on employees and concluded that there were none, since every employee was offered a similar position at his or her current grade level. None of our selected agencies reported any grievances filed as a result of their buyout and early-out programs. 5. Developing a communication strategy early in the process helped to build an understanding of the purpose of planned changes. Agency officials in our review stressed that communicating early and throughout the reshaping process was critical. In a recent report, we said that creating an effective, ongoing communication strategy is essential to implementing a merger or transformation. Communicating often, accurately, and consistently was the key factor in one agency's successful strategy, according to an official responsible for use of the buyout and early-out authorities. At NIST, the Director met with staff prior to obtaining approval to offer buyouts and early outs and explained that the agency was facing financial deficits, wanted to avoid the possible need to resort to involuntary separations, and was therefore seeking the authority to offer voluntary separation incentives.
Other agencies began communicating with employees immediately after obtaining OPM approval to offer buyouts or early outs. Treasury's Office of the Comptroller of the Currency established an electronic bulletin board, and employees were invited to provide comments, suggestions, or recommendations on the agency's proposed reshaping initiative and on options that the agency's program officials might want to consider. A number of agencies also communicated with employees about the particulars of their programs through e-mails and brochures to ensure employees were kept advised of all reshaping actions. Agency officials also stressed that having highly visible top management involvement in their communication strategies helped to deliver the message about organizational changes as efficiently as possible. One agency had its managers in the components and offices targeted for reshaping brief their employees and attend group information sessions with those employees. During these sessions, employees were able to directly ask these managers specific questions about the reshaping effort, helping them to understand management's rationale for the decisions made. 6. Agencies in our review considered ways to maximize cost savings. Nearly all of the agencies in our review, in an attempt to maximize cost savings, reported that they tried to separate employees from federal service early in the fiscal year to save on salary expenses. One agency also assigned affected employees to nonaffected positions where appropriate. 7. Although they monitor who accepts buyouts and early outs, the agencies were not evaluating the continuing and future effectiveness of these authorities. Agencies in our review collected and reviewed data on the number of employees who accept offers under their individual programs compared with their intended goals. One agency has a system in place that periodically generates a report on the numbers of employees who accept buyout offers, providing management with useful information for assessing progress toward reshaping goals and making necessary adjustments. To monitor the progress and success of its reshaping initiative, another agency established a consolidation team that met weekly to discuss issues that emerged as the agency implemented this initiative and potential roadblocks to achieving its reshaping goals. The team also maintains a list of all individuals who separate from the agency with a buyout and provides this information to the agency's staff involved in recruitment and placement to help ensure that previous buyout recipients repay the full amount of the buyout payment if they are rehired within five years of receiving the buyout. However, agencies have not expanded on these monitoring efforts to provide an evaluation of the longer-term effectiveness of these tools. For example, agencies could analyze the length of service of employees in their workforces, the role this factor plays in employee decisions about accepting buyout and early-out offers, and how this could affect the composition and timing of future offers. This information on length of service in relation to the acceptance or rejection of a buyout or early-out offer could help the agencies as they plan and prepare for future programs. In addition, agencies could perform an analysis to determine the savings generated by buyouts and early outs relative to other separation strategies, such as involuntary staff reductions.
Such an analysis would help agencies determine whether the anticipated cost savings of buyouts and early outs in fact make them the best choice among the resource actions that could be taken. Agency officials we contacted who are responsible for use of these authorities found the requirement that agencies obtain OPM approval for even minor changes to their plans to be restrictive. Under OPM regulations, slight deviations from approved plans, even by one position or grade level, require additional OPM review. Officials in three agencies suggested that agencies be allowed to make such minor deviations, as long as the agencies do not exceed the total number of approved buyouts or early outs. For example, an HHS official responsible for use of these authorities explained that agencies using the authorities are in nearly all cases being asked to predict the future because, when offering buyouts, an agency never really knows how many employees will accept the offers or where they will be located. The official explained that agencies target a group of employees who will be offered the buyouts, and then hope for the best. Having the flexibility to adjust the target group to which the offer is made during the implementation of their plans could improve program results and ultimately provide additional opportunity for agencies to achieve their reshaping goals, according to this manager. OPM officials said that they believe that the CHCO Act does not provide OPM the authority to allow agencies to make changes to approved plans. Furthermore, agency officials responsible for use of these authorities in our review believe that the buyout maximum payment amount of $25,000, which has been constant since 1992, may not be enough incentive to encourage eligible workers to voluntarily leave the workforce, especially higher-salaried employees. They explained that inflation has eroded the buying power and value of buyouts, making them less attractive unless the employee is ready to retire anyway, which defeats the purpose of the program. Officials in four agencies said some consideration should be given to ways to make buyouts more attractive to employees. On the other hand, one agency official acknowledged that increasing the buyout amount could reduce the number of offers agencies could afford to make and thus could make alternative approaches to reshaping more attractive. Although they have not undertaken any studies regarding this, OPM officials believe that agencies' reporting that they are meeting their targeted number of reductions indicates the dollar amount is sufficient. In addition, some agency officials suggested that more coordination across the agencies using the programs would be helpful, as would sharing examples of how some agencies used effective practices in ways that were particularly successful or instructive in reshaping their workforces, and that OPM may be in the best position to provide this coordination. For example, several program officials suggested that OPM could sponsor forums, an interagency working group, or even additional training sessions, such as the session OPM headquarters offered in May 2005, to encourage information sharing on how agencies may more efficiently implement their buyout and early-out programs. A number of the agency suggestions for improving the outcomes achieved with the buyout and early-out programs would benefit all agencies using the authorities, but some would require legislative changes.
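To make the preceding discussion concrete, the following sketch shows one way an agency might estimate first-year savings from a buyout and gauge how inflation has eroded the fixed $25,000 cap. All of the salary figures, timing assumptions, and CPI index levels are illustrative assumptions, not agency data or a GAO methodology.

```python
# Illustrative sketch only: hypothetical figures, not drawn from agency data.
BUYOUT_CAP = 25_000  # statutory maximum, unchanged since 1992

def first_year_savings(annual_salary_and_benefits, buyout_payment=BUYOUT_CAP,
                       months_vacant=12):
    """Net first-year savings if the position is left vacant after a buyout."""
    salary_avoided = annual_salary_and_benefits * (months_vacant / 12)
    return salary_avoided - buyout_payment

def real_value(nominal, cpi_base, cpi_now):
    """Value of a nominal payment in base-year dollars, given CPI index levels."""
    return nominal * (cpi_base / cpi_now)

# Hypothetical example: an $80,000-per-year position vacated after month 3,
# so the slot stays vacant for 9 months of the fiscal year.
print(first_year_savings(80_000, months_vacant=9))   # 60,000 - 25,000 = 35,000

# Assumed CPI-U annual averages of roughly 140 (1992) and 195 (2005)
print(round(real_value(BUYOUT_CAP, 140, 195)))       # roughly 17,900 in 1992 dollars
```

A comparable calculation could be run for an involuntary reduction or for retraining, allowing the alternatives to be ranked on net cost; the inflation adjustment simply quantifies the officials' point that the unchanged cap buys noticeably less than it did in 1992.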
OPM, as the central human capital office, is well positioned to determine which governmentwide improvements to pursue and how to implement them, including regulatory changes or proposals for legislative reform. For example, OPM could assess the effects of raising the $25,000 buyout amount and, if appropriate, sponsor legislative proposals for such a change. As agencies transform to better meet 21st century challenges and changing missions, they are increasingly recognizing the need to reshape their workforces to meet these challenges. The Congress also recognized this need and responded to it by enacting legislation creating buyout and early-out programs as additional tools agencies could consider using in their efforts to reshape their workforces. Our review shows that agencies have taken advantage of these programs and are employing certain practices that help them to use these tools more effectively. Now agencies and OPM, working through each agency's CHCO, have the opportunity to make even better use of these authorities and best practices. Several agencies we reviewed have formulated ideas about how to improve them. OPM and the CHCOs, through their governmentwide Council, could provide an important service by sharing information with agencies on successful ways to use the tools and the lessons learned across agencies, and by helping agencies determine what data to collect and evaluate as indicators of the programs' results. Currently, the individual agencies in our review have not systematically evaluated the relative efficiency, continuing effectiveness, and future viability of the authorities as reshaping tools for their agencies. By conducting such evaluations, agencies would know whether they need to adjust their long-term strategy for employing these tools and whether it is more cost-efficient to spend funds on buyouts and early outs rather than on other reshaping tools, such as retraining. Finally, because OPM takes a governmentwide perspective on human capital programs, it is well positioned to assess potential improvements to these authorities, such as the agencies' proposed changes, and determine what changes to implement and the steps needed to do so. To help ensure that agencies can take full advantage of the authorities to use buyouts and early outs to reshape their workforces, we recommend that the Director of OPM, in conjunction with the CHCO Council, take the following actions.
1. Share additional information with agencies on examples of how agencies have used the practices associated with effective buyout and early-out programs to support their programs and achieve successful results. In addition, OPM could support the improvement of approval cycle time and the effectiveness of the programs by facilitating information sharing among agencies, such as by holding forums and training sessions on the use of the authorities for agencies with less-experienced staff.
2. Help agencies identify ways they can determine the extent to which the authorities have been effective tools and will continue to be in the future, and how agencies may need to tailor their reshaping strategies accordingly.
3. Assess potential program improvements, such as those the agencies have identified, for possible governmentwide implementation, and then take the steps necessary to accomplish this, such as changing the regulations governing the programs or proposing any needed statutory changes.
We provided a draft of this report to the Director of OPM.
The Director provided written comments, which are included in appendix II. The Director agreed with our recommendations, as well as our conclusion that buyouts and early outs have been effective tools for agencies implementing workforce reshaping plans. The Director also stated that the agency-specific examples included in our report provide additional insight on the usefulness of buyouts and early outs as proven workforce reshaping options, and will help agencies to emulate best practices as they manage their own restructuring plans. In addition, the Director of OPM agreed that, working with the CHCO Council, the agency could provide an important service by sharing information on successful ways to use the tools and the lessons learned across agencies. Furthermore, the Director stated that the examples will help as OPM actively works with agencies to determine what data to collect and evaluate as indicators of the program's results in an effort to measure the effectiveness of their buyout and early-out programs. Once OPM reviews agencies' actual experiences using these two tools, it will be able to consider whether changes are needed to improve their effectiveness. The Director of OPM also agreed that OPM's review of agency plans could be done more quickly and stated that OPM is expediting approval of agencies' buyout and early-out requests. The Director of OPM also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to other interested congressional parties, the Director of OPM, and the federal agencies and offices discussed in this report. In addition, we will make copies available to other interested parties upon request. This report will also be made available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-6510 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report include William Doherty, Clifton G. Douglas Jr., Charlesetta Bailey, Tom Beall, Andrew Edelson, Jeffrey McDermott, Amy Rosewarne, and Lou Smith. The objectives of our review were to identify how many agencies have sought and been granted authority to offer buyouts and early outs and the extent to which agencies have used these authorities; how selected agency officials view the Office of Personnel Management's (OPM) role in facilitating the use of the buyout and early-out authorities; what practices are associated with agencies' effective use of buyouts and early outs, how selected agencies used these practices, and whether they have helped agencies to achieve their workforce reshaping goals; and what challenges the selected agencies identified, if any, to continued effective use of these authorities. To address our first objective, we obtained and analyzed OPM data on the buyout and early-out programs authorized under the Chief Human Capital Officers Act of 2002 to obtain governmentwide data on agencies' use of the program and to help identify agencies for further review. To address our other objectives, we reviewed the OPM data to identify those federal agencies that were the most frequent users of the buyout and early-out authorities under the Act.
We selected the Departments of Agriculture, Commerce, Energy, Health and Human Services, Interior, and the Treasury for further review since they accounted for over half of all requests for the authorities at the time we started our review. We interviewed agency officials from the six selected agencies, such as human capital officers and buyout and early-out program managers, to ascertain (1) their views of OPM's role in their use of the buyout and early-out authorities, (2) the particular buyout and early-out practices they were using, (3) their views on how the practices helped them to achieve their goals, (4) the lessons learned from their experiences, and (5) the challenges to continued successful use of these authorities. As part of this review, we also asked officials to identify and discuss in more detail a buyout or early-out offer in each of fiscal years 2003, 2004, and 2005 that, in their view, was among the most successful of such offers conducted that year. In addition, using a systematic selection procedure with a random start, we reviewed a sample of OPM case files for about a tenth of all the offers made by the six selected agencies for fiscal years 2003 through 2005. We reviewed the files to determine the length of OPM's review time and also to identify instances where agencies used practices associated with effective buyout and early-out use. While our total sample of cases consisted of 40 buyout or early-out offers, we excluded 7 cases from our analysis of OPM's review time because, in these cases, the agencies requested buyouts or early outs on the assumption that they might use contractors to conduct some of the agencies' work. OPM could not approve the offers, however, until the agencies made final outsourcing decisions. In addition, we excluded 5 cases from this analysis because either OPM's review of the case was pending or we were unable to identify the agency's submission date or OPM's approval date from the file. To supplement and support the views provided by agency officials, our review also included obtaining and examining available documentation, such as strategic workforce plans, buyout and early-out plans, and other documents associated with the use of the authorities from each of our selected agencies. We also interviewed officials involved in the review process from OPM and the Office of Management and Budget to obtain information on their roles and responsibilities in approving agency requests and their perspectives on the use of buyout or early-out offers governmentwide. In addressing our third objective, we first reviewed information from relevant literature on the use of buyouts and early outs in organizations, including our prior studies of federal buyout programs in 1997 and 1998. We conducted a content analysis and included as effective practices those common to multiple sources and those previously cited in GAO reports. We next conferred with officials from the National Academy of Public Administration and the International Public Management Association, who had relevant expertise in the management of human resource programs, on the reasonableness of the buyout and early-out practices we identified. Both of these organizations have published reports on the effectiveness of federal human capital practices related to buyout and early-out programs.
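As an illustration of the systematic selection procedure with a random start described above, the following sketch draws roughly a tenth of a list of offers at a fixed interval after a random starting point. The population size and case identifiers are hypothetical placeholders; this is not the actual sampling code used for the review.

```python
import random

def systematic_sample(population, sample_size):
    """Systematic selection with a random start: pick every k-th item,
    where k is the population size divided by the desired sample size."""
    interval = len(population) / sample_size           # sampling interval k
    start = random.uniform(0, interval)                # random start within the first interval
    picks = [int(start + i * interval) for i in range(sample_size)]
    return [population[p] for p in picks]

# Hypothetical usage: draw about a tenth of a placeholder list of 400 offers
offers = [f"offer_{i}" for i in range(400)]            # placeholder case identifiers
sample = systematic_sample(offers, len(offers) // 10)  # yields 40 cases
```

Because every item has the same chance of selection once the start is drawn at random, this approach spreads the sample evenly across the list without requiring a full random draw.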
Because we designed our selection of agencies and cases to examine the experiences, practices, and perspectives of a set of agencies that, in recent years, have been relatively more engaged in conducting buyout and early-out programs, the findings are not generalizable to other agencies or the federal government as a whole. To assess the reliability of OPM's database on buyout and early-out authorities, we interviewed the officials at OPM who were knowledgeable about maintaining that database. In addition, we provided each of our six selected agencies with OPM's count of their buyout and early-out authorities to review. We determined that OPM's data were sufficiently reliable for the purpose of providing general information on the number of agencies participating in buyouts and early outs over time and the number and nature of authorities that these agencies used. Our review was conducted in accordance with generally accepted government auditing standards from October 2004 through November 2005.

Under the Chief Human Capital Officers (CHCO) Act of 2002, an agency may request authority from the Office of Personnel Management (OPM) to offer employees voluntary separation incentive payments (buyouts) and voluntary early retirement (early outs) to help reshape its workforce. GAO was asked to identify (1) how many agencies have been granted authority to offer buyouts and early outs and how often agencies used them, (2) how agencies view OPM's role in facilitating the use of these tools, (3) how agencies have used practices associated with effective use of the tools, and (4) what challenges agencies identified, if any, to continued effective use. To respond, GAO reviewed the practices of the Departments of Agriculture, Commerce, Energy, Health and Human Services, Interior, and the Treasury because they were among the most frequent users of these authorities. The total number of agencies using buyouts and early outs to reshape their workforces increased significantly, from 28 to 51, during fiscal years 2003 through 2005, and the number of programs agencies have offered over the past 3 years has also significantly increased. During this timeframe, at least 22,600 employees have separated from federal service under these authorities. Officials at all six agencies GAO reviewed believe that OPM's mandatory review of and feedback on their plans for using buyouts and early outs has improved implementation. However, nearly all of the agencies said they believe the review could at times be completed more quickly and should be streamlined. OPM on average took 36 days to review the 28 selected programs GAO assessed at the six agencies. OPM is taking steps to reduce its overall average review time, including by establishing more stringent timeframes for review and reducing the number of reviewers. The six agencies also reported using almost all of the practices GAO identified as associated with the effective use of buyouts and early outs and said that these practices resulted in better-planned programs. Agencies were not, however, using one practice that involves evaluating the longer-term effectiveness of the buyout and early-out authorities for reshaping their future workforces. Officials at the six agencies suggested that information on how some agencies used effective practices in ways that were particularly successful or instructive in reshaping their workforces could help them improve program results.
Agency officials responsible for use of these authorities from the six agencies agreed that certain reforms would help them address some of the challenges they face in implementing their programs. These include (1) increasing the current dollar amount agencies can pay under buyouts to make the programs more attractive to employees and increase the acceptance rate, and (2) allowing agencies to make minor changes to buyout and early-out plans after OPM approval. OPM is in the best position to assess these and other possible reforms and ways to achieve them.
The Buy Indian Act of 1910 authorizes the Secretary of the Interior to employ Indian labor and to purchase the products of Indian-owned firms without using the normal competitive process. As implemented, Interior's BIA may use the Buy Indian Act procurement authority. In addition, effective in 1955, Congress transferred authority over functions relating to the maintenance and operation of hospitals and health facilities for Indians, and the conservation of the health of Indians, from Interior to HHS (formerly the Department of Health, Education, and Welfare). As a result, HHS's IHS may use the Buy Indian Act procurement authority for acquisitions in connection with those functions. BIA and IHS may use the Buy Indian Act to give preference to Indian-owned businesses when acquiring supplies and services to meet agency needs and requirements. The Buy Indian Act itself is brief and contains little detail. The key to implementing the Act lies in both agencies' regulations. The two agencies have broad discretion over whether and how to utilize the Buy Indian Act and have issued agency regulations governing their use of the authority. BIA provides services to approximately 1.9 million American Indians and Alaska Natives to enhance quality of life, promote economic opportunity, and carry out the responsibility to protect and improve the trust assets of American Indians and Alaska Natives. IHS is responsible for providing health care for American Indians and Alaska Natives. To provide these services, both agencies contract for a variety of items and services, such as administrative and custodial services, maintenance projects, and office supplies. BIA and IHS are divided into twelve largely similar geographic areas across the United States, which they refer to as regional offices and area offices, respectively. Each of these offices is headed by a director. BIA and IHS headquarters set policies and oversee the regional offices. Each regional office employs contracting officers responsible for awarding contracts, including Buy Indian Act contracts. Each agency also awards some contracts through its headquarters offices. See figures 1 and 2 below for information on the regional structure of each of the agencies. BIA and IHS have policies and procedures in place to implement the Buy Indian Act and to help ensure contractors' compliance with key requirements. However, both agencies' headquarters have limited insight into implementation of the Act at regional offices. BIA and IHS both implement the Buy Indian Act authority through a combination of regulations, agency policy, and guidance. BIA officials told us they prioritize the use of the Buy Indian Act over other set-aside authorities. Conversely, IHS officials reported prioritizing awards through other set-asides over the use of the Act to meet federally mandated small business goals. However, these priorities are not documented in regulations or policies. Both agencies have regulations to help ensure contractors comply with key requirements, such as maintaining the minimum proportion of Indian ownership, not subcontracting more than half of the contracted work to other than Indian firms, and providing a preference to Indians in employment, training, and subcontracting.
Although these regulations are in place, headquarters officials at both agencies reported limited insight into implementation of these regulations at their regional offices because they do not collect data concerning the Buy Indian Act from regional offices, nor does either agency have a specific review of Buy Indian Act contracts included in its regular procurement review process. Both agencies have regulations and policies in place to implement the Buy Indian Act, codified in formal rules and agency guidance. In 2013, Interior finalized regulations implementing the Buy Indian Act in the Department of the Interior Acquisition Regulation, over 30 years after they were first proposed. The rule solidified the processes to be used for implementation and provided a consistent policy to be used throughout BIA. BIA began promulgating the regulations in October 1982, with proposed rules published in the Federal Register, and made additional efforts to establish regulations over the next 30 years until the final regulation took effect in 2013. BIA officials could not identify a specific reason why finalization of the regulations took so long. Prior to issuing a rule, BIA issued an internal policy manual to govern the program and provided guidance to its employees through a series of policy memoranda. BIA cited creating a more uniform process and applying it more consistently as the main reasons for pursuing a formal regulation. HHS, which at the time was the Department of Health, Education, and Welfare, issued Buy Indian Act regulations in 1975. A modified version of these regulations was later incorporated into the Department of Health and Human Services Acquisition Regulation. IHS also issued the Indian Health Manual to provide additional guidance to its employees regarding the IHS procurement process, including specific policies regarding the Buy Indian Act. The chapter of the Indian Health Manual that contains requirements related to the Buy Indian Act is currently under revision. BIA and IHS define the term "Indian" in their regulations somewhat differently. BIA regulations define "Indian" as a person who is a member of an Indian tribe, or a "Native" as defined in the Alaska Native Claims Settlement Act. IHS, however, defines "Indian" as a member of any tribe, pueblo, band, group, village, or community that is recognized by the Secretary of the Interior as being Indian, or any individual or group of individuals recognized by the Secretary of the Interior or the Secretary of HHS. Both agencies' implementing regulations impose key requirements on contractors. First, both agencies require eligible firms to be 51 percent Indian-owned. Second, firms awarded a contract under the Buy Indian Act must give preference to Indians in employment and training opportunities under the contract, and to Indian firms in the award of any subcontracts. Third, firms awarded a contract under the Buy Indian Act must not subcontract more than 50 percent of the work to other than Indian firms. BIA officials stated that contracting officers must consider the Buy Indian Act first for every contract award and, if they are unable to award a contract using the Act, must justify why not. However, this policy is not currently documented. According to BIA officials, policy documentation was recently rescinded because it was confusing and not fully in line with the intent to award contracts under the Buy Indian Act first. BIA is working on revising its policy on the use of the Buy Indian Act.
Officials were uncertain exactly when the new guidance would be issued. At IHS, the relative priority of the Buy Indian Act versus other set-aside programs is unclear and not sufficiently documented. IHS officials told us that, because of difficulties meeting small business goals, the agency prioritizes awarding contracts to vendors that help the agency meet its federally mandated small business goals, and that awarding contracts under the Buy Indian Act is secondary to those goals. They also stated that since June 2005, there has been an effort within the agency to encourage Indian-owned firms to seek status under set-asides other than the Buy Indian Act, such as women-owned or veteran-owned small businesses. IHS was unable to provide documentation related to this practice, and was only able to produce a 1995 policy that, contrary to what we were told, indicated that the Buy Indian Act takes precedence over other set-asides. The lack of documented policy at BIA and IHS is not consistent with federal internal control standards, which provide that formally documented policies and procedures help to ensure that staff performs activities consistently across an agency. Without documented policies in place, BIA and IHS are at risk of inconsistent application of the Buy Indian Act across each agency. Both agencies' regulations provide for mechanisms to help enforce key requirements of the Buy Indian Act as implemented. Specifically, both agencies' regulations require that firms awarded a contract under the Buy Indian Act be at least 51 percent Indian-owned, provide a preference to Indians in employment, training, and subcontracting, and not subcontract more than 50 percent of the work to other than Indian firms. These key requirements are implemented through mechanisms such as self-certification procedures and, in some cases, specific contract clauses. Both agencies rely on firms to self-certify their status as Indian-owned. BIA requires firms to represent their Indian-owned status by checking a box, when submitting a proposal for a contract, indicating that they meet the relevant regulatory definitions. Contract clauses included in all BIA Buy Indian Act contracts require firms to report any change in Indian-owned status. According to IHS officials, bidders on IHS Buy Indian Act contracts must submit a certificate of degree of Indian blood or other form of tribal membership documentation as part of an application packet in order to be considered eligible for a Buy Indian contract. According to agency officials at both BIA and IHS, under self-certification, contracting officers may request more information from a bidding firm to confirm its Indian-owned status, though officials report this is rarely done. At BIA, the contracting officer may ask an attorney in the appropriate regional office to review a firm's representation. In addition, after receipt of offers, the contracting officer may question the representation of any bidder by filing a formal objection with the chief of the contracting office. (See 48 C.F.R. § 1452.280-4 for the Indian economic enterprise representation provision included in Buy Indian solicitations.) These procedures constitute the Buy Indian Act-specific challenge process at BIA. IHS handles challenges according to the protest procedures set out in the Federal Acquisition Regulation. To deter intentional misrepresentations of Indian-owned status, both agencies rely on the suspension and debarment process and on prosecution under the federal false statement statute.
Both agencies’ regulations require firms that are awarded a contract under the Buy Indian Act to give preference to Indians in employment, training, and subcontracting. Contracts awarded under the Buy Indian Act, and all resulting subcontracts, are required to contain the Indian Preference Clause, which specifically requires the contractor to provide a preference to Indians in employment, training, and subcontracting opportunities under the contract. The clause further requires the contractor to maintain sufficient records indicating compliance. Agency officials at both BIA and IHS told us they have regulations limiting subcontracting with other than Indian-owned firms to no more than 50 percent of the work, although only BIA implements this requirement through the inclusion of a contract clause. Violations of contract clauses can have serious consequences such as contract termination. This approach is similar to how limitations on subcontracting might be handled for some small business contracts. For example, when awarding an 8(a) contract, the Federal Acquisition Regulation directs contracting officers to include the Limitations on Subcontracting clause, under which the contractor agrees that the 8(a) firm will perform a certain percentage of the work. According to BIA and IHS officials, neither agency employs systematic monitoring or compliance protocols—such as systematic reporting on specific Buy Indian requirements—to ensure that contractors comply with key requirements and contract clauses beyond regular contracting officer oversight. The Buy Indian Act is not necessarily unique in this regard. The Small Business Administration’s women-owned small business and economically disadvantaged women-owned small business programs generally rely on self-certification and oversight by the contracting officer as well. Certain contracts or types of work involve more stringent, and more specific, monitoring requirements. For example, for contracts over $50,000, BIA requires contractors to appoint a liaison officer in charge of keeping records for its Indian preference program and to issue semi- annual reports. IHS monitors compliance in a similar fashion, requiring a liaison officer for non-construction contracts equal to or over $50,000 and construction contracts equal to or over $100,000. Federal internal control standards state that, for an entity to run and control its operations, it must have relevant, reliable, and timely communications, and that information is needed throughout the agency to achieve all of its objectives. The standards further state that operating information is needed to determine whether an agency is complying with various laws and regulations. We found that BIA and IHS headquarters officials have limited insight into the Buy Indian Act implementation in the regional offices. Specifically, both BIA’s and IHS’s headquarters-level procurement managers stated that they had little knowledge about challenges to a firm’s self-certification of Indian-owned status that might have occurred in the regional offices. When asked about how frequently challenges occurred or how they were resolved, BIA and IHS officials told us they would have to consult with regional offices to provide this information. Also, officials at both agencies told us they do not aggregate data relating to challenges. More broadly, these officials reported they do not require regional offices to collect, retain, or aggregate data about compliance with Buy Indian requirements in a systematic fashion. 
Given this lack of insight, it is difficult for BIA and IHS officials to know whether the Buy Indian Act is being consistently applied among the regions, or for the agencies to determine the extent to which mechanisms to implement key requirements are working as intended. Both agencies also identified a specific process in place for reviewing procurements awarded at their regional offices, but these reviews have not historically included an examination of contracts awarded using the Buy Indian Act. For example, Interior requires BIA to conduct biannual acquisition reviews at each of its regional offices, but officials told us these reviews have not previously examined the use of the Buy Indian Act in particular. Following a series of informal, region-by-region reviews starting in mid-2015, BIA plans to include Buy Indian Act requirements in future formal acquisition reviews. Similarly, IHS officials reported that the agency recently completed a periodic procurement management review of contracts awarded at its regional offices, but stated they were not aware of any reviews, past or planned, specifically focused on the use of the Buy Indian Act. By not reviewing Buy Indian Act contracts as part of the procurement review process, both agencies are missing opportunities to ensure effective oversight of these contract awards. Use of the Buy Indian Act accounts for a small percentage of BIA and IHS contract obligations. However, both agencies also award contracts to Indian-owned firms using other authorities, thus increasing the percentage of obligations awarded to Indian-owned firms. During the period covered by our review, both agencies awarded contracts using the Buy Indian Act authority to more than 300 different Indian-owned firms. The types of goods and services purchased under the Act varied, and included maintenance, medical, custodial, administrative support, and office supplies. Use of the Buy Indian Act represents a small percentage of both BIA's and IHS's annual contract obligations. However, both agencies can and do use other procurement authorities to award contracts to Indian-owned firms, thus increasing the overall percentage of contracts awarded to such firms. Officials from both agencies noted that it would be difficult to set aside all contract obligations for award under the Buy Indian Act because some requirements, such as those for utilities, may not be suitable for award using the Act. Figures 3 and 4 show the annual percentage of obligations under the Buy Indian Act, Indian-owned obligations awarded through other procurement authorities, and non-Indian-owned obligations for BIA and IHS, respectively. We also found that BIA and IHS were awarding contracts using the Buy Indian Act authority to a number of different Indian-owned firms. Specifically, in fiscal years 2010 through 2014, BIA awarded 732 contracts to 269 vendors and IHS awarded 131 contracts to 84 vendors. Use of the Buy Indian Act at both agencies' offices varies. Based on data from FPDS-NG from fiscal years 2010 through 2014, most of the agencies' offices awarded contracts under the Buy Indian Act, although some used it more than others. For example, at BIA, while almost half of the total Buy Indian Act obligations across this time frame were awarded by the agency's central office, a headquarters office, that office's share decreased from about two-thirds of the agency's Buy Indian Act obligations in 2010 to about one-third in 2014.
Other offices with relatively high percentages of use included the Navajo and Western regional offices. At IHS, the majority of the Buy Indian Act obligations were awarded through its Albuquerque, Phoenix, and California area offices. See figures 5 and 6 for more details about the offices' contract obligations using the Act. BIA and IHS have used the Buy Indian Act to purchase a variety of goods and services in areas such as maintenance, medical, custodial, administrative support, and office supplies. At BIA, the types of goods and services purchased are more varied, with architecture- and engineering-related goods and services being the most common (see figure 7). Based on our analysis, IHS primarily uses the Buy Indian Act to purchase goods and services in three areas: professional and administrative support services, medically related goods and services, and custodial or housekeeping goods and services (see figure 8). The use of the Buy Indian Act is intended to promote growth and development of Indian industries and, like any set-aside program, is important in helping these businesses in the marketplace. Interior and HHS have issued regulations and other guidance for implementing the Act but reported differing priorities for its use. While it is within each agency's discretion to establish these priorities, it is important that these priorities be clearly documented. Both agencies lack current documentation of these stated priorities. Without clear and documented guidance on their priorities, BIA and IHS are at risk of inconsistent implementation. We also found that BIA and IHS have limited insight into how key requirements, such as self-certification and potential challenges to those certifications, are being implemented at their regional offices, where the contracts are awarded. Both agencies would benefit from collecting data on use of the Buy Indian Act from regional offices as well as including a review of contracts awarded using the Act in their oversight reviews. Without knowledge of how the regions are implementing requirements related to the Act, both agencies may be missing opportunities to improve use of the Act. Information about the number of challenges, for instance, or detailed reporting on how contractors are meeting their Indian preference requirements, could point to issues in need of attention. Conversely, such information might also highlight regional innovations that, if implemented more broadly, could improve the use of the Buy Indian Act across each agency. Such information could help ensure that both agencies are maximizing the benefits intended in terms of the growth and development of Indian industries. To ensure consistent implementation of the Buy Indian Act procurement authority across the agencies and to enhance oversight of implementation of the Act at regional offices, we recommend that the Secretaries of the Interior and Health and Human Services direct the Bureau of Indian Affairs and the Indian Health Service, respectively, to take the following three actions: (1) clarify and codify their policies related to the priority for use of the Buy Indian Act, including whether the Buy Indian Act should be used before other set-aside programs; (2) collect data on regional offices' implementation of key requirements, such as challenges to self-certification; and (3) include Buy Indian Act contracts as a part of their regular procurement review process. We provided a draft of this report to Interior and HHS for review and comment.
Both agencies concurred with our recommendations and identified actions they are taking or plan to take to address the recommendations. HHS also provided technical comments, which we incorporated as appropriate. Interior indicated it is in the process of updating policy that will more clearly define the priority of the use of the Buy Indian Act authority. Interior also indicated it would develop policy and procedure requirements for collecting data bureau-wide, including from all Indian Affairs offices that initiate procurement actions, on the Act's key requirements, including self-certification, verification, and validation. Additionally, Interior indicated it plans to incorporate information related to Buy Indian Act contracts into its checklist for its annual reviews. HHS plans to clarify and codify policies related to the priority for use of the Buy Indian Act in the Indian Health Manual. HHS also plans to conduct a review of contracts awarded under the Buy Indian Act as part of its internal procurement oversight reviews and to require all acquisition offices to conduct regular Buy Indian Act procurement reviews. Additionally, HHS stated it plans to continue oversight to ensure that contractors comply with key requirements and that the agency will collect data on contracts defined in FPDS-NG as American Indian/Alaska Native owned, but did not specify the extent to which data would be collected on the regional offices' implementation of key requirements. As HHS implements our recommendations, we continue to emphasize the importance of oversight and data collection at the regional office level and encourage HHS to take the necessary steps to collect data from these offices. Interior's and HHS's written comments are reprinted in appendix I and appendix II, respectively. We are sending copies of this report to interested congressional committees, the Secretary of the Interior, and the Secretary of Health and Human Services. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please call me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the contact named above, Janet McKelvey, Assistant Director; Julie C. Hadley, Analyst-in-Charge; Matthew J. Ambrose, Danielle R. Greene, Kristine R. Hassinger, Julia M. Kennon, Jeffery D. Malcolm, and Roxanna T. Sun made key contributions to this report.

The Buy Indian Act of 1910 and agencies' implementing regulations allow Interior's BIA and the Department of Health and Human Services' IHS to award federal contracts to Indian-owned businesses without using the standard competitive process. Among other requirements, eligible firms must be at least 51 percent Indian-owned and give preference to Indians in employment, training, and subcontracting. GAO was asked to review the implementation of the Buy Indian Act. This report identifies (1) the policies and procedures at BIA and IHS to implement the Act and (2) the funds obligated by BIA and IHS using the Buy Indian Act procurement authority. GAO reviewed the Buy Indian Act, the Federal Acquisition Regulation, and agency policies and regulations.
GAO also analyzed data from the Federal Procurement Data System-Next Generation on BIA and IHS’s contract obligations under the Act between fiscal years 2010 and 2014 and met with agency officials. The Department of the Interior’s (Interior) Bureau of Indian Affairs (BIA) and the Department of Health and Human Services’ Indian Health Service (IHS) have requirements in place to implement the Buy Indian Act. Through supplements to the Federal Acquisition Regulation, both BIA and IHS have policies and procedures to implement key requirements: Indian-owned status. Eligible firms must be at least 51 percent Indian-owned. The agencies rely on firms to self-certify that they are Indian-owned, and interested parties may challenge a firm’s self-certification. Indian preference. The agencies require that contractors give preference to Indians in employment and training opportunities, and use a contract clause to implement this requirement. Subcontracting. The agencies require contractors to give preference to Indian firms in the award of any subcontracts. However, BIA and IHS have limited insight into implementation of the Buy Indian Act at their regional offices, where the contracts are generally awarded. For example, officials at both agencies’ headquarters had little knowledge as to how often challenges to self-certifications of Indian-owned status occur on contracts awarded at the regional offices. Neither agency collects data from regional offices on use of the Buy Indian Act, and neither agency includes a specific review of Buy Indian Act contracts in its regular procurement review process. Therefore, the agencies may be missing opportunities to maximize the intended benefits of the Act in terms of growth and development of Indian firms. Use of the Buy Indian Act comprises a small percentage of the two agencies’ annual contract obligations. However, these agencies also award contracts to Indian-owned firms using other authorities, thus increasing the percentage of obligations awarded to Indian-owned firms. GAO recommends, among other things, that Interior and Health and Human Services enhance their oversight of execution of the Act at regional offices by collecting additional data on key requirements and including Buy Indian Act contracts in procurement reviews. Interior and Health and Human Services agreed with GAO's recommendations.
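The obligation figures summarized above were drawn from FPDS-NG contract data. As a rough illustration of how that kind of aggregation can be done, the sketch below totals obligations by awarding office and by product or service category. The file name and column names are hypothetical stand-ins, not the actual FPDS-NG field names used in the analysis described above.

```python
# Illustrative aggregation of contract obligation data; the file and column names
# below are hypothetical stand-ins for fields in an FPDS-NG extract.
import pandas as pd

# Hypothetical extract of Buy Indian Act contract actions.
awards = pd.read_csv("buy_indian_act_awards.csv")

# Total obligations by the office that awarded the contract (compare figures 5 and 6).
by_office = (awards.groupby("contracting_office")["obligated_amount"]
                   .sum()
                   .sort_values(ascending=False))

# Total obligations by product or service category (compare figures 7 and 8).
by_category = (awards.groupby("product_service_category")["obligated_amount"]
                     .sum()
                     .sort_values(ascending=False))

print(by_office.head(10))
print(by_category.head(10))
```

A summary of this kind would show, for example, which regional or area offices account for the bulk of Buy Indian Act obligations and which categories of goods and services are purchased most often.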
A U.S. passport is not only a travel document but also an official verification of the bearer’s origin, identity, and nationality. Under U.S. law, the Secretary of State has the authority to issue passports. Only U.S. nationals may obtain a U.S. passport, and evidence of citizenship or nationality is required with every passport application. Federal regulations list those who do not qualify for a U.S. passport, including those who are subjects of a federal felony warrant. The Deputy Assistant Secretary for Passport Services oversees the Passport Services Office, the largest component of State’s Consular Affairs Bureau. Passport Services consists of three headquarters offices: Policy Planning and Legal Advisory Services; Field Operations; and Information Management and Liaison. Also within Consular Affairs is the Office of Consular Fraud Prevention, which addresses passport, visa, and other types of consular fraud; the Consular Systems Division, responsible for the computer systems involved in passport services and other consular operations; and the Office for American Citizens Services, which handles most issues relating to passport cases at overseas posts. The Bureau of Diplomatic Security is responsible for investigating individual cases of suspected passport and visa fraud. The State Department Office of the Inspector General (OIG) also has some authority to investigate passport fraud. State operates 16 domestic passport-issuing offices, which employ approximately 480 passport examiners who approve and issue most U.S. passports that are printed each year. The number of passports issued by domestic passport offices has risen steadily in recent years, increasing from about 7.3 million in fiscal year 2000 to 8.8 million in fiscal year 2004. Overseas posts deal with a much lower volume of passports by comparison, handling about 300,000 worldwide in fiscal year 2004. The majority of passport applications are submitted by mail or in-person at one of almost 7,000 passport application acceptance facilities nationwide. The passport acceptance agents at these facilities are responsible for, among other things, verifying whether an applicant’s identification document (such as a driver’s license) actually matches the applicant. Then, through a process called adjudication, passport examiners determine whether they should issue each applicant a passport. Adjudication requires the examiner to scrutinize identification and citizenship documents presented by applicants to verify their identity and U.S. citizenship. The passport adjudication process is facilitated by computer systems, including the Travel Document Issuance System, which appears on passport examiners’ screens when the adjudication begins and automatically checks the applicant’s name against several databases. Figure 1 identifies the key computer databases available to help examiners adjudicate passport applications and detect potential fraud. In addition, examiners scrutinize paper documents and other relevant information during the fraud detection process, watch for suspicious behavior and travel plans, and request additional identification when they feel the documents presented are insufficient. When examiners detect potentially fraudulent passport applications, they send the applications to their local fraud prevention office for review and potential referral to State’s Bureau of Diplomatic Security for further investigation. 
State’s Bureau of Diplomatic Security investigators stated that imposters’ use of assumed identities, supported by genuine but fraudulently obtained identification documents, was a common and successful way to fraudulently obtain a U.S. passport. This method accounted for 69 percent of passport fraud detected in fiscal year 2004. Investigators found numerous examples of aliens and U.S. citizens obtaining U.S. passports using a false identity or the documentation of others to hide their true identity. In one example, in 1997, a naturalized U.S. citizen born in Cuba stole a Lear jet and transported it to Nicaragua. At the time of his arrest in 2003, he was using an assumed identity and possessed both false and legitimate but fraudulently obtained identification documents, including a U.S. passport in the name he used while posing as a certified pilot and illegally providing flight instruction. Seized at his residence when he was arrested were two Social Security cards, four driver’s licenses, three Puerto Rican birth certificates, one U.S. passport, one pilot identification card, numerous credit cards and checking account cards, and items used to make fraudulent documents. In October 2004, he pled guilty to knowingly possessing five or more “authentication devices” and false identification documents, for which he was sentenced to 8 months’ confinement. In another case, a man wanted for murdering his wife obtained a Colorado driver’s license and a passport using a friend’s Social Security number and date and place of birth. Three and four years later, he obtained renewal and replacement passports, respectively, in the same assumed identity. He was later arrested and pled guilty to making a false statement in an application for a passport. He was sentenced to about 7 months’ time served and returned to California to stand trial for murdering his wife. Applicants commit passport fraud through other means, including submitting false claims of lost, stolen, or mutilated passports; child substitution; and counterfeit citizenship documents. Some fraudulently obtain new passports by claiming to have lost their passport or had it stolen or damaged. For example, one individual who used another person’s Social Security number and Ohio driver’s license to report a lost passport obtained a replacement passport through the one-day expedited service. This fraudulently obtained passport was used to obtain entry into the United States 14 times in less than three years. Diplomatic Security officials told us that another means of passport fraud occurs when individuals obtain replacement passports by using expired passports containing photographs of individuals they closely resemble. This method of fraud is more easily and commonly committed with children, with false applications based on photographs of children who look similar to the child applicant. Assuming the identity of a deceased person is another means of fraudulently applying for a passport. According to State Bureau of Diplomatic Security documents, passport fraud is often committed in connection with other crimes, including narcotics trafficking, organized crime, money laundering, and alien smuggling. According to Diplomatic Security officials, concerns exist within the law enforcement and intelligence communities that passport fraud could also be used to help facilitate acts of terrorism. Using a passport with a false identity helps criminals to conceal their movements and activities, and U.S.
passports provide their holders free passage into our country with much less scrutiny than is given to foreign citizens. U.S. passports also allow visa-free passage into many countries around the world, providing obvious benefits to criminals operating on an international scale. According to State officials, the most common crime associated with passport fraud is illegal immigration. For example, one woman was recently convicted for organizing and leading a large-scale passport fraud ring that involved recruiting American women to sell their children’s identities, so that foreign nationals could fraudulently obtain passports and enter the United States illegally. According to the Department of State, the woman targeted drug-dependent women and their children, paying them about $300 for each identity and then using the identities to apply for passports. The woman then sold the fraudulently obtained passports to illegal aliens for as much as $6,000 each. One of the key challenges to State’s fraud detection efforts is limited interagency information sharing. Specifically, State currently lacks access to the Terrorist Screening Center’s consolidated terrorist watch list database, which was created in 2003 to improve information sharing among government agencies. By consolidating terrorist watch lists, TSC is intended to enable federal agencies to access critical information quickly when a suspected terrorist is encountered or stopped within the United States, at the country’s borders, or at embassies overseas. However, because State’s CLASS name-check database does not contain the TSC information, U.S. citizens with possible ties to terrorism could potentially obtain passports and travel internationally without the knowledge of appropriate authorities. Although TSC has been operational since December 2003, State and TSC did not begin exploring the possibility of uploading data from the TSC database into passport CLASS until December 2004. State and TSC have not reached an agreement about information-sharing, though State sent an official proposal to TSC in January 2005. A TSC official told us that she does not foresee any technical limitations, and added that TSC agrees that it is important to work out an agreement with State. We recommended that State and other parties expedite such arrangements, and State said that it and the TSC are actively working to do so. Because the FBI and other law enforcement agencies do not currently provide State with the names of all individuals wanted by federal law enforcement authorities, State’s CLASS name-check system does not contain the names of many federal fugitives, some wanted for murder and other violent crimes; these fugitives could therefore obtain passports and potentially flee the country. The subjects of federal felony arrest warrants are not entitled to a U.S. passport. According to FBI officials, FBI databases contain the names of approximately 37,000 individuals wanted on federal charges. State Department officials acknowledge that many of these individuals are not listed in CLASS. We tested the names of 43 different federal fugitives and found that just 23 were in CLASS; therefore, passport examiners would not be alerted about the individuals’ wanted status if any of the other 20 not in CLASS applied for a passport. In fact, one of these 20 did obtain an updated U.S. passport 17 months after the FBI had listed the individual in its database as wanted. 
A number of the 20 federal fugitives who were included in our test and were found not to be in CLASS were suspected of serious crimes, including murder. One was on the FBI’s Ten Most Wanted list. Table 1 lists the crimes of which the federal fugitives in our test were suspected. State officials told us that they had not initiated efforts to improve information sharing with the FBI on passport-related matters until the summer of 2004 because they had previously been under the impression that the U.S. Marshals Service was already sending to CLASS the names of all fugitives wanted by federal law enforcement authorities. State officials were not aware that the information in the U.S. Marshals Service’s database was not as comprehensive as that contained in the FBI-operated National Crime Information Center database. State officials became aware of this situation when the union representing passport examiners brought to their attention that a number of individuals on the FBI’s Ten Most Wanted list were not in CLASS. In the summer of 2004, the FBI agreed to State’s request to provide the names from the FBI’s Ten Most Wanted list. As part of these discussions, State and the FBI explored other information-sharing opportunities as well, and FBI headquarters officials sent a message instructing agents in its field offices how to provide names of U.S. citizens who are FBI fugitives to State on a case-by-case basis. Additionally, State began discussions with the FBI about receiving information on individuals with FBI warrants on a more routine and comprehensive basis. According to FBI officials, State requested that the FBI provide only the names of FBI fugitives and not those of individuals wanted by other federal law enforcement entities. However, the FBI is the only law enforcement agency that systematically compiles comprehensive information on individuals wanted by all federal law enforcement agencies, and, according to FBI officials, it is the logical agency to provide such comprehensive information to State. We recommended that State expedite arrangements to enhance interagency information sharing with the FBI to ensure that the CLASS system contains a more comprehensive list of federal fugitives. According to State, it sent a written request on this issue to the FBI in April 2005. State also noted that it had reached agreement in principle with the FBI on information-sharing efforts related to FBI fugitives. In addition to its role in compiling information on federal fugitives, the FBI is also the only law enforcement agency that compiles comprehensive information on individuals wanted by state and local authorities. According to FBI officials, FBI databases contain the names of approximately 1.2 million individuals wanted on state and local charges nationwide. FBI officials told us that some of the most serious crimes committed often involve only state and local charges. We tested the names of 24 different state fugitives and found that just 7 were in CLASS; therefore, the CLASS system would not flag any of the other 17, were they to apply for a passport. Table 2 lists the crimes of which the 17 tested state fugitives not in CLASS were suspected. During our review, State Department officials told us that having a comprehensive list of names that included both federal and state fugitives could “clog” State’s CLASS system and slow the passport adjudication process.
They also expressed concern that the course of action required of State would not always be clear for cases involving passport applicants wanted on state charges. We recommended that State work with the FBI to ensure that the CLASS system contains a more comprehensive list of state fugitives. In commenting on a draft of our report, State said that it now intends to work with the FBI and U.S. Marshals Service to establish an automated mechanism for integrating information on state warrants into CLASS. State does not maintain a centralized and up-to-date electronic fraud prevention library, which would enable passport-issuing office personnel to efficiently share fraud prevention information and tools. As a result, fraud prevention information is provided inconsistently to examiners among the 16 domestic offices. For example, at some offices, examiners maintain individual sets of fraud prevention materials. Some print out individual fraud alerts and other related documents and file them in binders. Others archive individual e-mails and other documents electronically. Some examiners told us that the sheer volume of fraud-related materials they receive makes it impossible to maintain and use these resources in an organized and systematic way. Other information-sharing tools have not been effectively maintained. Consular Affairs’ Office of Consular Fraud Prevention maintains a Web site and “e-room” with some information on fraud alerts, lost and stolen state birth documents, and other resources related to fraud detection, though fraud prevention officials told us the Web site is not kept up to date, is poorly organized, and is difficult to navigate. We directly observed information available on this Web site during separate visits to State’s passport-issuing offices and noted that some of the material was outdated by more than a year. The issuing office in Seattle developed its own online fraud library that included information such as the specific serial numbers of blank birth certificates that were stolen, false driver’s licenses, fraud prevention training materials, and a host of other fraud prevention information resources and links. However, this library is no longer updated. Most of the 16 Fraud Prevention Managers we talked to believed that the Bureau of Consular Affairs should maintain a centralized library of this nature for offices nationwide. We recommended that State establish and maintain a centralized and up-to-date electronic fraud prevention library that would enable passport agency personnel at different locations across the United States to efficiently access and share fraud prevention information and tools. Commenting on our draft report, State said that it now intends to design a centralized online passport “knowledgebase” that will include extensive sections on fraud prevention resources. In January 2004, State eliminated the assistant fraud prevention manager position that had existed at most of its domestic passport-issuing offices, and most Fraud Prevention Managers believe that this action was harmful to their fraud detection program. State eliminated the position primarily to enable more senior passport examiners to serve in that role on a rotational basis to gain deeper knowledge of the subject matter and enhance overall fraud detection efforts when they returned to adjudicating passport applications.
However, managers at 10 of the 12 offices that previously had permanent assistants told us that the loss of this position had been harmful to their fraud detection program. In particular, managers indicated that the loss of their assistant impacted their own ability to concentrate on fraud detection by adding to their workload significant additional training, administrative, and networking responsibilities, while also diverting from their fraud trend analysis and preparation of reports and case referrals. Fraud Prevention Managers and other State officials have linked declining fraud referrals to the loss of the assistant fraud prevention manager position. In the 12 offices that previously had permanent assistants, fraud referral rates from the managers to Diplomatic Security decreased overall by almost 25 percent from fiscal year 2003 through 2004, the period during which the position was eliminated, and this percentage was much higher in some offices. Without their assistants helping them screen fraud referrals, check applicant information, and assist with other duties related to the process, managers said they are making fewer fraud referrals to Diplomatic Security because they lack the time and do not believe they can fully rely on new rotational staff to take on these responsibilities. We recommended that State consider designating additional positions for fraud prevention coordination and training in domestic passport-issuing offices. Passport Services management told us they were not planning to re-establish the permanent assistant role, but that they are in the process of filling one to two additional fraud prevention manager positions at each of the 2 offices with the largest workloads nationwide. State also plans to establish one additional fraud prevention manager position at another issuing office with a large workload. Commenting on our draft report, State said that it would now also consider rotating GS-12 Adjudication Supervisors through local fraud prevention offices to relieve Fraud Prevention Managers of some of their training responsibilities. State routinely transfers adjudication cases among the different offices to balance workloads, and Fraud Prevention Managers at a number of issuing offices said they had noticed a lower percentage of fraud referrals returned to them from the 3 offices that were assigned a bulk of the workload transfers. In fiscal year 2004, 28 percent of passport applications were transferred to 1 of these 3 offices for adjudication, while other issuing offices adjudicated 72 percent. Although these 3 offices received 28 percent of the applications, they provided only 11 percent of total fraud referrals to the originating agencies. For fiscal year 2003, the 3 processing centers adjudicated 26 percent of the applications but provided only 8 percent of the fraud referrals. In 2004, 1 of the issuing offices transferred out to processing centers 63 percent of its applications (about 287,000) but received back from the processing centers only 2 percent of the fraud referrals it generated that year. In 2003, this office transferred out 66 percent of its workload while receiving back only 8 percent of its total fraud referrals. 
Fraud Prevention Managers and other officials told us that one reason fewer fraud referrals return from these 3 offices is that passport examiners handling workload transfers from a number of different regions are not as familiar with the demographics, neighborhoods, and other local characteristics of a particular region as are the examiners who live and work there. For example, some officials noted that, in instances when they suspect fraud, they might telephone the applicants to ask for additional information so they can engage in polite conversation and ask casual questions, such as where they grew up, what school they attended, and other information. The officials noted that, due to their familiarity with the area, applicants’ answers to such questions may quickly indicate whether or not their application is likely to be fraudulent. One examiner in an office that handled workload transfers from areas with large Spanish-speaking populations said that the office had an insufficient number of Spanish-speaking examiners, emphasizing the usefulness of that skill in detecting dialects, accents, handwriting, and cultural references that conflict with information provided in passport applications. We recommended that State assess the extent to which, and the reasons why, workload transfers from one domestic passport-issuing office to another were, in some cases, associated with fewer fraud referrals and take any corrective action that may be necessary. In its official comments on our draft report, State did not address this recommendation. State has not established a core curriculum and ongoing training requirements for experienced passport examiners, and thus such training is provided unevenly at different passport-issuing offices. State recently developed a standardized training program for new hires that was first given in August 2004. However, we reviewed the training programs and materials at all 7 issuing offices we visited, discussed the programs and materials at other offices with the remaining 9 Fraud Prevention Managers by telephone, and found that the topics covered and the amount and depth of training varied widely by office. Some had developed region-specific materials; others relied more heavily on materials that had been developed by passport officials in Washington, D.C., and were largely outdated. Some scheduled more regular training sessions, and others did so more sporadically. Several examiners told us they had not received any formal, interactive fraud prevention training in at least 4 years. Some Fraud Prevention Managers hold brief discussions on specific fraud cases and trends at monthly staff meetings, and they rely on these discussions to serve as refresher training. Some Fraud Prevention Managers occasionally invite officials from other government agencies, such as the Secret Service or DHS, to share their fraud expertise. However, these meetings take place only when time is available. For example, officials at one issuing office said the monthly meetings had not been held for several months because of high workload; another manager said he rarely has time for any monthly meetings; and two others said they do not hold such discussions but e-mail recent fraud trend alerts and information to examiners. We recommended that State establish a core curriculum and ongoing fraud prevention training requirements for all passport examiners.
State said that it is implementing a standardized national training program for new passport examiners but that it is still providing training to existing passport examiners on a decentralized basis. State officials told us that they intend to develop a national training program for experienced examiners after certain organizational changes are made in State’s headquarters passport operation. Numerous passport-issuing agency officials and Diplomatic Security investigators told us that the acceptance agent program is a significant fraud vulnerability. Examples of acceptance agent problems that were brought to our attention include important information missing from documentation and identification photos that did not match the applicant presenting the documentation. Officials at one issuing office said that their office often sees the same mistakes multiple times from the same acceptance facility. These officials attributed problems with applications received through acceptance agents to the sporadic training provided to acceptance agents and the limited oversight of them. State has almost 7,000 passport acceptance agency offices, and none of the 16 issuing offices provide comprehensive annual training or oversight to all acceptance agency offices in their area. Instead, the issuing offices concentrate their training and oversight visits on agency offices geographically nearest to the issuing offices, or in large population centers, or where examiners and Fraud Prevention Managers had reported problems, or in high-fraud areas. Larger issuing offices in particular have trouble reaching acceptance agency staff. At one larger issuing office with about 1,700 acceptance facilities, the Fraud Prevention Manager said he does not have time to provide acceptance agent training and that it is difficult for issuing office staff to visit many agencies. A manager at another large issuing office that covers an area including 11 states said she does not have time to visit some agencies in less populated areas. While State officials told us all acceptance agency staff must be U.S. citizens, issuing agency officials told us they have no way of verifying that all of them are. Management officials at one passport-issuing office told us that, while their region included more than 1,000 acceptance facilities, the office did not maintain records of the names of individuals accepting passport applications at those facilities. We recommended that State strengthen its fraud prevention training efforts and oversight of passport acceptance agents. In commenting on a draft of our report, State said that it is adapting and expanding computer-based training for U.S. Postal Service acceptance facilities for more widespread use among acceptance agents nationwide. State also indicated that it would institute a nationwide quality review program for its acceptance facilities. However, State officials recently told us that the quality reviews would focus only on new acceptance facilities and existing facilities with reported problems. It is unclear whether State will perform quality reviews for the rest of its nearly 7,000 facilities. Although State’s Bureau of Diplomatic Security has provided additional resources for investigating passport fraud in recent years, its agents must still divide their time among a number of competing demands, some of which are considered a higher priority than investigating passport fraud.
A Diplomatic Security official told us that, after the September 11th terrorist attacks, the bureau hired about 300 additional agents, at least partially to reduce investigative backlogs. Diplomatic Security and passport officials told us that, while the increased staff resources had helped reduce backlogs to some degree, agents assigned to passport fraud investigations are still routinely pulled away for other assignments. At most of the offices we visited, few of the agents responsible for investigating passport fraud were actually there. At one office, all of the agents responsible for investigating passport fraud were on temporary duty elsewhere, and the one agent covering the office in their absence had left his assignment at the local Joint Terrorism Task Force to do so. Agents at one office said that five of the eight agents involved in passport fraud investigations there were being sent for temporary duty in Iraq, as were many of their colleagues at other offices. Agents at all but 2 of the 7 bureau field offices we visited said they are unable to devote adequate time and continuity to investigating passport fraud. We noted that the number of new passport fraud investigations had declined by more than 25 percent over the last five years, though Diplomatic Security officials attributed this trend, among other factors, to refined targeting of cases that merit investigation. The Special Agent in Charge of a large Diplomatic Security field office in a high-fraud region expressed serious concern that, in 2002, the Bureau of Diplomatic Security began requiring, to reduce the backlog of old cases, that most cases be closed after 12 months, whether or not the investigations were complete. The agent said that about 400 incomplete cases at his office were closed. A Diplomatic Security official in Washington, D.C., told us that, while field offices had been encouraged to close old cases that were not likely to be resolved, there had not been a formal requirement to do so. State officials agreed that Diplomatic Security agents are not able to devote adequate attention to investigating passport fraud, and told us that the Bureau of Diplomatic Security plans to hire 56 new investigative agents over the next few years. According to State officials, these new investigators will be solely dedicated to investigating passport and visa fraud and will not be pulled away for other duty. Although State’s approach to developing new nationwide passport examiner production standards, implemented in January 2004, raises methodological concerns, subsequent changes to the standards make an assessment of their impact on fraud detection premature. State developed new nationwide passport examiner production standards in an effort to make performance expectations and work processes more uniform among its 16 issuing offices. However, State tested examiner production before standardizing the passport examination process; differences in work processes across offices at the time of the test limited the validity of the test results. State then used the results in conjunction with old standards to set new nationwide standards. The new standards put additional emphasis on achieving quantitative targets. Responding to concerns about their fairness due to changes that may have slowed the examination process, as well as concerns that the new standards led examiners to take “shortcuts” in the examination process to meet their number targets, State made a number of modifications to the production standards during the year.
The various modifications have made it unclear what impact the standards have had on passport fraud detection. Madam Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. If you or your staff have any questions about this testimony, please contact Jess Ford at (202) 512-4128 or [email protected], or Michael Courts at (202) 512-8980 or [email protected]. Individuals making key contributions to this testimony included Jeffrey Baldwin-Bott, Joseph Carney, Paul Desaulniers, Edward Kennedy, and Mary Moutsos. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. Maintaining the integrity of the U.S. passport is essential to the State Department's efforts to protect U.S. citizens from terrorists, criminals, and others. State issued about 8.8 million passports in 2004. During the same year, State's Bureau of Diplomatic Security arrested about 500 individuals for passport fraud, and about 300 persons were convicted. Passport fraud is often intended to facilitate other crimes, including illegal immigration, drug trafficking, and alien smuggling. GAO examined (1) how passport fraud is committed, (2) what key fraud detection challenges State faces, and (3) what effect new passport examiner performance standards could have on fraud detection. Using the stolen identities of U.S. citizens is the primary method used by those fraudulently applying for U.S. passports. False claims of lost, stolen, or damaged passports and child substitution are among the other tactics used. Fraudulently obtained passports can help criminals conceal their activities and travel with less scrutiny. Concerns exist that they could also be used to help facilitate terrorism. State faces a number of challenges to its passport fraud detection efforts, and these challenges make it more difficult to protect U.S. citizens from terrorists, criminals, and others. Information on U.S. citizens listed in the federal government's consolidated terrorist watch list is not systematically provided to State. Moreover, State does not routinely obtain from the Federal Bureau of Investigation (FBI) the names of other individuals wanted by federal and state law enforcement authorities. We tested the names of 67 federal and state fugitives and found that 37, over half, were not in State's Consular Lookout and Support System (CLASS) database for passports. One of those not included was on the FBI's Ten Most Wanted list. State does not maintain a centralized and up-to-date fraud prevention library, hindering information sharing within State. Fraud prevention staffing reductions and interoffice workload transfers resulted in fewer fraud referrals at some offices, and insufficient training, oversight, and investigative resources also hinder fraud detection efforts. Any effect that new passport examiner performance standards may have on State's fraud detection efforts is unclear because State continues to adjust the standards. State began implementing the new standards in January 2004 to make work processes and performance expectations more uniform nationwide.
Passport examiner union representatives expressed concern that new numerical production quotas may require examiners to "shortcut" fraud detection efforts. However, in response to union and examiner concerns, State eased the production standards during 2004 and made a number of other modifications and compromises.
Within broad federal requirements under Title XIX of the Social Security Act, each state administers and operates its Medicaid program in accordance with a state Medicaid plan, which must be approved by CMS. A state Medicaid plan (1) describes the groups of individuals to be covered and the methods for calculating payments to providers; (2) establishes criteria and requirements for providers to be eligible to receive payments; (3) describes the categories of services covered, such as inpatient hospital services, nursing facility services, and physician services; and (4) must be approved by CMS in order for the state to receive matching funds for the federal share of Medicaid payments it makes. Any changes a state wishes to make in its Medicaid plan, such as establishing new Medicaid payments to providers or changing methodologies for determining provider payment rates, must be submitted to CMS for review and approval as a state plan amendment. Federal matching funds are available to states for different types of payments that states make. For regular, claims-based payments made directly to providers that have submitted bills for services rendered, states pay the providers based on established payment rates for the services provided. For supplemental payments, states generally make monthly, quarterly, or annual lump sum payments or may include the payments as adjustments to regular, claims-based payments. Supplemental payments include Disproportionate Share Hospital (DSH) payments, which states are required by federal law to make to certain hospitals. These payments are designed to help offset these hospitals’ uncompensated care costs for serving large numbers of Medicaid and uninsured low-income individuals. Many states also make other supplemental payments that are not required under federal law. These payments include Medicaid UPL supplemental payments, which are Medicaid payments that are above the regular Medicaid payments but within the UPL, defined as the estimated amount that Medicare would pay for comparable services. UPL supplemental payments, like regular claims-based payments, must be made for allowable Medicaid expenditures and must comply with applicable federal requirements. Regular and UPL supplemental payments are not limited to providers’ costs of delivering Medicaid services; however, as Medicaid payments, they are intended to pay for Medicaid-covered services provided to Medicaid beneficiaries and must by law be economical and efficient. States may also make other supplemental payments to hospitals, nursing facilities, and other providers authorized under Medicaid demonstrations. (See app. II for information on our past concerns about Medicaid supplemental payments.) The Medicaid UPL is a ceiling on the amount of federal matching funds a state may receive for Medicaid payments; it is based on the amount that Medicare would pay for similar services. Because states’ regular payments are often lower than what Medicare would pay for comparable services, states are able to make UPL supplemental payments, which are separate from and in addition to regular payments, and the federal government will share in those payments up to the maximum amount allowed under the UPL. (See fig. 1.) The UPL is not a provider-specific limit but instead is applied on an aggregate basis for certain provider ownership types and categories of services. Specifically, the UPL is applied on an aggregate basis to the three ownership types—local government, state government, and private. 
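Because the UPL is an aggregate test, a simple illustration may help. The sketch below, which uses entirely hypothetical hospitals and dollar amounts rather than any state's actual UPL demonstration, compares estimated Medicare-equivalent payments with regular Medicaid payments for one ownership type and one category of service to show how much room remains for UPL supplemental payments.

```python
# Illustrative only: hypothetical hospitals and dollar amounts, not actual state data.
# The UPL is applied in the aggregate for an ownership type (e.g., local government)
# and a category of service (e.g., inpatient hospital services), not hospital by hospital.

local_government_hospitals = {
    # hospital: (estimated Medicare-equivalent payment, regular Medicaid payments)
    "Hospital A": (120_000_000, 90_000_000),
    "Hospital B": (60_000_000, 55_000_000),
    "Hospital C": (40_000_000, 45_000_000),  # one hospital's regular payments may exceed its Medicare equivalent
}

medicare_equivalent_total = sum(m for m, _ in local_government_hospitals.values())
regular_medicaid_total = sum(r for _, r in local_government_hospitals.values())

# Aggregate "UPL room": the most the state could pay this ownership group in UPL
# supplemental payments for this service category and still claim federal matching funds.
upl_room = max(medicare_equivalent_total - regular_medicaid_total, 0)

print(f"Medicare-equivalent total: ${medicare_equivalent_total:,}")
print(f"Regular Medicaid payments: ${regular_medicaid_total:,}")
print(f"Maximum UPL supplemental payments (aggregate): ${upl_room:,}")
```

In this hypothetical case the aggregate room is $30 million, and because the test is not provider specific, supplemental payments within that room could be concentrated on a few hospitals, which is one reason payments to individual hospitals are examined later in this report.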
Separate UPLs exist for providers of inpatient hospital services, outpatient hospital services, nursing facility services, and physician and other practitioner services, and for services provided in intermediate care facilities for the developmentally disabled (ICF/DD). To obtain federal funding for both regular and supplemental payments, states submit their estimated aggregate expenditures by type of service to CMS each quarter for an upcoming quarter. After CMS has approved the estimate, it makes federal funds available to the state for the purpose of making Medicaid payments during the upcoming quarter. States typically make Medicaid payments to providers with a combination of nonfederal funds and federal funds. Within 30 days of the end of each quarter, states are required to submit their actual expenditures for the quarter on the standardized form CMS-64. CMS uses the CMS-64 data, which aggregates states’ expenditures, to reconcile actual expenditures with states’ estimates. CMS-64 expenditure reports on Medicaid payments show provider ownership for 10 percent of total Medicaid payments made in federal fiscal year 2011. Each quarter, states submit their total Medicaid payments on the CMS-64 expenditure reports by more than 70 categories of service. The expenditure reports capture aggregate state expenditures and are not intended to collect provider-specific payment information. Provider ownership information is reported for 6 categories of service for UPL supplemental payments and 2 categories of service for regular payments, accounting for $40 billion, or 10 percent, of the $414 billion in Medicaid payments in federal fiscal year 2011. Of the $40 billion in CMS-64 expenditure data that is reported by provider ownership, payments to government providers accounted for $21 billion, or 52 percent, and payments to private providers accounted for the remaining $19 billion, or 48 percent. (See fig. 2.) Because states report their CMS-64 expenditure data at an aggregate state level and not by provider or by claim, we could not determine the extent to which the difference in payments to government providers versus private providers was due to a higher volume of services provided or a larger number of providers in the ownership group. Assessing Medicaid payments to individual hospitals was hampered by insufficient data. In two states with reliable data, Illinois and New York, our estimates of average daily payments made to government and private hospitals showed inconclusive trends, but also identified that a small number of government hospitals were receiving high payments that warrant oversight. For example, some selected hospitals in each of these states received Medicaid payments in excess of total operating costs. Our assessment of Medicaid payments to individual hospitals in three selected states—California, Illinois, and New York—was hampered by inaccurate and incomplete state data on Medicaid payments and CMS claims data on Medicaid payments and days of service. States must capture and report payment data to CMS, but the data needed to compare payments by individual provider and provider ownership are not specifically required. Despite extensive work we conducted in California to obtain and analyze Medicaid claims and UPL supplemental payment data, we were unable to compare individual hospitals’ daily payments by hospital ownership for inpatient hospital services. This was because California lacked reliable data to enable an assessment of Medicaid payments made to individual hospitals.
The data California provided on its Medicaid supplemental payments and hospital ownership, neither of which are reported in MSIS, were not usable due to inconsistent hospital identification numbers—including state identification numbers and National Provider Identifiers (NPI)—payment amounts that changed in different versions of the data, and missing hospital ownership information. For example, the California Medicaid officials provided their supplemental payments in over 20 different spreadsheets, each of which represented a different type of payment and included, by hospital, a hospital identification number and the payment amount. However, the spreadsheets used different types of hospital identification numbers among different spreadsheets, and the California officials were unable to provide a crosswalk of the different identification numbers. As a result, hospital payments listed on multiple spreadsheets could not be matched to determine how much in supplemental payments the state was paying the individual hospitals, and could not be matched with the MSIS claims data by hospital. Although data provided by Illinois and New York were sufficiently reliable for assessing certain Medicaid payments to individual providers for inpatient hospital services, both states had Medicaid payment gaps that precluded assessment of all Medicaid payments the states made. In Illinois, 3 of the 21 local government hospitals in the state received large supplemental Medicaid payments that are based on criteria outlined in the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA)—referred to as BIPA payments. These payments, which Illinois makes annually, are significant—totaling nearly $750 million annually. Because these BIPA payments are not for specified Medicaid services or related to the cost of providing Medicaid services, we did not include them when determining daily payments. Similarly, states are not required to report DSH supplemental payments by the uncompensated care costs related to Medicaid patients versus uninsured patients. Therefore, we did not include them when determining daily payments. Illinois, in state fiscal year 2011, paid $335 million in DSH supplemental payments, of which $304 million was paid to 3 local government hospitals, $27 million was paid to the state’s 1 state government hospital, and $4 million was paid to 38 private hospitals. New York, in state fiscal year 2011, paid over $2 billion in these payments, of which over $1 billion was paid to 20 local government hospitals, $250 million was paid to 5 state government hospitals, and $670 million was paid to 158 private hospitals. Because it was unclear what portion of these payments was related to the cost of providing uncompensated care related to Medicaid patients versus uninsured patients, we did not include them when determining Medicaid payments by hospital ownership. In addition, claims data for both Illinois and New York could not be used for analyzing payments to individual providers for outpatient hospital, nursing facility, and ICF/DD services. For outpatient hospital services, available claims data did not provide sufficient information to determine the number of outpatient visits. Some of the outpatient claims were for bundled services—that is, services that were provided over a series of visits—and, therefore, we could not calculate outpatient payments on a per-visit basis.
For both nursing facility and ICF/DD services, available claims data were not reliable for determining the number of days of service provided. Adjustment claims for these services only reported adjustments to the payments and did not indicate the days of service that were similarly affected. As a result, we could not determine an accurate number of days of service for each provider and, therefore, could not calculate daily payments. For inpatient services provided by 193 hospitals in Illinois in state fiscal year 2011, government hospitals’ and private hospitals’ average daily payments were comparable. In comparing these regular and UPL supplemental payments by hospital ownership, we adjusted the regular payments for differences in the conditions of the patients treated by the hospitals, commonly referred to as “case-mix” adjustment. The average daily payment was highest for the state government hospital at $2,666, compared to $2,639 for the local government hospitals, and $2,620 for private hospitals. While government and private hospitals had comparable average daily payments for inpatient services, the daily payments for the individual hospitals within the ownership groups were wide-ranging. For example, the daily payments for local government hospitals ranged from $552 to $9,822, compared to $754 to $11,239 for private hospitals. (See fig. 4.) These varying daily payments make it difficult to draw conclusions about payment differences by hospital ownership, but helped in the identification of individual hospitals with significantly higher daily payments compared to other hospitals. (See app. I for information on the methodology used for comparing average daily payments for inpatient hospital services in Illinois by hospital ownership, and app. IV for more detailed information on Illinois’s Medicaid payments for inpatient hospital services by hospital ownership, including the average, median, and range of the daily payments.) When we compared the Medicaid inpatient payments—regular and UPL supplemental payments—for the hospitals with the highest daily payments to the costs of providing those services, estimated using cost reports prepared by hospitals, we found that six of the seven selected hospitals had total Medicaid inpatient payments that exceeded those hospitals’ total costs of providing these services. The three local government hospitals and three private hospitals had Medicaid inpatient hospital payments that exceeded costs, ranging from about $273,000 to about $18 million over costs. The one state hospital had payments that were $4 million less than costs, with $124 million in payments compared to $128 million in costs. (See fig. 5.) Illinois Medicaid officials attributed the variation in the extent to which the inpatient payments exceeded costs to various factors. For example, except for the local government hospital with payments about $655,000 more than costs, the hospitals that had payments in excess of costs received regular Medicaid inpatient payments that were predetermined rates based on a patient’s diagnosis—a Diagnosis Related Group system—and were not based on costs. The Diagnosis Related Group payment method is intended to provide incentives for hospitals to lower costs.
In addition, the officials told us that these hospitals, including the local government hospital, received UPL supplemental payments that were calculated based on 2005 data—a year in which the hospitals provided a higher volume of Medicaid inpatient services, which resulted in larger UPL supplemental payments. For the selected hospitals in Illinois, we also compared Medicaid payments and other supplemental payments to the hospitals’ total operating costs for all services and all patients and found that one of the local government hospitals received Medicaid payments that exceeded the hospital’s total operating costs. For this comparison, in addition to regular inpatient and inpatient UPL supplemental payments, we included DSH supplemental payments and BIPA Medicaid supplemental payments. We found that these Medicaid payments to this hospital totaled $907 million, while total operating costs were $540 million. The hospital’s BIPA Medicaid supplemental payments were the cause of payments exceeding total operating costs. According to the Illinois officials, the BIPA Medicaid supplemental payments are payments that the state is authorized to make under federal law. (See app. I for information on the methodology used for comparing the selected Illinois hospitals’ Medicaid payments for inpatient services to the costs of providing those services and to total operating costs, and app. V for more detailed information on the hospitals’ Medicaid payments for inpatient services, the costs of providing those services, and total operating costs.) For inpatient services provided by 201 hospitals in New York in state fiscal year 2011, government hospitals had higher average daily payments than private hospitals. Local government hospitals had the highest average daily payment for the case-mix-adjusted regular and UPL supplemental payments at $1,514, compared to $933 for private hospitals. However, the local government hospitals’ high average daily payment was due primarily to two hospitals receiving a total of $416 million in UPL supplemental payments, inflating the average payment for all local government hospitals. The individual hospitals’ daily payments varied widely within each of the hospital ownership groups. For example, the local government hospitals’ daily payments ranged from $198 to $9,176, compared to $144 to $3,413 for private hospitals. The varying daily payments make it difficult to draw conclusions about payment differences by hospital ownership, but helped in identifying individual hospitals with significantly higher daily payments compared to other hospitals. According to New York officials, these daily payments may have varied because of a variety of factors, including the geographic location of a hospital. (See app. I for information on the methodology used for comparing average daily payments for inpatient hospital services in New York by hospital ownership, and app. VI for more detailed information on the state’s Medicaid payments for inpatient hospital services by hospital ownership, including the average, median, and range of the daily payments.) (See fig. 6.)
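To make the daily-payment comparisons described above more concrete, the sketch below shows one way case-mix-adjusted average daily payments by ownership group could be computed from claims-level data. The column names, case-mix values, payment amounts, and the use of division by a case-mix index are all hypothetical simplifications for illustration; they are not a reproduction of the methodology described in appendix I.

```python
# Simplified sketch of a daily-payment comparison; all data and column names are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "hospital_id":     ["H1", "H1", "H2", "H3"],
    "ownership":       ["local government", "local government", "private", "state government"],
    "regular_payment": [450_000.0, 300_000.0, 500_000.0, 620_000.0],
    "days_of_service": [300, 200, 350, 280],
    "case_mix_index":  [1.25, 1.25, 0.95, 1.40],  # hypothetical severity adjustment per hospital
})

# Hypothetical annual UPL supplemental payments by hospital (not reported on claims).
upl_supplemental = {"H1": 150_000.0, "H2": 50_000.0, "H3": 200_000.0}

per_hospital = (claims
                .assign(adjusted_regular=lambda d: d["regular_payment"] / d["case_mix_index"])
                .groupby(["hospital_id", "ownership"], as_index=False)
                .agg(adjusted_regular=("adjusted_regular", "sum"),
                     days=("days_of_service", "sum")))

per_hospital["upl_payment"] = per_hospital["hospital_id"].map(upl_supplemental).fillna(0.0)

# Daily payment = (case-mix-adjusted regular payments + UPL supplemental payments) / days of service.
per_hospital["daily_payment"] = (per_hospital["adjusted_regular"]
                                 + per_hospital["upl_payment"]) / per_hospital["days"]

print(per_hospital[["hospital_id", "ownership", "daily_payment"]].to_string(index=False))
print(per_hospital.groupby("ownership")["daily_payment"].mean())
```

A calculation of this kind yields both hospital-level daily payments, which can flag outliers, and ownership-group averages, which is why a few hospitals with very large supplemental payments can pull up an entire group's average.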
It was not within the scope of our review to examine how the payments returned to the corporation were used, the extent to which they were redistributed among the corporation’s hospitals or other facilities, or whether the redistribution of excessive UPL payments is consistent with federal DSH limits or CMS policy regarding provider retention of Medicaid payments. However, at the conclusion of this review, we brought these practices to the attention of CMS officials for their consideration. Officials agreed the payment arrangement may warrant further review. CMS’s oversight of Medicaid payments to individual hospitals and other institutional providers is limited. The agency does not collect provider-specific payment and ownership information and lacks a policy and standard process for determining whether Medicaid payments to individual providers are economical and efficient. As a result, excessive state payments to individual providers may not be identified or examined by CMS. CMS does not collect sufficient information on payments to enable it to assess payments for individual providers, which would allow the agency to ensure that payments are appropriately spent for Medicaid purposes. CMS collects information on states’ Medicaid payments from its review of state plan amendment proposals and two payment data systems. However, CMS does not collect comprehensive information on provider-specific payments through these sources. As a result, it cannot identify or assess total Medicaid payments received by individual providers and the extent to which they differ among providers for similar services, and cannot review significant differences in payments among providers. In addition, CMS cannot determine whether payments to individual providers are consistent with the Medicaid criteria of efficiency and economy. Federal agencies should collect accurate and complete data to monitor programs they oversee. Information describing proposed Medicaid payments and related methodologies that states submit to CMS is not adequate to provide data for assessing and overseeing Medicaid payments, including those to government providers. CMS must review and approve state plan amendments before a state can make payments and claim the federal share of the payments. However, according to CMS officials, while states lay out criteria for who qualifies for payment and how payments are calculated in their state plan amendments, they are not required to offer more details, such as information on which providers will receive payments. In addition, because CMS asks states to submit comprehensive descriptions of their payment methodologies, state plan amendment language describing a state’s methodology for determining Medicaid payments can be complex and technical, without offering specific details on the payments that will result from the payment methods. As an example, language in a New York state plan amendment for state fiscal year 2011 UPL supplemental payments for inpatient services to local government hospitals identified the total amount authorized to be paid in UPL supplemental payments, but did not identify the amounts paid to individual hospitals. Lacking these details, CMS cannot rely solely on reviews of state plan amendments to assess whether payments to specific providers are meeting Medicaid criteria of economy and efficiency.
CMS's two ways of collecting Medicaid payment information—the Medicaid Statistical Information System (MSIS), a data collection system, and the CMS-64, a quarterly expense report used to provide federal matching funds for state Medicaid expenditures—do not collect complete information on payments to government and private providers. MSIS is CMS's national eligibility and claims data system and is the agency's only source of provider-specific payment data reported by states. However, states are not required to report in MSIS provider ownership information or UPL supplemental payments that are not paid on claims. As a result, analyzing payments by provider ownership groups is not possible, and assessing total payments by provider is complicated by the fact that the UPL supplemental payments, which can be significant, are not reported in MSIS. For example, according to state data, in state fiscal year 2011, Illinois and New York made about $2 billion and $3 billion, respectively, in UPL supplemental payments that were not reported in MSIS. The CMS-64 was not designed to capture provider-specific information; it provides aggregate payment amounts and does not have provider-specific payment or ownership information. As mentioned previously, it captures total payments by provider ownership for a few payment types, representing 10 percent of total Medicaid payments made in federal fiscal year 2011. More recently, another source of provider-specific payment information, including UPL supplemental payments, became available for certain providers, but it too provides limited information. Beginning in 2010, states have been required to submit audited reports annually on any hospital receiving DSH supplemental payments. Information that states are required to report separately for each DSH hospital includes the hospital's Medicaid costs and all Medicaid payments—regular, DSH supplemental, and UPL supplemental. However, this reporting is not required for hospitals that are not eligible to receive DSH supplemental payments. Recognizing the need for better data from the states, CMS began implementing two initiatives in 2013. The first initiative, to improve its oversight of the Medicaid UPL and state UPL supplemental payments, requires additional state reporting, but gaps remain. Beginning in June 2013, states were required to annually submit to CMS documentation of their Medicaid UPL calculations and provider-specific payment information. Previously, CMS had performed reviews of UPL calculations only when a state submitted a proposal to revise existing payments or add new payments in its state plan. Despite the new guidance and new reporting requirements, data gaps and challenges remain that limit CMS's ability to oversee payments. CMS has not specified a standardized data reporting format, including the key data states should report on providers and payments, such as NPIs for each provider and actual supplemental payments. As a result, some states may not report actual supplemental payments they make and, without NPIs, CMS is currently unable to merge UPL supplemental payments with regular claims-based payment data in MSIS. CMS's second initiative, to improve MSIS, is intended to collect provider-specific ownership and supplemental payment information. CMS is developing the Transformed Medicaid Statistical Information System (T-MSIS)—an enhanced Medicaid data system—to replace MSIS.
T-MSIS will require states to report additional information to CMS that is not currently collected in MSIS, including provider-specific information on The agency supplemental payments received and provider ownership. has cited T-MSIS as a key tool for providing the federal government and states with better information with which to manage and monitor Medicaid program integrity, including identifying waste, fraud, and abuse.However, there is uncertainty about when T-MSIS will be operational. In December 2014, CMS officials reported that the agency was still working on stabilizing its data systems to begin accepting state claims data through T-MSIS as states pass testing and are found by CMS to be ready to transition to T-MSIS. In December 2014, 18 states were in the final testing phases, and, depending on the nature of remaining issues with their data, these states could be ready for full implementation in 2 months. However, officials were uncertain when all states would be capable of reporting claims and payment information via T-MSIS. In addition, it is uncertain when states will be able to report all of the new data required under T-MSIS. According to CMS officials, some states have had problems reporting some of this information, particularly provider ownership information. Officials were also uncertain about whether all of the issues we encountered with the existing claims data submitted by states through MSIS would be addressed when T-MSIS was fully operational. For example, when we reported that some states were reporting state-assigned provider numbers rather than NPIs, reporting multiple NPIs for one provider, or reporting incorrect and inaccurate NPIs, officials said that under T-MSIS there will be a cross-walk between provider NPIs and state-issued provider identification numbers that states use in processing claims. However, beyond looking for obvious errors in formatting of the NPI numbers, such as incorrect values or provider numbers that are too short or too long, CMS will not identify erroneous NPI numbers. Officials said errors involving providers with multiple NPIs or NPIs assigned to the wrong provider are identified when the data are analyzed for oversight and monitoring purposes. In addition to these two initiatives, CMS officials told us they are also considering ways to improve data for overseeing payments at the provider level. As part of this effort, in May 2014, CMS contracted a study to, among other things, (1) analyze documentation on regular and UPL supplemental payments that states began submitting in 2013 to determine opportunities for improvement in CMS oversight; (2) store that information in a standardized format to enable analysis to be performed at both the aggregate and the provider-specific levels; and (3) assess the utility of T-MSIS data for the purpose of assisting CMS oversight of Medicaid UPL payments. The officials expect to receive the first report from the study in early 2015, and based on this report, will determine any additional actions the agency will take to enhance the information it collects for oversight purposes. CMS cannot ensure that Medicaid payments to individual providers are economical and efficient because the agency does not have a standard policy delineating criteria for when payments made to individual providers are economical or efficient, nor does it have a process to identify payments to individual providers that appear questionable. 
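To make concrete why usable NPIs matter for combining supplemental payments with claims data, the Python sketch below shows one way provider-level payments might be merged and screened for obvious formatting problems. It is an illustration only, not CMS's actual process; the records, field names, and the 10-digit format check (which omits check-digit validation) are assumptions for this example.

```python
# Illustrative sketch (not CMS's process): merge provider-level UPL supplemental
# payments into claims-based payment totals keyed by NPI, and screen for the
# kinds of identifier problems described above. Records are hypothetical.

import re

claims_payments = {            # NPI -> regular, claims-based payments (MSIS-like data)
    "1234567893": 42_000_000,
    "1992739871": 18_500_000,
}
upl_supplemental = [           # state-reported supplemental payments
    {"npi": "1234567893", "amount": 25_000_000},
    {"npi": "99-123", "amount": 6_000_000},      # malformed identifier
]

def npi_format_ok(npi: str) -> bool:
    """Basic format screen only: 10 digits (a fuller check would verify the check digit)."""
    return bool(re.fullmatch(r"\d{10}", npi))

total_by_provider = dict(claims_payments)
unmergeable = []
for payment in upl_supplemental:
    npi = payment["npi"]
    if not npi_format_ok(npi) or npi not in total_by_provider:
        unmergeable.append(payment)              # cannot be merged without a usable NPI
    else:
        total_by_provider[npi] += payment["amount"]

print(total_by_provider)
print("Unmergeable supplemental payments:", unmergeable)
```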
Instead, the agency reviews payment methodologies, relies on states to provide justification for unclear methodologies, and follows up on payments that are identified as questionable by reviews conducted by oversight agencies, such as HHS's Office of Inspector General. However, even when CMS identifies cases of payments to individual providers for further review, it does not have established criteria for determining whether these payments are economical and efficient. According to officials, to determine state compliance with the statutory requirement that Medicaid payments are economical and efficient, CMS primarily relies on ensuring that states comply with Medicaid's UPL regulations. The UPL regulations establish a ceiling on the amount of federal matching funds a state can claim. The UPL, which is based on how much Medicare would pay for the same service, is an aggregate limit that applies to groups of providers based on a category of service and provider ownership. While the UPL limits payments to a group of providers, it does not limit the amount of payment a particular provider can receive, provided the aggregate payment amount to the group does not exceed the UPL. CMS's focus on the aggregate UPL hinders its ability to determine whether payments to individual providers are economical and efficient, as states can comply with an aggregate UPL but target UPL supplemental payments to a small number of providers. To illustrate, CMS reviewed and approved a state plan amendment authorizing the state of New York to make more than $400 million in inpatient hospital UPL supplemental payments to qualifying local government hospitals. The UPL supplemental payment amount represented the difference between regular Medicaid payments to the 21 local government hospitals subject to the UPL and what Medicare would have paid for inpatient services to these hospitals in the aggregate. However, we found in July 2014 that the aggregate UPL supplemental payments the state estimated it could make based on the workload of all 21 local government hospitals in New York were actually made to only 2 of the 21 hospitals. In approving the state plan amendment authorizing the UPL supplemental payment, CMS determined that the payment would not exceed the applicable aggregate UPL. The state plan amendment did not specify the number or names of hospitals that were eligible for payments under the amendment, and CMS did not obtain information on which of the 21 local government hospitals would receive UPL supplemental payments. The state submitted hospital-specific information showing the difference between each hospital's estimated regular Medicaid payments and the UPL, which is what Medicare would pay for comparable services. The state used this information to calculate the aggregate UPL for the local government hospitals. Figure 8 compares the difference between regular payments and the UPL that New York estimated for each hospital to the actual amounts of UPL supplemental payments made. In addition, we found a similar concentration of UPL supplemental payments for outpatient hospital services made to local government hospitals in state fiscal year 2011. Specifically, CMS approved New York to make about $154 million in UPL supplemental payments for outpatient hospital services for the 21 local government hospitals. Similar to the case for UPL supplemental payments for inpatient services, the state made a UPL supplemental payment for outpatient hospital services to only one local government hospital.
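The aggregate nature of the UPL test can be illustrated with a short sketch. The hospitals and dollar amounts below are hypothetical; the point is that a distribution concentrating the entire supplemental amount on one provider still passes the group-level check that CMS applies at approval.

```python
# Illustrative sketch of the aggregate UPL logic described above (hypothetical
# figures): the limit is computed across a group of hospitals, so a state can
# stay under the aggregate ceiling while directing the entire supplemental
# amount to one or two providers.

hospitals = [
    # (hospital, estimated regular Medicaid payments, Medicare-equivalent payments)
    ("Hospital A", 300_000_000, 420_000_000),
    ("Hospital B", 150_000_000, 230_000_000),
    ("Hospital C", 250_000_000, 310_000_000),
]

# Aggregate UPL "headroom" = sum over the group of (Medicare equivalent - regular payments).
headroom = sum(medicare - regular for _, regular, medicare in hospitals)
print(f"Aggregate UPL headroom for the group: ${headroom:,}")

# A distribution that concentrates the entire amount on one hospital still
# satisfies the aggregate test applied at approval.
supplemental = {"Hospital A": headroom, "Hospital B": 0, "Hospital C": 0}
assert sum(supplemental.values()) <= headroom   # aggregate check passes

share_to_top_provider = max(supplemental.values()) / sum(supplemental.values())
print(f"Share of supplemental payments going to a single hospital: {share_to_top_provider:.0%}")
```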
CMS has recently taken actions to reduce the supplemental payment amounts paid to the three hospitals, indicating that the payments were excessive, but had not, as of January 2015, made a formal determination as to what payment amount would have been appropriate for the local government hospitals. According to CMS officials, because their reviews focus on the aggregate UPL, they were not aware of the distribution of these payments to specific hospitals. However, after we informed them of these payments, they initiated a review of the payments and, according to CMS officials, were in the process of working with the state to lower future payments the state would make to the three local government hospitals identified as receiving large supplemental payments in this review. As of January 2015, CMS had not provided details on the amount of payment reductions for the three hospitals. CMS officials told us they recognized the need for a strategy to oversee Medicaid payments to individual providers and the agency was considering ways to improve the agency’s oversight of Medicaid payments and payment limits, including how to better assess payments to individual providers. Medicaid represents significant expenditures for the federal government and states and is the source of health care for tens of millions of vulnerable individuals. Its long-term sustainability is critical, and will require effective federal oversight to ensure that Medicaid payments are economical and efficient, and are made for covered Medicaid items and services. The longstanding concerns we have raised about some states’ excessively large Medicaid payments to certain institutional providers continue. Further, our analysis showing the wide ranges in hospitals’ average daily payments, and high payments over costs to certain government and private hospitals, raises further questions about federal oversight of states’ payments to individual institutional providers, both government and private. Provider payments that are tens of millions of dollars, and in some cases hundreds of millions of dollars, greater than providers’ costs raise questions about whether such payments are consistent with economy and efficiency as required by law and the extent to which the payments are ultimately used for Medicaid purposes. Medicaid payments that exceed the total costs of operating the hospital raise, even further, questions as to their appropriateness. Moreover, the fact that CMS is largely unaware of the extent to which state Medicaid payments exceed Medicaid costs to certain providers highlights the shortcoming of its current approach to overseeing state Medicaid payments. To oversee state Medicaid payments to individual providers, CMS needs accurate and complete provider payment data, as well as a policy and process for reviewing payments made to individual providers. While CMS has taken some steps to improve payment data it receives from the states, it does not have the comprehensive data for oversight, and future data improvements are uncertain. In addition, CMS does not have a policy and process for assessing the economy and efficiency of payments at the provider level. Without good data on payments to individual providers, a policy and criteria for assessing whether the payments are economical and efficient, and a process for reviewing such payments, the federal government could be paying states hundreds of millions, or billions, more than what is appropriate. 
To improve CMS’s oversight of Medicaid payments, we recommend that the Administrator of CMS take the following three actions: Take steps to ensure that states report accurate provider-specific payment data that include accurate unique national provider identifiers (NPI). Develop a policy establishing criteria for when such payments at the provider level are economical and efficient. Once criteria are developed, develop a process for identifying and reviewing payments to individual providers in order to determine whether they are economical and efficient. To ensure the appropriateness of Medicaid payments to providers in New York, we recommend that the Administrator of CMS take the following fourth action: expedite the formal determination of the appropriateness of New York’s payment arrangements and ensure future payments to local government hospitals are consistent with all Medicaid requirements. We provided a draft of this report to HHS for comment. In its written response, HHS concurred with our recommendations and noted efforts to address them. HHS stated that it is evaluating ways to improve its oversight, including gathering information from states to better inform future policies. HHS noted that information being collected will better inform the agency regarding efforts to establish criteria, policies, and procedures to evaluate whether payments at the provider level are economical and efficient. HHS comments are reprinted in appendix VIII. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IX. To determine what the Centers for Medicare & Medicaid Services (CMS) Medicaid expenditure reports include about payments by provider ownership nationwide, we analyzed CMS’s quarterly Medicaid expenditure reports for federal fiscal year 2011—the most recent year for which complete data were available at the time of our analysis. To determine, for selected states, how state Medicaid payments to government hospitals compare to state Medicaid payments to private hospitals, we used federal Medicaid claims data and data provided by the states. To determine, in the selected states, how the state Medicaid payments to selected hospitals compare to the hospitals’ Medicaid costs and to hospitals’ total operating costs, we used the federal Medicaid claims data and data and Medicaid cost reports provided by the states. To examine what information CMS Medicaid expenditure reports include about payments by provider ownership nationwide, we used the quarterly Medicaid expenditure reports—referred to as the CMS-64—that states use to report their Medicaid expenditures for purposes of receiving federal matching funds. 
We determined based on the expenditure reports that states reported payments by provider ownership for six categories of service for upper payment limit (UPL) supplemental payments and two categories of service for regular payments. The six categories of service for UPL supplemental payments reported by provider ownership include (1) inpatient hospital, (2) outpatient hospital, (3) nursing facility, (4) physician and surgical, (5) other practitioner, and (6) intermediate care facilities for the developmentally disabled (ICF/DD). The two categories of service for regular payments that are reported by provider ownership include (1) ICF/DD and (2) school-based services. For each state, we compiled payments for the categories of service reported by provider ownership that were provided in federal fiscal year 2011—the most recent year for which complete data were available at the time of our analysis—by excluding those payments for services that were reported in federal fiscal year 2011 but were provided in prior years, and including payments for services provided in federal fiscal year 2011 but were reported in federal fiscal years 2012 or 2013. We used two main CMS-64 expenditure reports to compile this information. One report—the CMS-64 Base Report—includes payments for services provided in federal fiscal year 2011, as well as payments and adjustments for prior years. It does not include payments or payment adjustments for services provided in federal fiscal year 2011 that were reported in federal fiscal years 2012 or 2013. The other key report—the Financial Management Report Net Expenditure Reports—includes payments for services provided in federal fiscal year 2011 and includes payments made in 2011 that were for prior years. It also includes payments or payment adjustments for services provided in federal fiscal year 2011 that were reported in federal fiscal years 2012 or 2013. By using these two reports in combination, we determined total payments for services provided in federal fiscal year 2011 for the categories of service reported by provider ownership. For these six categories of service for UPL supplemental payments, we used more-detailed feeder forms for the two reports, which the states use to report the UPL supplemental payments by provider ownership. To assess the reliability of the CMS expenditure reports, we conducted interviews with CMS officials on how the agency uses the data and any known data reliability issues, reviewed related documentation, and conducted logic tests on the expenditure data. We determined that these data were sufficiently reliable for the purposes of our report. The results of this analysis were limited, however, in that states report their CMS-64 expenditure data at an aggregate state level and not by provider or by claim. Therefore, we could not determine the extent to which the difference in payments to government providers versus private providers was due to a higher volume of services provided or a larger number of providers in the ownership group. To determine how, in selected states, state Medicaid payments to government hospitals compare to state Medicaid payments to private hospitals, we selected a nongeneralizable sample of three states— California, Illinois, and New York. We selected these states based on the following criteria: having large Medicaid programs as determined by spending for Medicaid services, making large amounts of certain supplemental Medicaid payments, geographic diversity. 
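Before turning to the state-level analyses, the report-combination and filtering steps described above can be illustrated with a brief Python sketch. The record structure and field names below are assumptions standing in for extracts of the CMS-64 Base Report and the Financial Management Report, not CMS's actual file layouts.

```python
# Hypothetical records: each row is a reported expenditure line with the federal
# fiscal year in which the service was provided, the year in which it was
# reported, the category of service, provider ownership, and amount.
rows = [
    {"service_fy": 2011, "report_fy": 2011, "category": "inpatient_upl", "ownership": "local_gov", "amount": 5_000_000},
    {"service_fy": 2009, "report_fy": 2011, "category": "inpatient_upl", "ownership": "private",   "amount": 1_200_000},
    {"service_fy": 2011, "report_fy": 2012, "category": "inpatient_upl", "ownership": "private",   "amount":   800_000},
]

# Keep only payments for services provided in FFY 2011, regardless of the year
# in which they were reported (mirroring the inclusion/exclusion rules above).
ffy2011 = [r for r in rows if r["service_fy"] == 2011]

totals = {}
for r in ffy2011:
    key = (r["category"], r["ownership"])
    totals[key] = totals.get(key, 0) + r["amount"]

for (category, ownership), amount in sorted(totals.items()):
    print(f"{category:15s} {ownership:10s} ${amount:,}")
```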
We determined that for California the data needed for our analysis were not reliable and, therefore, we could not compare the state’s payments by provider ownership. For Illinois and New York we analyzed Medicaid payments for inpatient services provided in state fiscal year 2011 by three hospital ownership groups—local government, state government, and private. We analyzed payments for state fiscal year 2011 because it was the most recent year for which data on regular, claims-based payments were available. To compare Medicaid payments by hospital ownership in Illinois and New York, we combined federal inpatient hospital Medicaid claims data from the Medicaid Statistical Information System (MSIS)—the federal system through which states report Medicaid claims—with data provided by the states, which included additional payment data and hospital ownership information not included in MSIS. Specifically, From MSIS we compiled regular, claims-based payments for inpatient hospital services by identifying the states’ fee-for-service claims for services provided by hospitals, including general acute care, children’s, and cancer hospitals. We excluded psychiatric hospitals, all managed care claims, claims for patients covered by a separate State Children’s Health Insurance Program, and any Medicare “crossover” claims—where Medicare was the primary payer. We used all four quarters of MSIS claims from state fiscal years 2011 and 2012 to identify those claims where the beginning date of service indicated the service was provided in state fiscal year 2011. We adjusted the regular fee-for-service claims to account for differences in the conditions of the patients treated by the hospitals, commonly referred to as “case-mix” adjustment, using case-mix data provided by the states. From the states we obtained provider-specific UPL supplemental payments and hospital ownership information. In addition, we obtained provider-specific Disproportionate Share Hospital (DSH) supplemental payment amounts from both states, and payment amounts for an additional Medicaid supplemental payment made to certain Illinois hospitals under the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA). However, because it was unclear what portion of DSH supplemental payments was related to the cost of providing Medicaid services and because BIPA payments are not for specified Medicaid services or related to the cost of providing Medicaid services, we did not include these payments when calculating Medicaid payment amounts for government and private hospitals. We combined the inpatient MSIS payment and day data with the state- provided supplemental payment and hospital ownership data using unique hospital identification numbers, such as the National Provider Identifier (NPI)—a national, unique 10-digit identification number assigned to health care providers. After combining the MSIS and state-provided data, we performed two calculations. First, we calculated a Medicaid daily payment amount for each hospital by dividing the hospital’s total inpatient service payments (regular claims-based payments and UPL supplemental payments) by the total Medicaid days of inpatient services the hospital provided. Second, we calculated an average daily payment amount for each hospital ownership group by summing the daily payment amounts of every hospital in each ownership group and dividing it by the number of hospitals in the ownership group. 
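The two calculations described above can be expressed compactly; the sketch below uses hypothetical hospitals and combines regular claims-based payments with UPL supplemental payments, as in our analysis. Note that the group average is the mean of the hospitals' daily payment amounts, not total dollars divided by total days.

```python
# Illustrative sketch of the two daily-payment calculations, with hypothetical data.
from collections import defaultdict

hospitals = [
    {"name": "Hosp 1", "ownership": "local_gov", "payments": 90_000_000, "medicaid_days": 60_000},
    {"name": "Hosp 2", "ownership": "local_gov", "payments": 12_000_000, "medicaid_days": 20_000},
    {"name": "Hosp 3", "ownership": "private",   "payments": 45_000_000, "medicaid_days": 50_000},
]

# First calculation: a Medicaid daily payment amount for each hospital.
for h in hospitals:
    h["daily_payment"] = h["payments"] / h["medicaid_days"]

# Second calculation: an average daily payment for each ownership group
# (sum of the hospitals' daily payments divided by the number of hospitals).
by_group = defaultdict(list)
for h in hospitals:
    by_group[h["ownership"]].append(h["daily_payment"])

for group, daily_payments in by_group.items():
    print(f"{group}: average daily payment ${sum(daily_payments) / len(daily_payments):,.0f}")
```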
To ensure that hospitals with very low inpatient days were not skewing the average daily payment amounts, we excluded from this analysis the hospitals that had the lowest 5 percent of inpatient days in state fiscal year 2011. To assess the reliability of the MSIS claims data and data provided by the states, we reviewed relevant data documents and interviewed agency officials. For the MSIS data, we reviewed the CMS data dictionary and a report on identified issues with the state fiscal year 2011 MSIS claims, conducted logic tests, and interviewed CMS officials on how the data are used by the agency and any known data reliability issues. We also interviewed state Medicaid officials to determine how the states report their MSIS data to CMS. We determined that the MSIS data were reliable for our purposes. For the state-provided data on payments not reported in MSIS and on hospital ownership, we conducted logic tests and interviewed state Medicaid officials. While we determined through our assessments that the data provided by both Illinois and New York were reliable for our purposes, we determined that California’s state-provided data on payments not reported in MSIS and hospital identification numbers were not reliable and, therefore, we could not compare this state’s payments by provider ownership. To compare Medicaid payments for inpatient hospital services to Medicaid costs for these services in Illinois and New York, we selected hospitals that had the highest daily payment amounts in each of the three ownership groups in state fiscal year 2011. We selected seven hospitals in Illinois, because the state had only one state government hospital, and nine hospitals in New York. For each of the selected hospitals in both states, we compared Medicaid payments for inpatient services to Medicaid costs for inpatient services. We calculated the total Medicaid payments for inpatient services—regular and UPL supplemental payments—based on payment data from CMS’s Medicaid claims data and the state-provided data on supplemental payments. For purposes of comparing payments to costs, we did not case-mix-adjust the regular payments for differences in the conditions of the patients treated by the hospitals. To estimate Medicaid inpatient costs, we used inpatient Medicaid costs that each hospital reported to the state on standard cost reports for state fiscal year 2011. For the selected Illinois hospitals, the inpatient Medicaid costs were reported on the Medicaid cost report. For the selected New York hospitals, we determined inpatient Medicaid costs by first calculating the percentage of each hospital’s total inpatient days that were Medicaid inpatient days and then applying that percentage to the total inpatient service costs to get an initial Medicaid inpatient cost estimate. For the selected hospitals in both states, to account for differences between the days for inpatient services that were reported on the cost reports compared to the days reported in the CMS claims data, we calculated a daily Medicaid cost amount and then multiplied the daily cost amount by the number of days for inpatient services from CMS’s Medicaid claims data. To calculate the daily cost amount, we used the costs and days reported on the Medicaid cost reports; we divided each hospital’s total Medicaid inpatient costs by Medicaid total inpatient days. 
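The cost-estimation steps for the selected New York hospitals, and the day alignment applied in both states, follow the arithmetic sketched below. The figures are hypothetical; Illinois hospitals report Medicaid inpatient costs directly on the Medicaid cost report, so only the day-alignment step applies there.

```python
# Illustrative sketch of the cost-estimation steps described above for a
# New York hospital (hypothetical figures).

# From the hospital's cost report:
total_inpatient_costs = 400_000_000        # all inpatient services, all payers
total_inpatient_days = 200_000
medicaid_days_cost_report = 80_000

# Step 1 (New York): estimate Medicaid inpatient costs from the Medicaid share of days.
medicaid_share = medicaid_days_cost_report / total_inpatient_days
medicaid_inpatient_costs = medicaid_share * total_inpatient_costs

# Step 2 (both states): convert to a daily cost and apply it to the Medicaid days
# found in CMS's claims data, to align days across the two sources.
daily_medicaid_cost = medicaid_inpatient_costs / medicaid_days_cost_report
medicaid_days_claims_data = 78_500
estimated_medicaid_costs = daily_medicaid_cost * medicaid_days_claims_data

print(f"Estimated Medicaid inpatient costs: ${estimated_medicaid_costs:,.0f}")
```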
For each of the selected hospitals in both Illinois and New York, we also compared Medicaid payments for inpatient services and related supplemental payments to the hospital's total operating costs for all services and all patients. For the selected hospitals in Illinois, we included in this comparison regular inpatient and inpatient UPL supplemental payments, as well as Disproportionate Share Hospital (DSH) supplemental payments and an additional Medicaid supplemental payment that was authorized under BIPA. For the selected hospitals in New York, we included in this comparison the regular inpatient and inpatient UPL supplemental payments, as well as DSH supplemental payments. For both states, we did not include regular and supplemental payments for outpatient services because we were unable to analyze Medicaid payments for outpatient services. We identified each of the hospitals' total operating costs for all services and all patients on the hospital's cost report. The hospitals' Medicaid cost reports include costs and days for Medicaid, and also include total costs for all patients. To determine the reliability of the selected Illinois and New York hospitals' cost reports, we interviewed state Medicaid officials on how the cost data are compiled and used by the agency and whether there were any known data reliability issues. We also compared Medicaid costs and patient days from the selected hospitals' cost reports from state fiscal year 2009 to the hospital's DSH report—an independently audited report that states are required to submit to CMS annually for every hospital that receives a DSH supplemental payment—from state fiscal year 2009, the most recent year for which DSH reports were available. Based on these assessments, we determined that the cost report data were sufficiently reliable for our purposes. Over the past 20 years, we have reported a number of concerns about Medicaid payments—particularly supplemental payments—that states have made to a small number of providers. Specifically, we have found that by making large supplemental payments to providers that are concurrently supplying funds to the state for the nonfederal share (through such financing arrangements as providers' taxes and intergovernmental transfers), states have been able to obtain billions of dollars in additional federal matching funds without a commensurate increase in state funds used to finance the nonfederal share. For example, in 2004 and 2005 we found that some states' excessive payments to a few government providers facilitated the inappropriate shifting of state costs to the federal government. In addition, we found that a lack of uniform guidance on setting Medicaid payment limits and the flexibility given to states under existing federal rules concerning the distribution of supplemental payments allowed states to make large Medicaid payments to a few government providers. We also found that a lack of transparency in how such payments were made allowed for potentially inappropriate Medicaid payments to certain providers and hindered the ability of the Centers for Medicare & Medicaid Services (CMS) to oversee such payments. Table 1 summarizes past issues we have found regarding state Medicaid payments made to providers and actions taken by Congress and CMS to address these concerns.
Partially in response to concerns about excessive supplemental payments to government providers, CMS issued a proposed rule in early 2007 to limit state upper payment limit (UPL) supplemental payments to government providers to their cost of providing Medicaid services. However, concerns were raised that it would harm certain providers, and on May 24, 2007, Congress passed a one-year moratorium on the finalization or implementation of the proposed rule. CMS issued the rule in final form on May 25, 2007, the date on which the President signed the law containing the moratorium. In 2008, a federal district court found the agency's finalization of the rule violated the moratorium and vacated the rule, and CMS formally rescinded the rule in 2010. This appendix provides results of our analysis of Centers for Medicare & Medicaid Services (CMS) CMS-64 Medicaid expenditure reports for payments by provider ownership, both state-by-state and nationwide, for federal fiscal year 2011. Specifically, the appendix includes expenditures for Medicaid payments for the categories of service reported by provider ownership, including six categories of service for upper payment limit (UPL) supplemental payments and two categories of service for regular payments. Table 2 shows total Medicaid expenditures, expenditures reported by provider ownership, the percentage of total expenditures that were reported by provider ownership, expenditures for payments to government providers and private providers, and government provider expenditures and private provider expenditures as a percentage of total expenditures reported by provider ownership. Tables 3 through 8 show total UPL supplemental payments and the payments and related percentages by three provider ownership groups—local government, state government, and private—for the six categories of service for UPL supplemental payments that are reported by provider ownership. Tables 9 and 10 show total regular payments and the payments by the three provider ownership groups for the two categories of service for regular payments that are reported by provider ownership. This appendix provides the results of our analysis of Medicaid payments for inpatient services provided in state fiscal year 2011 in Illinois by hospital ownership. Table 11 shows, by hospital ownership, the Illinois hospitals' average daily payment, minimum and maximum daily payment, and median daily payment for regular and upper payment limit (UPL) supplemental payments combined. Table 12 shows, by hospital ownership, Illinois hospitals' state fiscal year 2011 inpatient service Medicaid regular payments, UPL supplemental payments, Disproportionate Share Hospital supplemental payments, and a third type of Medicaid supplemental payment that three local government hospitals received. This appendix provides the results of our analysis comparing seven selected Illinois hospitals' Medicaid payments for inpatient services to their Medicaid costs for inpatient services and total operating costs in state fiscal year 2011. Table 13 compares, for each of the seven selected Illinois hospitals, Medicaid payments for inpatient services—including regular payments, upper payment limit (UPL) supplemental payments, and the total regular and UPL supplemental payments—to total estimated Medicaid costs for providing inpatient services in state fiscal year 2011.
Table 14 compares, for the seven selected Illinois hospitals, Medicaid inpatient service payments to total estimated operating costs for all services and all patients for state fiscal year 2011. Medicaid payments include regular and UPL supplemental payments for hospital inpatient services, total Disproportionate Share Hospital supplemental payments, and Medicaid supplemental payments authorized under the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA). This appendix provides the results of our analysis of Medicaid payments for inpatient services provided in state fiscal year 2011 in New York by hospital ownership. Table 15 shows, by hospital ownership, the New York hospitals’ average daily payment, minimum and maximum daily payment, and median daily payment for regular and upper payment limit (UPL) supplemental payments combined. Table 16 shows, by hospital ownership, New York hospitals’ state fiscal year 2011 inpatient service Medicaid regular payments, UPL supplemental payments, and Disproportionate Share Hospital supplemental payments. This appendix provides the results of our analysis comparing nine selected New York hospitals’ Medicaid payments for inpatient services to their Medicaid costs for inpatient services and total operating costs in state fiscal year 2011. Table 17 compares, for each of the nine selected New York hospitals, Medicaid inpatient service payments—including regular payments, upper payment limit (UPL) supplemental payments, and total regular and UPL supplemental payments—to total estimated Medicaid costs for providing inpatient services in state fiscal year 2011. Table 18 compares, for the nine selected New York hospitals, Medicaid inpatient service payments to total estimated operating costs for all services and all patients for state fiscal year 2011. Medicaid payments include regular and UPL supplemental payments for hospital inpatient services and total Disproportionate Share Hospital supplemental payments. In addition to the contact named above, Tim Bushfield (Assistant Director), Pauline Adams, Elizabeth Conklin, Iola D’Souza, Julianne Flowers, Vikki Porter, Roseanne Price, and Sandra George made key contributions to this report. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Medicaid Financing: States’ Increased Reliance on Funds from Health Care Providers and Local Governments Warrants Improved CMS Data Collection. GAO-14-627. Washington, D.C.: July 29, 2014. Medicaid: Completed and Preliminary Work Indicates That Transparency around State Financing Methods and Payments to Providers Is Still Needed for Oversight. GAO-14-817T. Washington, D.C.: July 29, 2014. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013. Medicaid: More Transparency of and Accountability for Supplemental Payments Are Needed. GAO-13-48. Washington, D.C.: November 26, 2012. Medicaid: States Made Multiple Program Changes, and Beneficiaries Generally Reported Access Comparable to Private Insurance. GAO-13-55. Washington, D.C.: November 15, 2012. Medicaid: States Reported Billions More in Supplemental Payments in Recent Years. GAO-12-694. Washington, D.C.: July 20, 2012. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Medicaid: Ongoing Federal Oversight of Payments to Offset Uncompensated Hospital Care Costs Is Warranted. GAO-10-69. Washington, D.C.: November 20, 2009. 
Medicaid: CMS Needs More Information on the Billions of Dollars Spent on Supplemental Payments. GAO-08-614. Washington, D.C.: May 30, 2008. Medicaid Financing: Long-standing Concerns about Inappropriate State Arrangements Support Need for Improved Federal Oversight. GAO-08-650T. Washington, D.C.: April 3, 2008. Medicaid Financing: States’ Use of Contingency-Fee Consultants to Maximize Federal Reimbursements Highlights Need for Improved Federal Oversight. GAO-05-748. Washington, D.C.: June 28, 2005. Medicaid: Intergovernmental Transfers Have Facilitated State Financing Schemes. GAO-04-574T. Washington, D.C.: March 18, 2004. Medicaid: Improved Federal Oversight of State Financing Schemes Is Needed. GAO-04-228. Washington, D.C.: February 13, 2004. | Under Medicaid, a joint federal-state program, states pay health care providers and receive federal matching funds for their payments. States may have incentives to make excessive Medicaid payments to certain institutional providers such as hospitals operated by local governments. Medicaid payments are not limited to providers' costs, but federal law requires they be economical and efficient. Large payments that exceed costs raise questions as to whether the payments are for Medicaid purposes. GAO was asked to review state Medicaid payments to government providers compared to private, that is, for-profit and non-profit providers. GAO examined (1) in selected states, how state Medicaid payments to government hospitals compare to those made to private hospitals, and, for selected hospitals, to their Medicaid costs and total hospital operating costs; and (2) CMS oversight. GAO assessed hospital payments by ownership for three states selected in part based on size and geographic diversity, reviewed laws, regulations, guidance, and other documents, and interviewed CMS and state officials. GAO's assessment of Medicaid payments to government and private hospitals in three selected states was hampered by inaccurate and incomplete data on payments. States must capture but are not required to report all payments they make to individual institutional providers, nor are states required to report ownership information. For example, large supplemental payments states often make to hospitals are not reported by hospital. GAO assessed data for hospitals in two of three selected states, Illinois and New York; the third state, California, did not have accurate or complete payment data that would allow an assessment of total payments made to individual hospitals. In the two states, GAO's estimates of average daily payments—total payments adjusted for differences in patient health, divided by patient days—made to government and private hospitals showed inconclusive trends, but also identified that a small number of government hospitals were receiving high payments that warrant oversight. In Illinois, average daily payments for inpatient services were comparable for government and private hospitals, but these averages masked wide variations in daily payments for both types of hospitals. Daily payments ranged from less than $600 to almost $10,000 for local government hospitals and from $750 to over $11,000 for private hospitals. For seven hospitals with high daily payments, GAO examined how payments compared to each hospital's costs of providing Medicaid services as reported by the hospital in cost reports and found that six of the seven hospitals' Medicaid payments exceeded their Medicaid costs. 
In New York, average daily payments were higher for government hospitals than private hospitals, but as with Illinois these averages masked wide variations, with daily payments ranging from about $200 to over $9,000 for local government hospitals and from less than $200 to $3,400 for private hospitals. Four of nine selected government and private hospitals with high daily payments had Medicaid payments that exceeded Medicaid costs: two were local government hospitals that, all together, received payments exceeding their costs by nearly $400 million. One selected hospital in Illinois and two in New York had Medicaid payments that exceeded the local government hospitals' total operating costs, including costs associated with all services provided to all patients they served. Oversight of Medicaid payments to individual hospitals and other institutional providers, which is the responsibility of the Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services (CMS), is limited in part by insufficient information on payments and also by the lack of a policy and process for assessing payments to individual providers. CMS does not collect provider-specific payment and ownership information. CMS also lacks a policy and standard process for determining whether Medicaid payments to individual providers are economical and efficient. Excessive state payments to individual providers may not be identified or examined by CMS. For example, CMS's oversight mechanisms did not identify large overpayments to two New York hospitals until they were identified by GAO. CMS began reviewing the appropriateness of these payments during the course of GAO's review. GAO recommends that CMS take steps to ensure states report provider-specific payment data, establish criteria for assessing payments to individual providers, develop a process to identify and review payments to individual providers, and expedite its review of the appropriateness of New York's hospital payments. HHS concurred with the recommendations. |
Taxpayers have been allowed to deduct mortgage interest payments on their federal tax returns since Congress enacted the federal income tax in 1913. At that time, the deduction for home mortgage interest was part of the deduction allowed for any interest paid. The Tax Reform Act of 1986 limited deductions for nonbusiness interest. The 1986 act specifically disallowed deductions by individuals of personal interest but included a limited exception for qualified mortgage interest. The 1986 act was amended the next year to include dollar limits on the amount of home mortgage debt for which interest payments can be deducted. Taxpayers may deduct the interest they pay on loans secured by qualified homes—either their main home or their main home and a second home. These loans include first or second mortgages, home equity loans, and home equity lines of credit. Boats and recreational vehicles may qualify as homes if they have sleeping, cooking, and toilet facilities. As table 1 shows, two kinds of debt may qualify for the mortgage interest deduction. The first kind is acquisition debt, which is debt incurred in acquiring, constructing, or substantially improving a qualified home and that is secured by the home. Taxpayers may deduct all of the interest paid on acquisition debt incurred on or before October 13, 1987, known as grandfathered debt. Taxpayers’ interest deductions for debt on qualified homes purchased after October 13, 1987, is limited to the interest on $1 million of acquisition debt, reduced by any grandfathered debt. The second kind of qualified debt is home equity debt, which is any non- acquisition debt secured by the home. Its proceeds may be used for anything other than to buy, build, or substantially improve the home. For example, the proceeds may be used to pay off credit cards or finance cars or vacations. Home equity debt is limited to the home’s fair market value minus the acquisition debt and grandfathered debt, or to $100,000, whichever is less. Home equity debt for tax purposes should not be confused with what is commonly called a home equity loan. The latter may qualify as either acquisition debt or home equity debt depending on how the proceeds are used. Table 1 also summarizes the deductibility of points. A point is a term used to describe certain charges paid in connection with obtaining a mortgage and is calculated at 1 percent of a mortgage’s principal. Generally, points are considered prepaid interest, and, as such, the deduction is usually allocated over the life of the loan. However, if various conditions are met, the taxpayer may deduct the points in the year paid. Determining whether points are fully deductible in the year paid may require that taxpayers go through as many as 10 decision steps. Lending institutions and other entities engaged in a trade or business that receive $600 or more during the year in mortgage interest and certain points from an individual taxpayer are required to file a Form 1098, Mortgage Interest Statement. Form 1098 reports are sent both to the taxpayer and IRS to show how much mortgage interest was paid to the lender that year. Taxpayers report their deductible mortgage interest information to IRS on lines 10 through 13 of Schedule A, the schedule itemizing deductions for the Form 1040 individual income tax return (see app. II). Typically, the larger deductible amounts are reported on line 10, which shows home mortgage interest and points reported to taxpayers on Form 1098. 
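The debt limits summarized in table 1 can be sketched in a few lines of Python. The sketch below is a simplification for illustration only: it reflects the $1 million acquisition debt and $100,000 home equity debt limits as described in this report, and it ignores special cases such as married taxpayers filing separately, mixed-use loans, and the AMT treatment of home equity interest.

```python
# Simplified sketch of the qualified debt limits summarized in table 1
# (illustrative only; many special rules are omitted).

def qualified_debt_limits(acquisition_debt, grandfathered_debt, home_equity_debt, fair_market_value):
    # Grandfathered (pre-October 14, 1987) acquisition debt is fully qualified
    # and reduces the $1 million cap available for newer acquisition debt.
    qualified_acquisition = grandfathered_debt + min(
        acquisition_debt, max(0, 1_000_000 - grandfathered_debt))
    # Home equity debt is qualified up to the lesser of $100,000 or the home's
    # fair market value remaining after acquisition and grandfathered debt.
    equity_cap = min(100_000, max(0, fair_market_value - acquisition_debt - grandfathered_debt))
    qualified_home_equity = min(home_equity_debt, equity_cap)
    return qualified_acquisition, qualified_home_equity

print(qualified_debt_limits(acquisition_debt=1_200_000, grandfathered_debt=0,
                            home_equity_debt=150_000, fair_market_value=1_500_000))
# -> (1000000, 100000): interest is deductible only to the extent it is
#    attributable to these qualified debt amounts.
```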
Lines 11 and 12 of Schedule A are for claiming mortgage interest and points, respectively, not reported on Form 1098, and, starting in 2007, line 13 is used for deductible qualified mortgage insurance premiums. Some interest payments reported on Form 1098 may be deductible on other Form 1040 schedules, such as Schedule C for sole proprietorship businesses or Schedule F for farming. Changes in the housing market over the years may have affected the overall amount of home mortgage interest that taxpayers claimed. Table 2 shows the change in various indicators related to the mortgage interest deduction from 2001 to 2006. The rise in home prices and thus mortgage amounts, partly account for the large increases in mortgage interest deductions claimed by taxpayers in 2006 compared to 2001. In 2006, the median sales price for existing U.S. single-family homes was up from 2001. Median sales prices varied widely across metropolitan areas, with some locations exceeding $700,000. The home equity loan totals reported by the Federal Reserve Board also increased significantly from 2001 to 2006. According to the U.S. Census Bureau’s 2006 American Community Survey, about a fifth of owner-occupied housing units with a mortgage had a home equity loan. Also, according to the Joint Center for Housing Studies of Harvard University, 85 percent of homeowners who refinanced their mortgages in 2006 took cash out. Although IRS’s enforcement and research programs found some mortgage interest deduction compliance problems, the methods leave gaps in what is known about the extent and specific nature of noncompliance. The four main programs that IRS uses to enforce or research mortgage interest deduction compliance include the following. A computer matching program designed to identify taxpayers whose mortgage interest deduction exceeds amounts reported on Form 1098: Of the approximately 1.7 million mismatches the program identified above a certain threshold in 2005, the latest year available, IRS followed up on about 135,000 and changed tax assessments or refunds for about 53,000. Tax assessments totaled about $216 million, much more than in previous years, and averaged about $4,300. IRS’s National Research Program (NRP), the main ongoing study of taxpayer compliance based on examinations of a random sample of tax returns: IRS found that between 12 and 14 percent of individual taxpayers who claimed home mortgage interest on line 10 of Schedule A misreported the amount. The misreporting was split about evenly between taxpayers underreporting the deduction and taxpayers overreporting it. Routine examinations done by correspondence, in an IRS office, or on site: IRS does not maintain separate examination results solely on line 10 issues. Most of the approximately 33,000 examinations done in fiscal year 2008 by IRS revenue agents and tax compliance officers that included a review of line 10 resulted in increases to the line 10 deduction (i.e., the deduction increased) or in no recommended change. For cases involving line 10 where examiners recommended greater tax amounts, the median tax owed for all issues examined including line 10 was $4,612, according to our analysis of IRS records. For cases closed in fiscal year 2008 in which examiners decreased the line 10 amounts and lowered taxpayers’ deductions, the median decrease in the line 10 mortgage interest deduction was $6,430. Other results are listed in appendix III. 
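The document-matching concept behind the first of these programs is straightforward to illustrate. The sketch below is not IRS's actual algorithm or threshold; the taxpayer records and the follow-up threshold are hypothetical, and it simply compares Schedule A line 10 claims to the total interest reported on Forms 1098.

```python
# Illustrative sketch of a Form 1098 / Schedule A line 10 matching screen
# (hypothetical records and threshold, not IRS's actual process).

THRESHOLD = 1_000   # hypothetical follow-up threshold

form_1098_totals = {          # taxpayer ID -> sum of interest reported by lenders
    "TP-001": 14_200,
    "TP-002": 9_800,
}
schedule_a_line10 = {         # taxpayer ID -> mortgage interest claimed
    "TP-001": 14_200,
    "TP-002": 16_500,
}

for tp, claimed in schedule_a_line10.items():
    reported = form_1098_totals.get(tp, 0)
    overclaim = claimed - reported
    if overclaim > THRESHOLD:
        print(f"{tp}: claimed ${claimed:,} vs ${reported:,} reported on Form 1098 "
              f"(mismatch ${overclaim:,}) -> candidate for follow-up")
```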
Two special compliance initiative projects (CIP) primarily examining acquisition debt for a limited segment of individual taxpayers with high mortgage interest deductions or high adjusted gross incomes: About 9,000 examinations were closed in fiscal year 2008 for both projects. IRS recommended about $95 million in tax changes for cases closed in fiscal years 2006 through 2008. Examiners decreased line 10 deductions for most cases closed in fiscal year 2008. The median decreases were about $29,000 for the first project and $28,000 for the second. For cases in which examiners increased the tax owed, the median increases in tax were about $8,000 in both projects. IRS examiners attributed the changes to taxpayers and paid tax return preparers not knowing or ignoring the mortgage interest deduction rules and to problematic tax preparation software. Appendix III also presents results from CIP exams. Table 3 shows, for these four main programs, that gaps exist in what IRS can measure or enforce concerning taxpayers’ mortgage interest deductions. Of the four main programs, only NRP projects the amount of mortgage interest deduction misreporting to the population of individual taxpayers. However, because NRP automatically excluded any Schedule A deduction where the Form 1098 amounts matched the Schedule A amounts, it may have underestimated mortgage interest deduction misreporting relating to the home equity and acquisition debt limits. The mortgage interest deduction rules create compliance problems for taxpayers, reflecting the deduction’s complexity. The effects of the problems, however, are uneven. Although many taxpayers might encounter few problems, others could face many more. Problems cited by tax practitioners and in our review of articles on deducting home mortgage interest included the following: Taxpayers need to distinguish between acquisition and home equity debt but did not always do so. Taxpayers deducted interest on loans exceeding the limitations, including the acquisition, home equity, and two-home limits. Taxpayers who were subject to the AMT and thus not eligible to deduct home equity interest claimed it nonetheless. Tax practitioners also missed the limitations and did not comply with the AMT rules. Depending on the circumstances, some taxpayers and practitioners faced extensive recordkeeping and calculations related to such matters as refinancing, the AMT, business use of the home, other uses of loan proceeds, and the periodic use and repayment of home equity lines of credit. One practitioner told us that completing worksheets was the quick and easy part of work related to the mortgage interest deduction; the most difficult and most time-consuming part was getting taxpayers to locate and provide the proper information on what kind of debt was involved and how loan proceeds were used. Typically, taxpayers provided records piecemeal over time and after many phone calls. Taxpayers may have been unaware that they might need documentation on matters such as how proceeds of home equity loans are spent to properly determine the amount of the mortgage interest deduction on their tax returns. Mortgage interest deduction limits based on debt amounts are not directly comparable with the information on Form 1098, which lists interest paid. If taxpayers’ debts exceed the limits, taxpayers must calculate how much interest they can deduct. The complexity of the laws that govern the mortgage interest deduction are evident in the guidance IRS has published. 
Figure 1 is the flowchart in IRS’s 16-page instructions to taxpayers—Publication 936: Home Mortgage Interest Deduction—that help taxpayers determine if their mortgage interest is fully deductible. It leads taxpayers through as many as seven decision points and still sometimes requires them to consult another part of the publication. Appendix IV provides two examples of the mortgage interest deduction’s complexity. To alleviate the problems and complexity facing taxpayers complying with the mortgage interest deduction rules, policy makers likely would have to change tax laws. For example, we noted in the past that shifting mortgage interest deduction limits from debt amounts to an interest-limit cap could simplify administration of the deduction. Under an interest-limit cap, IRS could more easily compare the amounts reported on Schedule A and Form 1098 to the cap and identify cases for audit or follow-up. Similarly, taxpayers would have only to report the amount of Form 1098 interest that fell under the cap and would not have to figure out which property qualified or how to account for debt limits, depending on how any new legislation was written. However, changing the tax code for qualified residential mortgage interest deductions could have significant tax policy implications. For example, an interest-paid cap could cause taxpayers living in areas with high housing prices to be disadvantaged compared with taxpayers with similar incomes living in areas with low housing prices. A cap could also disadvantage those who borrowed during periods of high interest rates. Assessing such policy changes and whether such consequences would be appropriate are beyond the scope of this report. Because the Form 1098 information report shows the dollar amount of interest a taxpayer paid in a year without regard to the limits on the amount of debt imposed by law, IRS’s computer matching program comparing Form 1098 and tax return amounts will not detect certain noncompliance. For example, as already discussed, taxpayers cannot claim a deduction for interest exceeding the $1 million acquisition debt limit or other limitations. However, the annual information report for a taxpayer with a $1.5 million acquisition mortgage would show the interest paid on the entire mortgage instead of the interest on the $1 million under the annual debt limit. Because of this, the information IRS receives from third parties on Form 1098 cannot be used by itself to determine if taxpayers improperly claimed interest on debt in excess of the legal limitations. However, the way to overcome this problem—using examinations to detect noncompliance—is expensive compared to automated matching programs, and its payoff in increased revenue is low. For example, according to IRS examiners, pinpointing which tax returns have a home- equity debt noncompliance issue is very labor intensive. As shown earlier, for non-CIP line 10 examinations by revenue agents and tax compliance officers closed in fiscal year 2008, the median decrease to line 10 was $6,430, according to our analysis of IRS data. At the highest individual tax rate of 35 percent and assuming no offsetting factors, the resulting increase in tax revenue would be about $2,250. Although not an exact comparison, the average tax assessment IRS recommended for field examinations of individual taxpayers was about $19,150 per taxpayer. 
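The contrast between the current debt-based limits and the interest-limit cap discussed above can be sketched with hypothetical numbers. The cap amount, interest figures, and mortgage balance below are illustrative only and are not proposals in this report; the proration under the debt limit is broadly similar to the Publication 936 worksheet approach of scaling interest by the ratio of qualified debt to the average mortgage balance.

```python
# Hypothetical contrast between debt-based limits and an interest-paid cap
# (all amounts illustrative).

interest_reported_1098 = 65_000   # total mortgage interest reported to IRS
average_mortgage_balance = 1_400_000
interest_cap = 50_000             # hypothetical statutory cap on deductible interest

# Under an interest cap, the allowable deduction can be computed (and checked
# against line 10) directly from Form 1098 amounts:
deduction_under_cap = min(interest_reported_1098, interest_cap)

# Under the current debt limits, the Form 1098 amount alone is not enough: the
# deductible share depends on qualified debt relative to total debt, which
# requires information (debt balances, loan purpose) IRS does not receive.
qualified_debt_limit = 1_000_000  # acquisition debt limit, ignoring home equity debt
deductible_share = min(1.0, qualified_debt_limit / average_mortgage_balance)
deduction_under_debt_limit = interest_reported_1098 * deductible_share

print(f"Deduction under an interest cap:   ${deduction_under_cap:,.0f}")
print(f"Deduction under debt-based limits: ${deduction_under_debt_limit:,.0f}")
```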
Given the non-CIP examinations’ relatively low payoff, some practitioners we interviewed said the tax code as it relates to the home mortgage interest deduction is unenforceable in a practical sense. Additional information about taxpayers’ mortgages could help IRS identify the most productive cases to examine and determine whether taxpayers are claiming the correct amount of mortgage interest deduction. IRS could obtain more helpful information about taxpayers’ mortgages by expanding information collected on Form 1098. IRS officials said that in implementing certain additional reporting requirements, the agency would need to meet the terms of the Paperwork Reduction Act, which requires agencies to minimize the paperwork burden they impose on the public and maximize the practical utility of the information they collect. IRS officials also said attention would need to be paid to the added costs that expansion would impose on third parties and IRS, such as updating computer systems. Any computer programming changes for IRS and third parties would likely be done only once to accommodate the new information. Form 1098 could be revised to collect the following information: address of the property secured by the mortgage to which the interest on the form relates; outstanding mortgage debt balances on the property; an indicator if the mortgage interest is for a loan that was refinanced during the year; and an indicator of whether the mortgage interest relates to an acquisition loan or a home equity loan. Figure 2 shows what a Form 1098 might look like if revised to collect any of this information. We found that some companies that submit Form 1098 statements to IRS already provide some of this information with the Form 1098 statements they send to borrowers. These options could be considered singly or in combination. Further, changes to Form 1098 reporting requirements would need to apply only to future reports to give filers sufficient time to adjust their computer systems to collect and calculate the additional information. If the address relating to taxpayers’ mortgage interest deductions were on the Form 1098 (see fig. 2, box 6) and electronically captured in IRS databases, IRS could use an automated process to determine whether the mortgage interest taxpayers claimed corresponded to a qualified residence and was eligible for the deduction. For example, IRS could see if an address reported on Form 1098 matched the address that the taxpayers listed on their Form 1040. Tax preparers told us that requiring the property address also would help them prepare returns more accurately because they could use the information to determine if the property securing the debt is the taxpayer’s qualified home. Some IRS officials responsible for examination policy whom we interviewed agreed the property address would be useful in selecting returns for examination and in cases where taxpayers have more than one home. We previously recommended that IRS revise Form 1098 to collect the address of the property whose mortgage is reported on Form 1098. In response, IRS agreed to consider implementing our recommendation, citing the burden the requirement could place on third parties. Representatives of the mortgage banking industry told us that it would be feasible to report property address information on Form 1098 because mortgage lenders already maintain this information. We also found an example of a lender that provided address information with Form 1098 information that it sent to borrowers. 
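As a rough illustration of the automated check that a property address on Form 1098 would make possible, the sketch below flags returns whose reported property addresses do not include the taxpayer's Form 1040 address. The record layout, field names, and normalization rules are illustrative assumptions rather than IRS systems or data, and a flagged return would only merit closer review, since interest on a qualified second home can also be deductible.

```python
# Illustrative screen: flag returns whose Form 1098 property addresses do not include the
# taxpayer's Form 1040 address (hypothetical records and rules; not IRS data or systems).
import re

def normalize(addr: str) -> str:
    """Crude normalization so trivial formatting differences do not count as mismatches."""
    addr = addr.upper()
    addr = re.sub(r"[^A-Z0-9 ]", " ", addr)      # drop punctuation
    addr = re.sub(r"\bSTREET\b", "ST", addr)
    addr = re.sub(r"\bAVENUE\b", "AVE", addr)
    return re.sub(r"\s+", " ", addr).strip()

def flag_for_review(form_1040_address: str, form_1098_addresses: list[str]) -> bool:
    """True if none of the Form 1098 property addresses matches the Form 1040 address."""
    home = normalize(form_1040_address)
    return all(normalize(a) != home for a in form_1098_addresses)

# Hypothetical examples: one return with a Form 1098 for the taxpayer's home, one without.
print(flag_for_review("123 Main Street, Springfield, VA",
                      ["123 Main St Springfield VA", "9 Lakeview Avenue, Ocean City, MD"]))  # False
print(flag_for_review("123 Main Street, Springfield, VA",
                      ["9 Lakeview Avenue, Ocean City, MD"]))                                # True
```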
If the beginning and ending mortgage debt balances or average annual debt balances were provided on Form 1098, IRS could electronically identify taxpayers with more than $1 million in mortgage debt (see fig. 2, box 7) or identify taxpayers whose mortgage interest deductions appeared out of proportion to their debt amounts. IRS officials said that debt balance information would be particularly helpful in selecting returns for examination if combined with the address information because the additional information would help disentangle mortgage interest deductions of taxpayers with multiple homes. According to a mortgage industry representative, Form 1098 filers could provide debt balance information. We found examples of companies that provided information on mortgage balances on the Form 1098 statements they sent to borrowers. However, industry representatives further stated that companies do not necessarily have accurate balance information at a particular point in time and that average balances would be easier for the industry to report than balances pegged to any specific day, especially for home equity lines of credit. Companies that do not keep average balance information now would incur set-up costs to begin collecting it. Form 1098 could be redesigned with a check box for filers to show whether a mortgage was refinanced (see fig. 2, box 8). This would help IRS identify taxpayers who might be noncompliant with rules specific to refinancing, such as the rule to amortize points paid on the mortgage. A mortgage industry representative said that the burden of a refinancing check box could be reduced if the rule required the box to be completed only in the year the loan was made, eliminating the need for a Form 1098 filer to track over time whether a refinancing had occurred. Also, Form 1098 filers do not uniformly keep any indication in their records as to whether loans are acquisition loans or refinancings. However, because the deductibility of the interest on the cash taken out in a refinancing depends in part on how the money is spent, identifying refinancing would not necessarily eliminate the need for IRS to examine returns. It also would be helpful to IRS if Form 1098 filers could show whether cash was taken out in a refinancing, although such reporting might not be able to include the amount of cash taken out. The mortgage industry representative said that any information on the cash taken out would be unreliable and burdensome for Form 1098 filers to get. Form 1098 filers have estimates on cash taken out, not the final figures available to the closing agent. The industry representative also said that Form 1098 filers do not maintain information on whether loan proceeds exceeded the amount of the loan that was refinanced. However, if Form 1098 included the property address, mortgage debt balances, and a refinancing check box, IRS examiners could more easily see whether cash was taken out at a refinancing, because they could see whether a particular home's loan balances increased over time, and could then investigate whether the refinancing rules were followed. Tax forms do not require taxpayers or Form 1098 filers to report the types of debt on which mortgage interest deductions are based, even though the mortgage debt limitations for deducting interest are different for acquisition and home equity debt. If Form 1098 filers were required to identify the type of debt associated with the deduction (see fig.
2, box 9), IRS might more easily discern whether a taxpayer’s deduction exceeded the acquisition or home equity debt limit, especially if combined with reports of the debt amounts. Having Form 1098 filers identify debt types, however, would create challenges. A mortgage industry representative told us that Form 1098 filers do not always have information about debt types, especially if the loans have been sold and re-sold or the original loan has been refinanced. Form 1098 filers do not know whether mortgage refinancing proceeds were used for home improvements, which could qualify as acquisition debt or home equity debt, depending on how the money was used. The representative also said that if a home-equity debt reporting requirement were instituted, the industry would have to introduce costly and burdensome systems. IRS already contracts with a private firm to obtain information about taxpayers for routine examination purposes. Other private sector data might also be useful to IRS in detecting mortgage interest noncompliance. For example, we obtained information from SMR Research, one of several companies that analyze loan information that IRS might find useful in determining taxpayer compliance with rules governing the deduction of interest on home equity loans. By comparing homeowners’ current mortgage debt for a particular property with earlier debt, SMR Research estimated that up to several million homeowners had loans that might have exceeded the home equity debt limitation. In the aggregate, these homeowners’ debts over the debt limitation were several hundred billion dollars. SMR Research’s data do not show whether taxpayers correctly reported deductions based on home equity loans. IRS would still have to check the returns. Given information underlying the transactions in SMR Research’s database and shown in appendix VII, IRS could test the use of this or similar private-sector databases to: pinpoint taxpayers for examination, initiate correspondence to taxpayers, or conduct outreach to paid preparers. Some IRS CIP examiners told us they had significant findings related to home equity debt, indicating that follow-up might be productive. Taken as a whole, IRS taxpayer guidance—Schedule A and its instructions, Publication 17, Your Federal Income Tax, and Publication 936, Home Mortgage Interest Deduction—generally informed taxpayers that mortgage interest deductions are subject to limits. Even though the guidance was generally sufficient, Schedule A does not explicitly mention the limitations. As shown in appendix II, Schedule A, line 10 asks taxpayers for the “Home mortgage interest and points reported to you on Form 1098.” Sometimes, as when an acquisition loan exceeds $1 million, this wording could be problematic because a taxpayer or preparer would need to look beyond the Form 1098 to determine the proper line 10 amount. Using the same amount of space on Schedule A as now, line 10 could easily be revised to mention limitations by deleting “reported to you” and using wording such as “Unless limited, home mortgage interest and points on Form 1098.” Another way to revise Schedule A would be to add something about mortgage interest deduction limitations to the margin near line 10. Other possibilities for changing Schedule A to highlight limitations also exist, but would entail tradeoffs. For example, a check box could be added to the Schedule asking taxpayers if either the $1 million or the $100,000 limitation applies to their loans. 
Making such a change not only would help taxpayers preparing their own returns but also might prompt paid preparers to more thoroughly interview their clients. However, with the mortgage interest deduction, these types of changes would add some burden to taxpayers whose deductions are not affected by the debt limits. The effectiveness of such change is also unclear. Taxpayers might not read IRS guidance when preparing returns. To counter this possibility but also be mindful of preparation, mailing, and taxpayer service costs involved, IRS could test whether corresponding with some taxpayers would cost-effectively reduce misreporting. Added outreach—through seminars or communications with stakeholders such as paid preparers, tax return software providers, and industry groups— could communicate key rules and common mistakes and be targeted to taxpayers, such as those reporting the deduction above a certain level. Sending correspondence directly to taxpayers also could help inform paid preparers of rules and common mistakes because the taxpayers would likely share it with their preparers, according to representatives of the tax return preparation industry. IRS has used outreach programs on mortgage interest deductions in recent years. For instance, IRS’s tax tips covered deducting refinancing costs. IRS’s guidance on the acquisition debt limit is inconsistent with a prior tax court ruling. Currently, Publication 936 guides the taxpayer to treat any qualified mortgage debt above the $1 million acquisition debt limit as home equity debt subject to the $100,000 limit, regardless of the loan’s use. In practice, this means that the taxpayer can deduct the interest on up to $1.1 million in acquisition debt (treating the amount above $1 million as home equity debt even if used to acquire the home). However, a 1997 U.S. Tax Court case ruled that the acquisition debt limit is $1 million and no additional deduction can be taken above the $1 million limit unless the taxpayer truly has home equity debt. One tax software company official cited the 1997 case as support for the company’s decision to change the guidance in its software to allow interest deductions only up to $1 million in acquisition debt, rather than follow the guidance in Publication 936. In 2008, IRS Chief Counsel began reviewing the inconsistency in the debt limit allowances. However, a completion date for this work is uncertain. Until IRS completes its determination, taxpayers will calculate their deductions using different debt limits depending on whether they, their tax preparers, or their tax software follow the guidance in Publication 936 or the tax court interpretation. Depending on IRS’s determination, some taxpayers may not be optimizing their deduction or IRS could be losing revenue from taxpayers overdeducting. IRS’s examiners’ guidance and training materials included information for identifying and calculating home-equity and the acquisition-debt limitations. Overall, examiners we interviewed were satisfied with training and guidance on the mortgage interest deduction. However, IRS’s guidance and training materials did not cover problems that may arise in some real-life situations. For example, the guidance does not mention a problem that an examiner said she commonly found: taxpayers owning a third home, waiting to sell one of their first two, and deducting the interest paid on all three. 
Neither does the guidance address situations that tax professionals described as challenging for IRS to enforce, such as multiple refinancings, mortgages, or home equity lines of credit. The absence of such examples could impede examiners who do not deal with mortgage interest deduction issues regularly. An IRS training official told us that updating training materials with relevant examples could be done easily. Also, IRS’s examiner training materials reflect the taxpayer guidance with regard to the allowable acquisition debt limit. For example, according to one IRS training exercise, taxpayers could deduct interest on up to $1.1 million in mortgage debt even if they used the $1.1 million just to acquire a home, contradicting the 1997 tax court ruling, discussed above. The difference may lead examiners to inconsistently calculate the tax owed, because some may calculate the acquisition debt limit as $1 million based on the tax court determination and others may allow deductions on up to $1.1 million as instructed by the training. As previously mentioned, IRS Chief Counsel’s office is reviewing the inconsistency between the IRS guidance and the tax court ruling. IRS officials said that they have procedures to ensure that once the review is complete, examiners will be told of any clarification. The three companies’ tax preparation software for individuals that we analyzed differed from each other in how they treated the limitations on the amount of debt for which interest can be deducted. These software packages were among the most widely used by individuals. One company’s initial software screen containing mortgage interest deduction instructions mentioned possible deduction limitations, sending users to Publication 936 for calculations. One of the other companies made changes to its mortgage interest deduction displays in its 2008 version after considering our input on its 2007 version. These changes give more prominence to the deduction limitations and how Publication 936 may be used for calculations. The third company’s on-line software had information on the limitations, but users would not find this information unless they looked under “frequently asked questions.” This software also led the user to take amounts directly off Form 1098. For 2008, the software added a question to its frequently asked questions area about what taxpayers should do if interest and points exceeded the amount they could deduct. If taxpayers were to click on this question, they would find information about the limitations and the calculation needed. IRS does not know how frequently software packages are associated with misreporting. Such information could help IRS in its compliance efforts. We recently recommended that IRS require software companies to include a software identification number that specifically identifies the software package used to prepare tax returns. IRS plans to implement a software identification system in October 2009. Because IRS has little information on taxpayers’ mortgage debts, it cannot easily detect when taxpayers are not complying with the deduction limits. Instead, to ensure taxpayer compliance with the statutory requirements, IRS must conduct examinations that are more costly to use than other enforcement methods, such as a matching program. The absence of specific information about the reasons for noncompliance also prevents IRS from making efficient decisions on how to select cases for examination. 
Simultaneously, the complexity of mortgage interest deduction rules creates problems with some taxpayers trying to comply. These problems are compounded by an inconsistency between IRS guidance and a tax court ruling on acquisition debt limits. Because the limits causing complexity are written in law, statutory changes may be the most direct way to address the challenges of administering the deduction and alleviate the compliance problems. However, consideration of statutory changes was beyond the scope of this report. Nonetheless, within the current statutory framework, opportunities exist for IRS to mitigate some of the problems we identified. We are making seven recommendations. Specifically, we recommend that the Commissioner of Internal Revenue revise NRP’s case selection system so that a tax return’s mortgage interest deduction is not automatically excluded as an examination issue if it matches information reported on Form 1098; revise Form 1098 to require third parties to provide information on mortgage balances, the address of a home securing a mortgage, and an indicator of whether the mortgage is for a current year refinancing; investigate whether using information from private sources would be productive in detecting mortgage interest noncompliance, especially for home equity debt; revise the wording on Schedule A to clearly state that the mortgage interest deduction is subject to limitations; conduct a test to evaluate whether mortgage interest deduction-related outreach programs to taxpayers and tax return preparers could be a cost-effective way to reduce noncompliance; outreach might include sending correspondence covering key rules and common mistakes or promoting seminars on common types of misreporting; set a date to complete the Chief Counsel determination on whether the acquisition debt limit is $1 million or $1.1 million when used in combination with the home equity debt limit; and revise examiner training materials by adding examples cited as common problems by auditors and paid tax return preparers, such as those involving multiple homes or home-based businesses, and after the Chief Counsel’s final determination on the acquisition limit, revise examiner training and the worksheet in guidance to reflect the project’s outcome. We received written comments from the Internal Revenue Service on July 23, 2009 (for the full text of the comments, see app. VIII). IRS agreed with five of our recommendations and agreed to study the other two. It acknowledged that without information about taxpayers’ mortgage debts, it cannot easily detect taxpayer noncompliance with the mortgage interest deduction limits. IRS also said that the absence of information about noncompliance prevents it from efficiently deciding how to select cases for review and that the complexity of the rules causes problems for some taxpayers. Regarding our recommendation to revise Form 1098 to include more information on taxpayer mortgages, IRS agreed to study the issue, saying it does not have enough data to support revisions at this time. Because IRS acknowledged in its comments that it does not have information about taxpayers’ mortgage debts to easily detect noncompliance, we believe that our recommended revisions to Form 1098 would be cost-effective ways to provide IRS with additional useful information to help it detect noncompliance. 
Concerning our recommendation to conduct a test to evaluate whether mortgage interest deduction-related outreach programs could be a cost-effective way to reduce noncompliance, IRS said it will study the feasibility of such a test. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. For further information regarding this report, please contact me at (202) 512-9110 or at [email protected]. Contacts for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report may be found in appendix IX. To provide information on how IRS detects taxpayers’ noncompliance with the home mortgage interest deduction rules and what IRS knows about the extent of noncompliance and to identify problems, if any, that taxpayers face in attempting to comply with the deduction and describe IRS’s challenges in detecting mortgage interest deduction noncompliance, we reviewed IRS documents and interviewed agency officials to determine how IRS checks taxpayers’ compliance with deduction rules and if there were issues affecting its ability to detect noncompliance. This included reviewing information on IRS’s compliance initiative projects that focused on the mortgage interest deduction and interviewing IRS examiners from four widely dispersed parts of the country who were involved in the projects. The examiners were selected by IRS officials for their mortgage interest deduction expertise; obtained information from IRS’s 2001 National Research Program (NRP) compliance study of individual taxpayers and data for 2006 through 2008 from its Examination Operational Automation Database (EOAD) and Audit Information Management System (AIMS) to determine examination results about individual taxpayers’ compliance with mortgage interest deduction rules; and conducted a literature search of relevant tax publications and interviewed non-government experts to enhance our understanding of mortgage interest deduction compliance issues affecting taxpayers and IRS. To assess options to give IRS more information to enforce compliance with mortgage interest deduction rules, we surveyed literature and interviewed industry representatives about third-party information reporting options that would give IRS useful information to improve its ability to enforce mortgage interest deduction rules while minimizing burden on third parties. We discussed the options, including their pros and cons, with industry representatives from organizations such as the Mortgage Bankers Association and the American Institute of Certified Public Accountants and with others such as scholars who have expertise in this area. We also met with IRS officials to obtain their views on the feasibility of the options to improve compliance. Consideration of statutory changes was beyond the scope of the request. To determine if IRS could benefit from using private sector information to detect noncompliance with home equity rules, we obtained statistical information about home equity loans, home equity lines of credit, and cash-out refinancings that have exceeded $100,000 from SMR Research, one of several companies that compile and analyze mortgage data. 
According to its Web site, SMR Research is the nation's largest publisher of industry and market research studies on the home mortgage business, home equity lending, and other consumer loan subjects. We did not study the information of any of the other companies with extensive mortgage data because our purpose was only to gather data on the type of information that might be available from private sector sources that potentially could benefit IRS in detecting home equity noncompliance. To assess the reliability of the SMR Research data, we discussed the controls over the database with a responsible company official and reviewed documentation of the controls used to gather the data. We also used information from the American Housing Survey for the United States: 2007, prepared by the U.S. Department of Housing and Urban Development and the U.S. Census Bureau, to confirm the reasonableness of SMR Research information. We found the SMR database reliable for our purposes. To determine whether IRS's guidance to taxpayers provided enough information to properly calculate the mortgage interest deduction, we reviewed the guidance to determine whether it mentioned the limitations prominently, i.e., whether the limitations were mentioned directly as opposed to the user being directed to another place. To determine whether IRS's examiners' training and guidance provided enough information on properly calculating the mortgage interest deduction, we compared examiner training and guidance documents with IRS publications and instructions on mortgage interest deductions, as well as the applicable tax laws. To describe how tax return preparation software programs and IRS's Free File program handle the mortgage interest deduction, we checked how the three software packages used by 88 percent of individuals filing electronically treated the debt limitations. We did this by logging on to the software and reviewing the screens pertaining to the home mortgage interest deduction. We also discussed our findings with company officials. We reviewed tax year 2007 versions of each of the software packages and included the versions used for IRS's Free File program. Our results are limited to the versions of the software packages we reviewed and cannot be generalized to others. Free File offers free federal tax return preparation and electronic filing services through these companies and others. In addition, we reviewed documentation showing how software widely used by paid tax return preparers for millions of taxpayers throughout the country handles the mortgage interest deduction. Finally, to respond to your request, we analyzed data from IRS's Statistics of Income (SOI) Division and the Compliance Data Warehouse (CDW) to compile information about year-to-year changes in taxpayers' mortgage interest deductions and Form 1098 filings for tax years 2004 through 2006. Our analyses focused on the dollar amounts listed on Schedule A, line 10; whether mortgage interest deductions on Schedule A, line 10 matched Form 1098 amounts; frequency of paid preparer use by those claiming a Schedule A, line 10 deduction; loan account numbers; and amounts of mortgage interest deduction taken on forms other than Schedule A to which we had access, including Schedule C for sole proprietorship businesses, Schedule E for supplemental income and losses, Schedule F for farms, and Form 4835 for farm rental income and expenses.
Form 8829, Expenses for Business Use of Your Home, also can be used to report a mortgage interest deduction, but mortgage interest reported on this form is not included in the SOI database and consequently was not part of our analysis. The following table compares examination results of IRS's compliance initiative projects (CIP) on the mortgage interest deduction with results from its routine examinations (non-CIP). As shown in the examples in this appendix, to determine the amount of mortgage interest that is deductible on Schedule A, a taxpayer or his or her tax return preparer might need to follow many steps. First, the taxpayer must find all his or her records pertaining to the property's acquisition and determine if the October 13, 1987, cutoff date applies, then locate records of refinancing arrangements affecting acquisition debt, and calculate the appropriate average annual balance. Second, the taxpayer must determine home equity indebtedness, properly tracking, for instance, the use of proceeds from home equity lines of credit, and determining average balances and home equity amounts not deductible for AMT purposes. Third, the taxpayer must know when loan proceeds finance business use of the home, a partly rented home, or personal, business, investment, or municipal bond expenditures because how the loan proceeds are used determines the deductibility of the loan interest on the tax return. The American Institute of Certified Public Accountants gave us details of the following scenario in which married homeowners refinanced their house: In 1990, a married couple borrowed $150,000 to buy a house. By January 1, 2008, they had made principal payments of $50,000, leaving an acquisition debt of $100,000. On January 1, 2008, their house's fair market value was $500,000, and they refinanced their original loan for $400,000. The new $400,000 mortgage would be allocated as follows: The first $100,000 would be considered home acquisition debt, and the interest on it would be deductible for the regular tax and for the AMT. The next $100,000 would be considered home equity debt, equal to the home equity limitation and, therefore, deductible for the regular tax but not for AMT unless it was used to acquire, construct, or improve a main or second home. The last $200,000 would be subject to "interest tracing" rules, with deductibility depending on how the money was used—for instance, if used for personal or municipal bond purposes, the interest would not be deductible. However, if used for business or investment purposes, the interest may be deductible. The example would have been more complicated if not for the following simplifying assumptions that we made: The couple owned only one house and never borrowed money to improve it. They bought it after October 13, 1987, the grandfather date when the rules changed regarding the amount of acquisition debt for which interest is deductible. This was the first time they refinanced the house. The refinancing was done on the first day of the year and $1,000 in principal was repaid each month, making monthly mortgage balances easily calculated throughout the year. They had no home equity loans other than the home equity part of the refinancing and had not used a home equity line of credit, whose proceeds they would otherwise have had to track. They never used the house for business or rented any part of it. We used the facts and assumptions of this example to complete the 13 steps in table 5, which is patterned after a worksheet in Publication 936.
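Before walking through the worksheet steps, the three-way split in the example above can be restated as a short sketch. The dollar amounts come from the example, the $100,000 ceiling is the statutory home equity limit, and the logic is a simplification of, not a substitute for, the Publication 936 worksheet.

```python
# Allocation of the $400,000 refinancing in the example above (simplified; assumes the
# fair-market-value cap on home equity debt, $500,000 - $100,000 here, is not binding).
old_acquisition_balance = 100_000   # acquisition debt remaining at the refinancing
new_loan = 400_000                  # amount of the refinanced mortgage
home_equity_limit = 100_000         # statutory ceiling on home equity debt

acquisition_part = min(new_loan, old_acquisition_balance)                # deductible for regular tax and AMT
home_equity_part = min(new_loan - acquisition_part, home_equity_limit)   # regular tax only, unless spent on the home
traced_part = new_loan - acquisition_part - home_equity_part             # deductibility depends on use of proceeds

print(acquisition_part, home_equity_part, traced_part)   # 100000 100000 200000
```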
Table 5 tracks the path taxpayers would follow in figuring their deductible home mortgage interest. A professional tax preparer also gave us the following example to illustrate the complexity of the mortgage interest deduction. This example may be atypical but reflects a more complicated case than the previous example. The client had more than one home, a mortgage greater than $1 million, a refinancing, and home equity debt. In this case, the practitioner said that it took over 4 hours of tracing transactions to calculate the acquisition and home equity debt for one house, using an Excel spreadsheet to enter amounts from closing statements for each of multiple refinancings. Home equity debt had to be tracked separately because interest on allowable home equity debt is generally not deductible for AMT purposes unless the proceeds are used to buy, build, or improve a home. Acquisition debt after refinancing had to include only the acquisition debt part of the previous loan and closing costs. A deductible interest spreadsheet had to consider beginning and ending loan balances, how loan proceeds were originally used, and total interest paid. This appendix presents analyses of Statistics of Income (SOI) Division and Compliance Data Warehouse (CDW) data focusing on Schedule A, line 10 deductions and Form 1098 filings. Tables 6 through 12 use a SOI-derived panel of taxpayers from 2004 through 2006, combined with CDW data on Form 1098 filings, and table 13 uses SOI data exclusively. Table 6 shows that for returns reporting a Schedule A, line 10 deduction for all three years of the study period, about half had mortgage interest from one Form 1098. For example, in 2006, 42.6 percent of taxpayers reporting a mortgage interest deduction had one Form 1098, totaling about $126.5 billion in Schedule A, line 10 deductions. Table 6 also shows that some taxpayers reported a mortgage interest deduction on Schedule A, line 10, with no corresponding Form 1098 filings. This might mean that the taxpayer improperly filled out line 10 on Schedule A or that IRS had not received a corresponding Form 1098 for that particular taxpayer. The number of Form 1098 filings does not reflect the number of homes a taxpayer has because, for example, taxpayers would receive multiple Form 1098 filings in a given year if they sold a home or if their existing mortgage was sold in the secondary market. The margins of error for the tables in this appendix appear in app. VI. Table 7 breaks down the pattern of Schedule A, line 10, deduction amounts compared with the number of Form 1098 filings. For example, in 2006, 75 percent of taxpayers with two Form 1098 filings had deduction amounts of $14,325 or less. Table 8 shows the results of comparing Schedule A, line 10, deduction amounts on tax returns to amounts reported to IRS by third parties on Form 1098. As the table shows, about 65 percent to 67 percent of the comparisons resulted in a match in all 3 years. However, in some situations, our analysis showed that taxpayers' deductions also were greater or less than the amounts on Forms 1098. Over the three-year period, 8.2 percent to 11.1 percent of tax returns' deductions exceeded the Form 1098 amount. This can occur legitimately when, for example, a property is co-owned and one of the owners claiming the deduction does not receive a Form 1098. It can also occur because of taxpayer or Form 1098 filing errors. Deductions can be less than the Form 1098 amounts when taxpayers' debts have exceeded the deduction limits or because of error.
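The comparisons summarized in table 8 can be pictured with a small sketch that groups a return by how its Schedule A, line 10 amount compares with the total of its Forms 1098. This is only an illustration of the grouping described above; it does not reproduce the matching rules or tolerances used in the actual analysis, and the example returns are hypothetical.

```python
# Simplified grouping of returns by comparing the Schedule A, line 10 deduction with the
# total of the taxpayer's Form 1098 amounts (an illustration, not the actual matching rules).
def classify(line10_deduction: float, form_1098_amounts: list[float], tolerance: float = 1.0) -> str:
    total_1098 = sum(form_1098_amounts)
    if abs(line10_deduction - total_1098) <= tolerance:
        return "match"
    return "deduction exceeds Form 1098" if line10_deduction > total_1098 else "deduction below Form 1098"

# Hypothetical returns: an exact match, a larger deduction (e.g., a co-owner with no Form 1098),
# and a smaller deduction (e.g., debt over the limits, so only part of the interest is claimed).
print(classify(12_500.00, [12_500.00]))
print(classify(13_200.00, [12_500.00]))
print(classify(30_000.00, [48_000.00]))
```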
Table 9 shows both the distribution of Schedule A, line 10 deduction amounts and the distribution of the differences between Schedule A, line 10 and Form 1098 statements filed for taxpayers who reported a Schedule A, line 10 deduction. For example, in 2006, the median (50th percentile) difference between Form 1098 and Schedule A amounts for taxpayers whose deduction exceeded the Form 1098 amount was $727. Our analysis also examined the change in mortgage interest deduction amounts from 2004 through 2006, for those taxpayers who reported a deduction in all 3 years. Table 10 shows the amount of change from 2004 to 2005 and then from 2005 to 2006. For example, the median (50th percentile) changes for tax returns that had increases from both 2004 to 2005 and 2005 to 2006 were $1,496 and $1,691, respectively. Table 11 shows the comparison of the year-to-year change in the number of Forms 1098 taxpayers received with the dollar amount of change on Schedule A, line 10, for the corresponding years. Only those taxpayers with tax returns filed in all 3 years of our study are included in this analysis. For example, the table shows that 581,950 tax returns had a declining number of Form 1098 filings from both 2004 to 2005 and 2005 to 2006 ("Down all years"). For those taxpayers, the dollar amount of their deductions declined by $2,471 from 2004 to 2005 for those at the 50th percentile and decreased by $1,834 from 2005 to 2006. An increase in deduction dollar amounts can occur even when Form 1098 filings decrease, for example, in the year after a taxpayer sells one of his two homes and purchases a new home that has a larger mortgage interest deduction amount than the previous two homes combined. Table 12 shows an analysis of Form 1098 by mortgage loan account numbers. We performed this analysis to show more precisely how frequently taxpayers deducted mortgage interest on the same loan from year to year, assuming the same loan number in different years referred to the same loan. For example, between 2004 and 2005, about 25 million taxpayers had one matching account number on their Forms 1098. The third chart shows taxpayers who had matching accounts throughout the 3-year period. For example, 574,173 taxpayers had 3 matching account numbers on their Forms 1098 in 2004, 2005, and 2006. Taxpayers with zero matching account numbers likely deducted interest on different loans in different years. Table 13 shows the distribution of taxpayers' mortgage interest deductions on Schedule A and on other schedules. Non-Schedule A interest deductions may relate to forms such as Schedule C for small business, Schedule E for rental property, and Schedule F for farming. The following tables provide the margins of error for the corresponding statistics reported in the previous appendix. For example, table 14 corresponds with table 6 in app. V. Table 22 describes the three types of home equity loans—lump sum home equity loans, home equity lines of credit, and refinancings—that SMR Research included in its calculations of home equity loan debt that exceeded $100,000, the home equity debt limitation. Although SMR Research data do not include Social Security numbers useful in data matching, its data include (1) homeowner names and addresses; (2) loan dollar amounts useful for determining if loan limitations for deductible interest were exceeded; and (3) other information useful in audit selection or other compliance activities.
An example of this other SMR Research information that might be useful for compliance purposes is information showing that more than a third of the cash-out refinancing dollar amounts and the home equity loan dollar amounts not known to be associated with lines of credit were in just 14 counties. Pinpointing homeowners with the largest home equity loans, or those living in the few counties that account for large proportions of the nation's home equity loan dollar value, might ease IRS's burden in matching large amounts of information to its own databases. However, we recognize that matching databases using names and addresses is much more difficult than using Social Security numbers and that its viability would have to be tested. For instance, IRS could run a name and address match for a particular geographic location and follow up, without extensive effort, with only those taxpayers whose information matched. If the test were to show a low return on investment or did not deserve a higher priority than competing efforts, it could be abandoned. IRS follow-up could be examinations, possibly targeted at geographic areas or taxpayers with the largest potential overdeductions; correspondence to taxpayers as described in an earlier section; or outreach to the tax return preparation community. Outreach to preparers could inform them that the home equity area is being scrutinized by IRS and, if taxpayers' returns had not properly considered the $100,000 or fair market value limitations, amended returns might be warranted. Non-examination efforts might be preferable to resource-intensive examinations because the amounts of possible extra tax per noncompliant taxpayer might be relatively small. For example, if a taxpayer with an 8-percent interest rate on a $200,000 home equity loan deducted interest on the entire loan, the overdeduction would be about $8,000 (the interest on the $100,000 of debt above the limit), or about $2,000 in taxes for a taxpayer in the 25-percent bracket. In addition to the contact named above, Charlie Daniel, Assistant Director; Amy Bowser; Sara Daleski; Eric Gorman; Lawrence Korb; Karen O'Conor; Anne Stevens; and John Zombro made key contributions to this report. | The home mortgage interest deduction is the third most expensive federal income tax expenditure, with the government expected to forgo about $80 billion of revenue for the deduction in 2009. Subject to various limitations, taxpayers may deduct interest on home-secured loans, such as mortgages, mortgage refinancings, and home equity loans, including those taken as lump sum amounts and home equity lines of credit. The rules that taxpayers must follow in determining the proper amount of mortgage interest to deduct can be complex. For example, there are limitations on the amount of debt for which interest can be deducted, special rules for refinancing, situations where alternative minimum tax (AMT) considerations apply, and rules on the deductibility of prepaid interest amounts called points. In general, complex tax rules increase the potential for noncompliance. Congress asked us to study the home mortgage interest deduction to determine if there are administrative issues that need to be addressed to improve taxpayer compliance and Internal Revenue Service (IRS) enforcement.
For this report, we (1) provide information on how IRS detects taxpayers' noncompliance with the home mortgage interest deduction rules and what it knows about the extent of noncompliance; (2) identify the problems, if any, taxpayers face in attempting to comply with the deduction and describe IRS's challenges in detecting mortgage interest deduction noncompliance; (3) assess options to give IRS more information to enforce compliance with the rules; (4) determine whether IRS's guidance to taxpayers and its examiners' guidance and training on the deduction provide enough information to properly calculate the taxpayers' allowable mortgage interest deduction; and (5) describe how tax-return preparation software programs handle the deduction. Congress also asked us to provide descriptive information on taxpayers' mortgage interest deductions and mortgage interest payments reported on Form 1098, Mortgage Interest Statement. Appendix V provides this information. Consideration of statutory changes was beyond the scope of our report. Although IRS's enforcement and research programs found some mortgage interest deduction compliance problems, the methods leave gaps in what is known about the extent and specific nature of noncompliance. The four main programs that IRS uses to enforce or research mortgage interest deduction compliance include the following. (1) A computer matching program designed to identify taxpayers whose mortgage interest deduction exceeds amounts reported on Form 1098; (2) IRS's National Research Program (NRP), the main ongoing study of taxpayer compliance based on examinations of a random sample of tax returns; (3) Routine examinations done by correspondence, in an IRS office, or on site; and (4) Two special compliance initiative projects (CIP) primarily examining acquisition debt for a limited segment of individual taxpayers with high mortgage interest deductions or high adjusted gross incomes. The mortgage interest deduction rules create compliance problems for taxpayers, reflecting the deduction's complexity. The effects of the problems, however, are uneven. Although many taxpayers might encounter few problems, others could face many more. Problems cited by tax practitioners and in our review of articles on deducting home mortgage interest included the following: (1) Taxpayers need to distinguish between acquisition and home equity debt but did not always do so. (2) Taxpayers deducted interest on loans exceeding the limitations, including the acquisition, home equity, and two-home limits. (3) Taxpayers who were subject to the AMT and thus not eligible to deduct home equity interest claimed it nonetheless. (4) Tax practitioners also missed the limitations and did not comply with the AMT rules. (5) Depending on the circumstances, some taxpayers and practitioners faced extensive recordkeeping and calculations related to such matters as refinancing, the AMT, business use of the home, other uses of loan proceeds, and the periodic use and repayment of home equity lines of credit. (6) Taxpayers may have been unaware that they might need documentation on matters such as how proceeds of home equity loans are spent to properly determine the amount of the mortgage interest deduction on their tax returns. (7) Mortgage interest deduction limits based on debt amounts are not directly comparable with the information on Form 1098, which lists interest paid. If taxpayers' debts exceed the limits, taxpayers must calculate how much interest they can deduct. 
Additional information about taxpayers' mortgages could help IRS identify the most productive cases to examine and determine whether taxpayers are claiming the correct amount of mortgage interest deduction. IRS could obtain more helpful information about taxpayers' mortgages by expanding information collected on Form 1098. IRS officials said that in implementing certain additional reporting requirements, the agency would need to meet the terms of the Paperwork Reduction Act, which requires agencies to minimize the paperwork burden they impose on the public and maximize the practical utility of the information they collect. Taken as a whole, IRS taxpayer guidance--Schedule A and its instructions, Publication 17, Your Federal Income Tax, and Publication 936, Home Mortgage Interest Deduction--generally informed taxpayers that mortgage interest deductions are subject to limits. Even though the guidance was generally sufficient, Schedule A does not explicitly mention the limitations. IRS's examiners' guidance and training materials included information for identifying and calculating home-equity and the acquisition-debt limitations. Overall, examiners we interviewed were satisfied with training and guidance on the mortgage interest deduction. The three companies' tax preparation software for individuals that we analyzed differed from each other in how they treated the limitations on the amount of debt for which interest can be deducted. |
The estimated costs of planned airport capital development vary depending on which projects are included in the estimates. According to FAA’s estimate, which includes only projects that are eligible for Airport Improvement Program (AIP) grants, the total cost of airport development will be about $46 billion, or about $9 billion per year, for 2001 through 2005. FAA’s estimate is based on the agency’s National Plan of Integrated Airport Systems, which FAA published in August 2002. ACI’s estimate includes all of the projects in FAA’s estimate, plus other planned airport capital projects that may or may not be eligible for AIP grants. ACI estimates a total cost of almost $75 billion, or nearly $15 billion per year for 2002 through 2006. Projects that are eligible for AIP grants include runways, taxiways, and noise mitigation and noise reduction efforts; projects that are not eligible for AIP funding include parking garages, hangars, and expansions of commercial space in terminals. Both FAA’s and ACI’s estimates cover projects for every type of airport. As table 1 indicates, the estimates are identical for all but the large- and medium-hub airports, which are responsible for transporting about 90 percent of the traveling public. For these airports, ACI’s estimate of planned development costs is about twice as large as FAA’s. According to FAA’s analysis of the planned capital development for 2001 through 2005, airports will use 61 percent of the $46 billion for capacity enhancement, reconstruction, and modifications to bring airports up to the agency’s design standards and 39 percent to fund safety, security, environmental, and other projects. See figure 1. Neither ACI’s nor FAA’s estimate includes funding for the terminal modification projects that are needed to accommodate the new explosives detection systems required to screen checked baggage. ACI estimates that these projects will cost a total of about $3 billion to $5 billion over the next 5 years. A key reauthorization issue facing the Congress is how these terminal modification projects will be funded. In 2001, the Congress allowed FAA to use AIP funds to help pay for some new security projects; however, this use of AIP funds affected the amount of funding that was available for some development projects. Specifically, in fiscal year 2002, FAA used $561 million in AIP grant funds for security projects, or about 17 percent of the $3.3 billion available. The use of AIP grant funds for new security projects in fiscal year 2002 reduced the funding available for other airport development projects, such as projects to bring airports up to FAA’s design standards and reconstruction projects. The use of AIP grant funds for security also caused FAA to defer three letter-of-intent payments totaling $28 million to three airports until fiscal year 2003 or later. From 1999 through 2001, the 3,364 airports that make up the national airport system received an average of about $12 billion per year for planned capital development. The single largest source of these funds was bonds, followed by AIP grants and passenger facility charges. (See table 2.) It is important to note that the authorized AIP funding for fiscal years 2002 and 2003 totaled $3.3 billion and $3.4 billion, respectively. However, because data for funding from other sources were not available for these years, we used the figures from 1999 through 2001, the most recent years for which consistent data were available. The amount and type of funding vary depending on the airport’s size. 
For example, as shown in figure 2, the large- and medium-hub airports depend primarily on bonds, while the smaller airports rely principally on AIP grants. Passenger facility charges are a more important source of revenue for the large- and medium-hub airports because they have the majority of commercial-service passengers. If the funding for airport capital development remained at about $12 billion a year over the next 5 years, it would cover all of the projects in FAA's estimate. However, it would be about $3 billion less per year than ACI's estimate. Figure 3 compares the average annual funding airports received from 1999 through 2001 with FAA's and ACI's estimated annual planned development costs for 2001 through 2006. This difference is not an absolute predictor of future funding shortfalls; both funding and planned development may change in the future. However, it does provide a useful indication of where funding differences may be the greatest. In percentage terms, the difference between recent funding levels and ACI's estimate of planned capital development is somewhat greater for smaller airports than it is for large- and medium-hub airports. From 1999 through 2001, smaller airports received an average of about $2.4 billion a year for planned capital development while large- and medium-hub airports received an average of about $9.4 billion. If these funding levels continued, smaller airports would not be able to fund about 27 percent of their planned development, while large- and medium-hub airports would not be able to fund about 20 percent of their planned development. Figures 4 and 5 illustrate the differences between recent funding levels and the costs of planned capital development projected for smaller and for large- and medium-hub airports. The difference between past funding and planned development has declined over the past 5 years, and, at recent funding levels, airports would be able to fund a higher percentage of their planned capital development than they could fund in 1998. At that time, we reported that smaller airports could fund about 52 percent of their planned capital development, compared with about 73 percent today, an increase of 21 percentage points. We also reported that large- and medium-hub airports were able to fund about 80 percent of their development and are able to fund about the same percentage today. See figure 6. The primary reason why smaller airports can fund more of their planned capital development today than they could in 1998 is that AIR-21 increased both the total amount of funding for AIP grants and the proportion of AIP funding that went to smaller airports. Specifically, AIR-21 increased the funding for two AIP funds that primarily or exclusively benefit smaller airports—the state apportionment fund and the small airport fund—and it created general aviation entitlement grants, which also benefit smaller airports. As a result of these changes, smaller airports received almost 63 percent of the $2.4 billion in AIP grant funds that airports received each year, on average, from 1999 through 2001. Large- and medium-hub airports can also fund more of their planned development today than they could in 1998 primarily because they are able to issue more bonds and to charge a higher passenger facility fee. Options are available to increase airport funding or to make better use of the existing funding.
These options, some of which were authorized or implemented as part of AIR-21, include increasing the AIP grant funding for smaller airports, increasing passenger facility charges, creating a separate fund for new security projects, and using innovative financing approaches. The various options would benefit different types of airports to varying degrees. It is also important to note that even though the airlines may be experiencing financial problems, most large airports have very solid credit ratings and could, if necessary, issue more debt without facing exorbitant interest rates. To help address the difference between funding and planned development, AIR-21 provided that up to $150,000 a year in AIP grant funds be made available to all general aviation airports for up to 3 years for airfield capital projects, such as runways, taxiways, and airfield construction and maintenance projects. On February 11, 2003, we reported that since the program’s inception in fiscal year 2001, general aviation airports have received about $325 million, which they have used primarily to help build runways, purchase navigational aids, and maintain pavements and airfield lighting. Most of the state aviation officials and general aviation airport managers we surveyed said the grants were useful in meeting their needs, and some suggested that the $150,000 grant limit be increased so that general aviation airports could undertake larger projects. However, a number of state officials cautioned that an increase in the general aviation entitlement grant could cause a decrease in the state apportionment fund that states use to address their aviation priorities. Another option would be to increase or eliminate the cap on passenger facility charges. This option would primarily benefit larger airports, because passenger facility charges are a function of the volume of passenger traffic. However, under AIP, large- and medium-hub airports that collect passenger facility charges must forfeit a certain percentage of their AIP formula funds. These forfeited funds are subsequently divided between the small airport fund, which is to receive 87.5 percent, and the discretionary fund, which is to receive 12.5 percent. Thus, smaller airports would benefit indirectly from any increase in passenger facility charges. In our 1999 report on passenger facility charges, we estimated that a small increase in these charges would have a modest effect on passenger traffic. At that time, we estimated that each $1 increase would reduce passenger levels by about 0.5 to 1.8 percent, with a midrange estimate of 0.85 percent. Since AIR-21 raised the cap on passenger facility charges from $3.00 to $4.50, the full effect of the increase has not been realized because only 17 of the 31 large-hub airports (55 percent) and 11 of the 37 medium-hub airports (30 percent) have increased their rates to $4.50. Additionally, 3 large-hub airports and 6 medium-hub airports do not charge a passenger facility fee. The reluctance to raise passenger facility charges is likely the result of several factors, including the views of airlines, which are opposed to any increase in passenger facility charges because such an increase would raise passenger costs and reduce passenger traffic. Nonetheless, if all airports were to increase passenger facility charges to the current ceiling, additional revenue could be generated. Recently, the head of the Transportation Security Administration suggested setting up a separate fund for security projects. 
Such a fund might be comparable to AIP, which receives revenue from various aviation-related taxes through the Airport and Airway Trust Fund. Having a separate fund would be consistent with the recent separation of aviation safety and security responsibilities.

FAA has introduced other mechanisms to make better use of existing funding sources, the most successful of which has been letters of intent, a tool that has effectively leveraged private sources of funding. As noted, a letter of intent represents a nonbinding commitment from FAA to provide multiyear funding to an airport beyond the current AIP authorization period. Thus, the letter allows the airport to proceed with a project without waiting for a future AIP grant because the airport and investors know that allowable costs are likely to be reimbursed. A letter of intent may also enable an airport to receive a more favorable interest rate on bonds that are sold to refinance a project because the federal government has indicated its support for the project. FAA has issued 64 letters of intent with a total commitment of about $3 billion; large- and medium-hub airports account for the majority of the total.

Other approaches to making better use of existing funding resources were authorized under AIR-21. Specifically, the act authorized FAA to continue its innovative finance demonstration program, which is designed to test the ability of innovative financing approaches to make more efficient use of AIP funding. Under this program, FAA enabled airports to leverage additional funds or lower development costs by (1) permitting flexible local matching on some projects, (2) purchasing commercial bond insurance, (3) paying interest costs on debt, and (4) paying principal and interest debt service on terminal development costs incurred before the enactment of AIR-21. FAA has provided about $31 million for smaller airports to test these innovative uses of AIP funding. According to FAA officials, the results of the program have been mixed. The most popular option for airports has been flexible matching, which has resulted in several creative loan arrangements.

In conclusion, Mr. Chairman, the aviation industry and the national economy are still struggling to recover their health. Analysts nonetheless expect the demand for air travel to rebound, and the nation's aviation system must be ready to accommodate the projected growth safely and securely. As the Congress moves forward with reauthorizing FAA, it will have to decide on several key issues, including how it wants to consider the airports' estimate of $15 billion a year for planned capital development over the next 5 years, how terminal modification projects will be funded, and what priorities it wants to set, both for development and security. Sustaining recent funding levels would allow the majority of planned airport capital development to move forward, but it would not cover all of the airports' estimated costs, and it would not address the costly terminal modifications needed to accommodate explosives detection systems. Options such as additional AIP grant funds, increases in passenger facility charges, or the creation of a separate fund for new security projects could make more funding available for airport improvements. However, the growing competition for federal budget dollars and concerns about the impact of higher charges on airline ticket sales may limit the practicality of these options.
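To make the funding-gap arithmetic above concrete, the following sketch recomputes the aggregate shortfall from the rounded figures cited in this testimony. It is a minimal illustration, not FAA's or ACI's estimating model; the function and the printed output are hypothetical conveniences.

```python
def funding_gap(recent_annual_funding, planned_annual_cost):
    """Return the share of planned development that recent funding would cover
    and the implied annual shortfall (both inputs in billions of dollars)."""
    shortfall = max(planned_annual_cost - recent_annual_funding, 0.0)
    funded_share = min(recent_annual_funding / planned_annual_cost, 1.0)
    return funded_share, shortfall

# Aggregate figures cited above: roughly $12 billion a year in recent funding
# versus ACI's estimate of about $15 billion a year in planned development.
share, gap = funding_gap(12.0, 15.0)
print(f"Funded share: {share:.0%}, annual shortfall: ${gap:.0f} billion")
# Funded share: 80%, annual shortfall: $3 billion
```

Applied to each airport group's own recent funding and planned development, the same calculation yields the unfunded shares cited above, about 27 percent for smaller airports and about 20 percent for large- and medium-hub airports.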
To determine how much planned development would cost over the next 5 years, we obtained planned development data from FAA and ACI. ACI provided its estimate to us in January 2003, and we are still analyzing the data on which the estimate is based. To determine the sources of airport funding, we obtained capital funding data from FAA, the National Association of State Aviation Officials, Thomson Financial, and our survey of 400 general aviation and reliever airports. We obtained funding data from 1999 through 2001 because these were the most recent years for which consistent data were available. We screened the planned development and funding data for accuracy and compared funding streams across databases where possible. We also clarified ambiguous development or funding source information directly with airports. We did not, however, audit how the databases were compiled, except for our own survey. However, we have not finished analyzing the results of our survey, and the results presented in this testimony are still preliminary. We have performed our ongoing work from May 2002 through February 2003 in accordance with generally accepted government auditing standards. This concludes my statement. I would be pleased to answer any questions that you or other members of the Subcommittee might have.

Since Congress enacted the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21) 3 years ago, much has changed. At that time, the focus was on reducing congestion and flight delays. Today, flights are being canceled for lack of business, two major air carriers are in bankruptcy, and attention has shifted from increasing the capacity of the national airspace system to enhancing aviation security. Furthermore, as the federal budget deficit has increased, competition for federal resources has intensified, and the costs of airport capital development are growing, especially with the new requirements for security. Nonetheless, analysts expect the demand for air traffic services to rebound. Until that time, the unexpected slump in air traffic creates a window of opportunity to improve the safety and efficiency of the national airport system.

Although there is general consensus among stakeholders that maintaining the integrity of the national airport system requires continual capital investment, estimates vary as to the type and cost of planned airport capital development required to ensure a safe and efficient system. For 2001 through 2005, the Federal Aviation Administration (FAA) has estimated annual planned capital development costs of about $9 billion, while the Airport Council International (ACI), a key organization representing the airport industry, has estimated annual costs of about $15 billion for 2002 through 2006. The estimates differ primarily because FAA's includes only projects that are eligible for federal funding, whereas ACI's includes projects that may or may not be eligible for federal funding. Neither FAA's nor ACI's estimate covers the airport terminal modifications needed to accommodate the new explosives detection systems required to screen checked baggage. According to ACI, the total cost of these modifications could be $3 billion to $5 billion over the next 5 years. From 1999 through 2001, airports received an average of about $12 billion a year for planned capital development.
The primary source of this funding was bonds, which accounted for almost $7 billion, followed by federal grants and passenger facility charges, which accounted for $2.4 billion and $1.6 billion, respectively. The amounts and types of funding also varied by airport type. Of the $12 billion, large- and medium-hub airports received over $9 billion, and smaller airports received over $2 billion. If airports continue to receive about $12 billion a year for planned capital development, they would be able to fund all of the projects included in FAA's estimate, but they would not be able to fund about $3 billion in planned development estimated by ACI. While this projected shortfall could change with revisions in future funding, planned development, or both, it nevertheless indicates where funding differences may be the greatest. Options are available to increase or make better use of the funding for airport development, and these options would benefit different types of airports to varying degrees. For example, raising the current cap on passenger facility charges would primarily benefit larger airports, while increasing or redistributing Airport Improvement Program grant funds would be more likely to help smaller airports.
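The passenger facility charge elasticity estimate cited earlier, a roughly 0.85 percent midrange reduction in passenger levels for each $1 increase in the charge, implies a simple revenue trade-off. The sketch below is illustrative only: the airport and its 10 million annual charged enplanements are hypothetical, and the assumption that the traffic effect scales linearly with the size of the increase is a simplification.

```python
def pfc_revenue(passengers, fee_per_passenger):
    """Annual passenger facility charge (PFC) revenue, assuming every
    enplaned passenger in the count pays the full charge (a simplification)."""
    return passengers * fee_per_passenger

def passengers_after_increase(passengers, fee_increase, loss_per_dollar=0.0085):
    """Apply the midrange estimate that each $1 of PFC increase reduces
    passenger levels by about 0.85 percent, scaled linearly (an assumption)."""
    return passengers * (1 - loss_per_dollar * fee_increase)

base_passengers = 10_000_000          # hypothetical airport
old_fee, new_fee = 3.00, 4.50         # pre- and post-AIR-21 caps
remaining = passengers_after_increase(base_passengers, new_fee - old_fee)
print(f"Revenue at $3.00 cap: ${pfc_revenue(base_passengers, old_fee):,.0f}")
print(f"Revenue at $4.50 cap: ${pfc_revenue(remaining, new_fee):,.0f}")
# Revenue at $3.00 cap: $30,000,000
# Revenue at $4.50 cap: $44,426,250
```

Even with the modest traffic loss, the higher charge yields substantially more revenue in this illustration, which is consistent with the observation that raising the cap would primarily benefit larger airports.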
In March 2002, we reported that SEC's workload and staffing imbalances had challenged SEC's ability to protect investors and maintain the integrity of securities markets. Appendix I graphically depicts SEC's workload and staffing imbalance from 1990 through 2000 as reported in our 2002 report, and appendix II updates this graphic using SEC budget documents, including its 2003 and 2004 workload and staffing estimates. As reported in March 2002, we found that SEC generally managed to bridge the gap between its workload and staff by determining which of its statutorily mandated duties it could accomplish with existing resources or only marginally increased resource levels. This approach, while practical, forced SEC to be largely reactive rather than proactive. We also reported that SEC tended to develop its annual budget request based on the previous year's appropriation rather than on what it would actually need to fulfill its mission. In 2003, this practice resulted in a modest increase over the previous year's request.

But several high-profile corporate failures and accounting scandals, plus concerns that public companies should be held more accountable for information they report to investors, led Congress to pass the Sarbanes-Oxley Act of 2002 (Sarbanes-Oxley Act). The act addresses a number of concerns involving corporate governance, auditor independence, regulation and oversight of the accounting profession, and SEC's resource limitations. In part because of the level authorized in the Sarbanes-Oxley Act, SEC increased its initial 2003 budget request from $466 million to $769 million. Ultimately, Congress appropriated $716 million. For 2004, SEC requested a budget of almost $842 million, reflecting a supplemental carryover, annualization of new 2003 positions, inflation (pay and nonpay), and merit pay increases, less one-time 2003 information technology costs.

SEC's planned allocations appear to be consistent with the Sarbanes-Oxley Act, which mandated that the $776 million authorization be used to: fund pay parity, allowing SEC to set salaries for certain staff positions at levels comparable to those at other federal financial regulators; fund information technology, security enhancements, and recovery and mitigation activities in light of the terrorist attacks of September 11, 2001; and fund no fewer than 200 additional professional staff to increase oversight of auditors and audit services in order to improve SEC's investigative and disciplinary efforts, as well as additional professional support staff necessary to strengthen existing program areas. SEC's allocations were also apparently influenced by its internal review of operations and resource needs and by justifications made by each division and office. SEC determined that most of the planned increase would be used to hire an additional 842 staff, primarily accountants, attorneys, and examiners, and to upgrade its technological resources over the next few years.

Table 1 provides information on SEC's staff allocation as of July 1, 2003, by program area. The 2002 numbers include 125 new positions that were authorized by a supplemental appropriation to SEC's 2002 budget to deal with the increasing workload from financial fraud and reporting cases, to improve and expedite the review of periodic filings, and to deal with new programmatic needs and policy.
According to an SEC official, the current and proposed budgets factor in the increased workload resulting from SEC's new responsibilities under various new laws, including the Sarbanes-Oxley Act, Gramm-Leach-Bliley Act, and Commodity Futures Modernization Act. For example, between 2002 and 2004, the full disclosure program is slated to receive the largest percentage increase in positions (39 percent). This program includes the Division of Corporation Finance and the Office of the Chief Accountant, which are responsible for reviewing the financial statement filings for over 17,000 reporting public companies and providing rule-making and interpretive advice. In this area, staffing is driven in part by the Sarbanes-Oxley Act, which requires SEC to review the financial statements of each reporting company every 3 years. In 2002, SEC's average review rate translated into a review of each company about once every 6 years. The area slated to receive the next largest percentage increase (35 percent) is the supervision and regulation of securities markets. This program includes the Division of Market Regulation and part of the Office of Compliance, Inspections and Examinations and is responsible for establishing and maintaining policies for fair, orderly, and efficient markets and conducting examinations and inspections of 9 registered securities exchanges and an estimated 8,000 brokerage firms, among others. The prevention and suppression of fraud program, which includes the Division of Enforcement, is slated to receive a 21 percent increase, which SEC said would help with the increasing number of investigations into possible violations of securities laws.

SEC's staff allocations appear consistent with legislative requirements and what is currently known about its operating environment. However, because SEC's staff positions were allocated without the benefit of a strategic plan, we are unable to fully assess the appropriateness or effectiveness of this use of its budget increase. Given that staff salaries and benefits average about 70 percent of SEC's budget, we would expect the spending allocations to roughly correlate to its staffing allocations. However, SEC was unable to provide us with information to analyze its budgetary allocation across each program area. At the time of this study, SEC was in the process of completing its 2005 budget request for OMB, which will include its allocation of its budgetary resources for its 2004 budget estimate by program area. SEC expects to have these estimates completed sometime in late August or early September.

In 2002, we reported on the difficulty SEC faced in hiring accountants for the 125 positions authorized by its 2002 supplemental appropriation. SEC had identified the existing competitive service hiring requirements as hampering its ability to fill these and other positions because of the length of time involved. SEC subsequently asked for and received relief from competitive hiring requirements under the Accountant, Compliance and Enforcement Staffing Act of 2003, which was enacted in July 2003. This new legislation is designed to enable SEC to expedite the hiring of accountants, economists, and examiners so that the agency can more quickly fill the 842 positions created. As of July 1, 2003, SEC has filled only a few of the vacancies for the allocated positions but is now better positioned to hire under its new authority. It is too soon to determine whether this new authority will enable SEC to quickly fill the hundreds of vacancies it needs to fill by the end of 2004.
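The review-cycle requirement noted above implies a substantial increase in annual filing reviews. The back-of-the-envelope sketch below uses only the figures cited in this testimony; it is an illustration, not SEC's workload model.

```python
def annual_reviews(reporting_companies, review_cycle_years):
    """Approximate number of company reviews per year implied by a review cycle."""
    return reporting_companies / review_cycle_years

companies = 17_000                        # "over 17,000" reporting public companies
mandated = annual_reviews(companies, 3)   # Sarbanes-Oxley Act: review every 3 years
pace_2002 = annual_reviews(companies, 6)  # 2002 pace: roughly once every 6 years
print(f"Reviews needed per year: ~{mandated:,.0f} (vs. ~{pace_2002:,.0f} at the 2002 pace)")
# Reviews needed per year: ~5,667 (vs. ~2,833 at the 2002 pace)
```

Roughly doubling the annual review workload helps explain why the full disclosure program is slated for the largest staffing increase.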
Information technology was another area identified in our 2002 report as having funding gaps that had contributed to existing inefficiencies. Like the rest of the government, SEC's needs in the area of information technology continue to increase, and SEC staff must have the necessary tools to successfully meet the agency's increasing demands. SEC maintains a list of technology improvement projects that have not been funded due to budgetary constraints, which SEC officials said include applications to improve the manipulation and connectivity of various SEC data systems and computerized reports.

The budget increase has allowed SEC to begin improving its information technology capabilities. SEC's Office of Information Technology, which supports the agency's information systems and computer users, received an increase in its 2003 operating budget of more than 100 percent, from around $44 million to $100 million. Our understanding is that SEC plans to undertake a few small projects each year, such as system upgrades and software purchases, to enhance its systems and will implement larger long-term projects over time.

SEC began developing an enterprise architecture, a strategic approach to information technology planning, in 2001. This architecture is designed to allow SEC to fund and develop information technology initiatives based on agencywide needs by strategically identifying and organizing technology projects. In 2002, SEC continued to develop its enterprise architecture in order to identify and document relationships between agency business functions and supporting technologies. SEC management also began incorporating the enterprise architecture into its information technology capital planning process. Although most of SEC's long-term projects are in the developmental stages, we are cautiously optimistic that, if properly implemented, they can improve SEC's operational efficiencies. Some of these longer-term projects include the following:

Converting SEC's Electronic Data Gathering Analysis and Retrieval (EDGAR) system into a searchable database that would help SEC conduct various types of industry and trend analyses. EDGAR is the database system that public companies use to file registration statements, periodic reports, and other forms electronically. Currently, EDGAR receives and archives data, but staff cannot immediately and easily analyze it. The goal is to create filings that will allow anyone to extract relevant data.

Implementing a document management and imaging initiative, intended to eventually eliminate paper documents and allow SEC staff to review and electronically file the large volumes of information that are part of litigation, examination, and enforcement activities. Staff told us that the planned system will provide an agencywide electronic capture, search, and retrieval mechanism for all investigative and examination materials.

Implementing a disaster recovery program that is being designed to store and move large amounts of data among regional or district offices without first going through Washington, D.C. The current project, when completed, will allow the agency to back up critical information and data on a daily basis at multiple locations.

In 2002, we found that SEC had not engaged in a comprehensive agencywide strategic planning process, and little has changed in this regard in 2003.
As we have previously reported, high-performing organizations identify their current and future human capital needs—including the appropriate number of employees, the key competencies needed, and plans for deploying staff across the organization—and then create strategies to fill any gaps. Given SEC's role in the securities industry's self-regulatory structure, a critical element of SEC's strategic planning process is an evaluation of the external environment in which the agency operates. SEC's budget increase has heightened the need for strategic planning and the significance of the process, as SEC's spending plan will have to withstand considerable scrutiny. SEC's lack of a current strategic plan may also affect other aspects of SEC's operations, as strategic plans are the starting point for each agency's performance measurement efforts and should provide the basis for strategic human capital planning.

In 2002, SEC took a critical step toward developing a strategic plan when it conducted an internal study of SEC's current operations, workload, resource allocations, methods for assigning and managing work, and measures of performance, productivity, and quality of effort. The study, which was facilitated by a consulting firm (McKinsey & Company) and includes discussions of staffing and resource allocation issues, appears to have been a factor in SEC's allocation of many of the 842 new positions. But this confidential study has not been widely distributed within SEC, and it is unclear whether it will be in the near future. This study serves as a useful framework for SEC as it begins developing a dynamic, comprehensive strategic plan that will better enable it to identify its mission and staffing needs. More immediately, such an effort is vital as SEC determines how best to use its additional resources.

We acknowledge that over the past year and a half, SEC has had to deal with a considerable amount of change, which has limited its ability to focus on a new strategic plan. SEC has had to acclimate itself to two new chairmen and adjust to new management teams, manage a 45 percent budget increase, negotiate its first agreement with its newly organized union, implement and manage a new fee rate structure, prepare for its first financial statement audit, and respond to dozens of new requirements under the Sarbanes-Oxley Act. However, since SEC issued its existing plan in September 2000, the financial world has changed significantly. Although SEC's Government Performance and Results Act (GPRA) annual reports attempt to provide a tactical focus, a new long-range planning effort is long overdue. As stated in SEC's 2000 plan, "Our strategic plan is a living document, one that must be continually reexamined and modified to assure it remains responsive and relevant in an ever-changing environment."

In addition to the changing external environment, a number of internal processes and organizational efforts within SEC hinge on SEC completing a new strategic plan, including developing more outcome-oriented performance measures to gauge the effectiveness of its regulatory operations in fulfilling its statutory mission and formalizing its strategic human capital plan. Rather than measuring outputs, SEC is working to develop measures for how effectively its actions achieve its goals and fulfill its mission. SEC is also beginning to take steps that will improve its ability to leverage its technological capabilities.
Consistent with the findings in our March 2002 report, SEC's subsequent GPRA 2002 annual performance report continued to use measures of outputs rather than outcomes. For example, under the goal of protecting investors by improving public awareness and educating investors, SEC tracks the number of investor education events organized by senior Commission staff in a given year. Within the goal of maintaining fair, honest, and efficient markets, SEC uses the number of self-regulatory organization rule changes reviewed as a measure of performance. As we reported, performance measures can help provide the detailed information SEC needs to make informed workforce decisions, including (1) the relationship between its budget request for full-time equivalent staff years and the agency's plans and ability to meet individual strategic goals and (2) any excesses or shortages in needed competencies.

In late June, SEC began to take steps to transform its annual plan into a management tool aimed at helping SEC move to a more outcome-oriented approach to measuring the performance of its regulatory activities—an important part of strategic planning. To achieve this end, each program area is to develop a "performance dashboard"—a collection of key performance measures that will allow each program area manager to track performance. This movement to a performance dashboard also involves managing the budget at the program level, with each division head being held accountable for managing the division's budgetary resources. While this outcome-oriented approach is promising, we are concerned that SEC is developing new performance measures before it has completed or even started its new agencywide strategic plan. By identifying performance measures before it develops a new strategic plan, SEC runs the risk of having to redo any measures that are inconsistent with its newly defined strategic vision or allowing the existing measures to constrain its planning so that the new plan is consistent with them. We see this approach as analogous to a commuter rail company exploring the most efficient way to expand rail service to a new location before deciding whether that location is the best place for the new line.

We are also reviewing the status of SEC's strategic human capital planning. As you may recall, in our September 2001 report, we examined SEC's strategies for managing its human capital and found that its human capital practices were driven by its need to confront its growing staffing crisis. This crisis was evidenced in a turnover rate that was almost twice the government average for attorneys, accountants, and examiners; hundreds of vacant positions; and an average tenure for examiners and attorneys that had fallen below 3 years. We found that to counter its compensation challenge, SEC—more than the rest of the government—was aggressively using special pay rates and retention allowances to improve staff compensation. However, such actions were not stemming its turnover problems. We also identified a number of nonpay issues that threatened to impair SEC's ability to carry out its mission and thus warranted SEC management's attention.

As we have reported, strategic planning is a key part of human capital management. Strategic human capital planning focuses on developing long-term strategies for acquiring, developing, and retaining an organization's employees and for implementing human capital approaches that are clearly linked to achieving programmatic goals.
In our 2001 human capital report, we found that SEC had begun to take key steps toward developing a strategic human capital plan but lacked adequate succession planning because of its high turnover rate. Moreover, we found that SEC had not articulated the details of its plans for carrying out its recruiting and retention efforts. SEC also lacked any formal mechanism to evaluate the effectiveness of its recruiting efforts and ways to gauge the effectiveness of its worklife programs. We also found that SEC had not created a culture that ensured ongoing attention to human capital issues, that human capital management was still focused on traditional personnel functions, and that it was not a priority for senior management in decisionmaking. We made a number of recommendations to SEC aimed at improving its human capital management, including a recommendation that it expand its annual performance plan into a comprehensive human capital plan that includes all program areas.

We are looking into SEC's progress in the areas identified above. However, we have found that SEC has not yet developed a formal strategic human capital plan that articulates how it intends to align its human capital approaches with its organizational goals. While it has yet to do this, we have found that SEC continues to take important steps to improve its strategic human capital management. First, as previously discussed, SEC has taken steps to improve its recruiting/hiring process. Second, SEC has begun to take steps to develop its people and has announced plans for an agencywide training program. One key training component that is currently in the early stages of development is targeted training for supervisors, an area identified in our 2001 human capital report as warranting management's attention. However, it is too soon to determine the effectiveness of this new training effort. Third, SEC has taken actions to retain its human capital and address its staffing crisis. Most significantly, SEC has negotiated an agreement with the union, which outlines a uniform approach to various worklife programs, such as flextime, flexiplace, and tuition reimbursement, among others, and has standardized several of these human capital policies. Historically, many of these programs have varied by division and office. SEC has just begun to review the use and effectiveness of these programs; therefore, it is too soon to determine what effect, if any, they will have on employee retention and morale.

In our 2001 report, we found that the single largest retention issue among attorneys, accountants, and examiners involved compensation. To enhance SEC's ability to adequately compensate its employees, Congress enacted legislation that allows SEC to create a new pay system. In May 2002, acting on its new compensation authority, SEC implemented a new system, which established a pay structure more comparable with that of other federal financial regulators. This new pay structure increased base pay for attorneys, accountants, and examiners to levels similar to those of other federal financial services regulators. More specifically, the new pay structure consists of 20 grade levels, some with up to 31 steps. The new system has also provided additional compensation based on performance and has established new pay categories to compensate staff in supervisory positions.
In conjunction with this new merit-based compensation system, SEC has also implemented a new performance management system, which is also an important part of the human capital planning process. Since our 2001 human capital report, we have found that at least one symptom of SEC's staffing crisis has improved. SEC's turnover rate for attorneys, accountants, and examiners decreased from 9 percent in 2001 to 6 percent on average in 2002, which in part may be attributed to pay parity. To date, SEC reports that its average turnover rate is about 4 percent. However, the declining turnover rate may also reflect the state of the economy and resulting changes in the job market.

SEC's dynamic regulatory environment and tumultuous past year have made focusing on a strategic direction and vision for the agency difficult. Moreover, because SEC operated under its 2002 allocation for five months of the year and had difficulty hiring needed expertise, it has been unable to fully implement its 2003 spending plan. Although SEC has begun to take a number of important steps aimed at addressing its operational and human capital challenges, additional work is needed to ensure that it has appropriately positioned itself to operate more efficiently and effectively in the 21st century.

First, it is critical that SEC complete its strategic planning effort, which includes the systematic reevaluation of all of its current approaches, efforts, goals, and activities in light of its current regulatory environment. An important part of any such effort would include working with the industry to ensure that SEC has accurately established priorities that reflect the current environment. For example, SEC would benefit from reevaluating its existing rules, regulations, and regulatory approaches to ensure that they continue to reflect the realities of today's financial markets and are consistent with the mission and goals established by SEC. Second, a critical step involves identifying ways to leverage existing resources, whether through better technology or improved regulatory processes. For example, SEC needs to fully fund and follow through on technology initiatives that offer the greatest opportunities to increase its effectiveness. SEC's technology evolution could perhaps be one of the most important factors in improving the efficiency of its operations and will likely require a sustained and ongoing resource commitment. SEC could also reevaluate its historical focus in areas such as small businesses and initial public offerings to ensure that it continues to meet the needs of the securities markets. Finally, aligning SEC's human capital with its strategic plan is an important part of strategic human capital planning. To date, SEC has taken important steps aimed at establishing a coordinated human capital management approach but still lacks a formal plan.

Thank you for your attention to SEC's operations and planning processes. The leadership this subcommittee has shown by holding this hearing should help maintain the momentum needed for change at SEC. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or other members of the subcommittee may have at this time. For further information regarding this testimony, please contact Orice M. Williams at (202) 512-8678. Individuals making key contributions to this testimony include Toayoa Aldridge, Joe E. Hunter, Jose Martinez-Fabre, and David Tarosky. This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In February 2003, the Securities and Exchange Commission (SEC) received the largest budget increase in the history of the agency. The increased funding was designed to better position SEC to address serious issues identified in the Sarbanes-Oxley Act and to better enable SEC to address numerous operational and human capital management challenges discussed in the GAO report entitled SEC Operations: Increased Workload Creates Challenges (GAO-02-302). To help ensure that SEC spends its budgetary resources in an efficient and effective manner, GAO was asked to review SEC's efforts to address the issues raised in the 2002 report and to report on how SEC intends to utilize its new budgetary resources. GAO's final report on these matters is expected to be completed this fall. This testimony provides requested information on the status of SEC's current spending plan and preliminary observations on SEC's strategic and human capital planning efforts.

In GAO's 2002 operations report, GAO identified a number of operational challenges facing SEC stemming from an increasing workload (e.g., filings, applications, and examinations) and staffing imbalances that threatened to impair SEC's ability to fulfill its mission. SEC's workload had grown at a much higher rate than its staffing since the mid-1990s. In response to congressional concerns involving a number of high-profile corporate failures and accounting scandals, SEC's funding was increased 45 percent in 2003. SEC plans to spend most of its 2003 and 2004 budget increases to fund 842 new staff positions and double its information technology budget. However, given the late appropriation and hiring challenges, SEC has to date filled few of these positions, and it is unlikely that SEC will be able to utilize all of its 2003 funds. GAO also found that SEC recognizes the need to develop a new strategic plan and that such a plan is a vital component of its staff allocation and human capital planning processes. A new strategic plan is also vital to SEC's ability to develop outcome-based performance measures. GAO found that while SEC has not updated its strategic plan, it has begun efforts to overhaul its performance measures to make them more outcome-oriented. This effort seems premature given its lack of a new strategic plan. Moreover, while GAO found that SEC has completed certain aspects of a strategic human capital plan, including development of a new pay structure comparable to those of other federal financial regulators, greater flexibility to expedite the hiring of staff in certain critically needed professions, plans for more training, and implementation of agencywide worklife programs, the lack of a new strategic plan inhibits SEC's ability to develop a formal human capital plan.
The federal government uses grants—forms of federal financial assistance to a state, a local government, or a nongovernmental recipient authorized to receive U.S. government assistance (e.g., charitable or educational institutions) for a specified public purpose—to achieve national objectives and respond to emerging trends, such as changing demographics and changing threats to homeland security. Federal grant programs are diverse and can vary in numerous ways, including type, size, nature of recipients, and types of programs they fund. Grant programs also vary in the amount of discretion they give to the recipient in how the funds will be used and the way the funds are allocated or awarded. The types of grants awarded by the agencies in our review include the following:

Block grants allow recipients, predominately states, to fund a broad range of activities within more general policy areas, such as community development or law enforcement.

Formula grants are awarded to all eligible grantees based on a statutory allocation formula, which may be based on a number of variables, including population, poverty rate in a given area, or tax effort. The grants are typically awarded to states, which often pass funds through to eligible local government agencies and nonprofit organizations.

Project grants are generally awarded on a competitive basis and provide funding for fixed or known periods for specific projects or the delivery of specific services or products. Project grants can include fellowships, scholarships, research grants, training grants, traineeships, experimental and demonstration grants, evaluation grants, planning grants, technical assistance grants, survey grants, and construction grants.

These categories are not mutually exclusive and can overlap when used to define grant programs. For example, HUD awards Community Development Block Grant funds to states using a formula, and then the states distribute the funds to localities, sometimes as project grants. Some agencies also use cooperative agreements, another form of financial assistance similar to grants, but one in which the federal agency is more involved with the recipient in implementing the financial assistance program. The distinguishing factor between a grant and a cooperative agreement is the degree of federal participation or involvement during the performance of the work activities. Most of the program offices included in our review oversee and monitor large block and formula grant programs, while others award and manage numerous project or other smaller grants, as described in table 1.

At all stages of the grants life cycle, it is essential that effective internal control systems are in place. Internal control represents an agency's plans, methods, and procedures used to meet its mission, goals, and objectives and serves as the first line of defense in safeguarding assets and preventing and detecting errors, fraud, waste, abuse, and mismanagement. Effective oversight of internal controls for grant programs is important to provide reasonable assurance to taxpayers that grants are awarded properly, recipients are eligible, and federal funds are used as intended and in accordance with applicable laws and regulations. The term reasonable assurance is important because, no matter how well-designed and well-operated, internal control cannot provide absolute assurance that agency objectives will be met. Weighing costs against benefits is also an important consideration in designing internal control.
Internal control is very broad and encompasses all controls within an agency, covering an agency’s entire mission and operations, not just financial operations. Figure 1 shows examples of agencies’ control activities in each phase of the grants life cycle. Congress passed the Federal Managers’ Financial Integrity Act (FMFIA) to help improve the management of federal government programs and strengthen agency internal control systems by requiring ongoing evaluations and reports on the adequacy of executive agency internal control systems. FMFIA, as implemented by GAO standards and OMB guidance, requires agency managers to establish internal control systems that provide reasonable assurance regarding the agency’s proper use of funds and resources, compliance with statutes and regulations, and preparation of reliable financial reports. In accordance with FMFIA, OMB has established guidance for agencies for evaluating and reporting on their internal controls. This guidance is now contained in OMB Circular No. A-123, Management’s Responsibility for Internal Control. FMFIA also requires GAO to issue internal control standards for the federal government, which are now contained in Standards for Internal Control in the Federal Government. These standards provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. The standards also provide that for internal control to be effective, an agency’s management should establish both a supportive overall control environment and specific control activities directed at carrying out the agency’s objectives. OMB Circular No. A-123 states that management is responsible for establishing and maintaining internal control to achieve the objectives of effective and efficient operations, reliable financial reporting, and compliance with applicable laws and regulations. OMB Circular No. A-123 provides guidance to federal managers on improving the accountability and effectiveness of federal programs and operations by establishing, assessing, correcting, and reporting on internal control. OMB Circular No. A-123 requires federal agencies and managers to take systematic and proactive measures to, among other things, assess the adequacy of internal control in federal programs and operations and report annually on internal control through management assurance statements. To assess the adequacy of internal controls, federal agencies conduct an annual assessment that includes testing key controls, identifying needed improvements, and developing and implementing corrective action plans. As part of this assessment, OMB Circular No. A-123 states that management should consider a variety of internal and external sources, including management knowledge and internal program reviews, financial statement audit reports, single audit reports, and improper payment reviews and reports. Further, OMB Circular No. A-123 provides that the agency head should consider input from senior program officials and the OIG. OMB Circular No. A-123 requires management to annually provide assurances about the adequacy and effectiveness of its internal control in a management assurance statement. 
This statement of assurance, which is included in an agency's performance and accountability report or agency financial report, represents the agency head's informed judgment as to the overall adequacy and effectiveness of internal control within the agency based on management's assessment as described above, including the consideration of outside sources such as the results of financial statement audits. Management will give an unqualified statement of assurance when there are no material weaknesses to report and will give a qualified statement of assurance when there are material weaknesses related to internal control.

Federal agencies, including the five in our review, use other mechanisms and processes, in addition to OMB Circular No. A-123, to oversee and monitor their grants internal controls. The single audit—an audit of federal awards, including grant funds, administered by state and local governments and nonprofit organizations—is intended to be an important mechanism used by federal agencies to ensure accountability for federal funds. Single audit reports provide information about the validity of grant expenditures, adequacy of internal controls over federal funds, compliance with grant rules and regulations, and questioned amounts. Grant-making agencies use single audit reports to monitor grantees' compliance with program and financial requirements. The agencies also use other mechanisms to monitor their grant programs, for example, creating annual grant performance plans, performing eligibility determinations, reviewing grantee financial reports, and conducting management control reviews.

Congress passed three successive acts from 2002 to 2013 aimed at eliminating improper payments. The Improper Payments Information Act of 2002 (IPIA) required executive branch agencies to annually review all programs and activities to identify those that are susceptible to significant improper payments, to estimate the annual amount of improper payments for such programs and activities, and to report these estimates along with actions taken to reduce improper payments for programs with estimates that exceed $10 million. This act was subsequently amended by the Improper Payments Elimination and Recovery Act of 2010 (IPERA), which expanded on the previous requirements for identifying, estimating, and reporting on programs and activities susceptible to significant improper payments and for recovering overpayments across a broad range of federal programs. On January 10, 2013, the Improper Payments Elimination and Recovery Improvement Act of 2012 was enacted to impose further requirements to identify, prevent, and recover payment errors. IPERA requires that OIGs determine whether their agencies are in compliance with the criteria listed in IPERA and annually report the results to the head of the agency, among others. OMB also plays a key role in the oversight of the government-wide improper payments issue. OMB has established guidance in Appendix C of OMB Circular No. A-123 to implement the requirements of IPERA.
IPERA and the OMB guidance require an OIG to use the following criteria in making a compliance determination on whether the agency has published the agency financial report and OMB-required accompanying information on the agency’s website, conducted a risk assessment for specific programs to determine whether they are susceptible to significant improper payments, published improper payment estimates for all high-risk programs, reported on actions to reduce improper payments, published and met targets for reducing improper payments, reported a gross improper payment rate of less than 10 percent for each high-risk program, and reported efforts to recapture improper payments. For the period within the scope of this report, IPERA defined significant improper payments as those in any particular program or activity that exceed both 2.5 percent of program or activity payments and $10 million annually or that exceed $100 million. For each program identified as susceptible and determined to be at risk of improper payments, agencies are required to report to the President and Congress the annual amount of estimated improper payments, along with steps taken and actions planned to reduce them. Agencies report this information in their annual performance and accountability report or agency financial report. The processes used by all five agencies to conduct their internal control assessments for fiscal year 2012 were consistent with the requirements in OMB Circular No. A-123. The agencies identified areas of risk in which to implement key controls and then monitored and tested those controls. The agencies identified deficiencies through the control tests, prepared and implemented corrective action plans to address the deficiencies identified, and reported on their internal control through management assurance statements. OMB Circular No. A-123 requires that agencies identify significant areas within their operations in which to implement key controls and then continuously monitor and test those controls. All five agencies conducted a risk assessment to identify significant processes or programs to include in their internal control assessment processes for fiscal year 2012. Based on these risk assessments, as shown in table 2, each agency selected its grants management process (or parts of its grants management process) or it selected grant programs to test controls as part of its fiscal year 2012 OMB Circular No. A-123 internal control assessment process. OMB Circular No. A-123 does not specify a recommended approach for evaluating or testing controls for processes or programs. When conducting these risk assessments, the agencies considered the effect that various factors could have on the risk that program objectives may not be met or a misstatement may be made in the financial statements. OMB Circular No. A-123 does not include requirements regarding specific factors to consider when determining which controls to test, and the agencies differed in the number of and specific risk factors they considered, as shown in table 3. Management at all of the agencies approved the selection of processes or programs to be included in their agencies’ internal control assessments. OMB Circular No. A-123 requires agencies to identify needed improvements in controls and take corresponding corrective actions. It also notes that agency managers and staff should be encouraged to identify control deficiencies, as this reflects positively on the agency’s commitment to recognizing and addressing management problems. 
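The IPERA significance thresholds described above reduce to a simple test. The sketch below illustrates that rule as stated in this report; the function and the example figures are hypothetical and are not drawn from any agency's data.

```python
def is_significant(total_payments, improper_payments):
    """IPERA definition used for the period covered by this report: improper
    payments are significant if they exceed both 2.5 percent of program payments
    and $10 million annually, or if they exceed $100 million regardless of rate."""
    exceeds_rate_and_floor = (improper_payments > 0.025 * total_payments
                              and improper_payments > 10_000_000)
    exceeds_absolute = improper_payments > 100_000_000
    return exceeds_rate_and_floor or exceeds_absolute

# Hypothetical program: $2 billion in payments with $60 million estimated improper.
print(is_significant(2_000_000_000, 60_000_000))  # True: 3 percent and above $10 million
print(is_significant(2_000_000_000, 40_000_000))  # False: 2 percent, below $100 million
```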
Further, Standards for Internal Control in the Federal Government states that findings of audits and other reviews should be promptly resolved. The agencies identified deficiencies in certain grants internal controls as a result of their fiscal year 2012 OMB Circular No. A-123 assessment processes. When agencies identified control deficiencies, they prepared corrective action plans that included steps to be taken, responsible parties, and timeframes for completion. We reviewed these plans and determined that if implemented effectively, the steps to be taken should correct the deficiencies. For example: HHS’s Administration for Children and Families tested controls for the grants management process and identified deficiencies that contributed to delays in receiving grantee financial reports and resolving single audit findings. The Administration for Children and Families developed and implemented corrective actions, including following up with and providing assistance to grantees to improve the timeliness of periodic reporting and taking steps to ensure proper approvals and signatures on documents related to addressing single audit findings. HHS’s National Institutes of Health tested controls for the grants management process and found deficiencies in following up with grantees regarding their late filing of financial reports during the grant closeout phase and resolution of single audit findings. The National Institutes of Health prepared a corrective action plan that includes procedures to improve the timeliness of grantee reports and actions to improve instructions to grantees regarding repayment of disallowed funds found in single audits. It plans to complete these actions by December 2014. Education’s Internal Controls Evaluation Group (Evaluation Group) in the Office of the Chief Financial Officer tested related controls for discretionary and formula grants across several program offices, including the Office of Elementary and Secondary Education and Office of Special Education and Rehabilitative Services, and identified deficiencies. Some controls were found to be deficient primarily because of a lack of documentation. The deficiencies resulted in two findings in various offices, including the Office of Elementary and Secondary Education. According to the Evaluation Group, the Office of Elementary and Secondary Education implemented corrective actions, which included ensuring that grantees are making progress on their noncompliance issues before being awarded additional grant funds and providing technical assistance to grantees to help them successfully carry out their projects. DOT’s Federal Transit Administration identified a deficiency related to tracking single audit findings. The findings were not being entered into the automated system used to manage oversight program activities and were being tracked offline. The Federal Transit Administration implemented its corrective action plan by issuing a memo to regional offices reiterating the requirements to enter single audit findings in the system in a timely manner and holding routine conference calls and training sessions with regional offices about the issue. Also, according to the Federal Transit Administration, it updated the standard operating procedures for single audit findings, which were in the agency’s review process as of April 2014. 
USDA’s Food and Nutrition Service and its National Institute of Food and Agriculture tested controls for the grants management process and identified deficiencies related to grant accruals, completion of site visit reports, and review of single audits. For example, the National Institute of Food and Agriculture identified a deficiency in the control related to timely review and resolution of single audits. Its corrective action plan included drafting standard operating procedures; developing a single audit tracking spreadsheet to document open reviews, findings, and timelines; and obtaining contractor assistance to help clear the extensive backlog of single audit reviews. As of February 2014, according to the National Institute of Food and Agriculture, the backlog was being addressed. HUD’s Office of Community Planning and Development assessed controls for the Neighborhood Stabilization Program and identified a deficiency related to the timeliness of grantee reports used to monitor grantees. In response to the finding, a corrective action plan was created to address the untimely filing of these reports. According to HUD, the Office of Community Planning and Development solicited and received recommendations from field offices for improvements to the process. The field office feedback is being incorporated into guidance, and the Office of Community Planning and Development plans to communicate the updated guidance to field offices no later than June 30, 2014. OMB Circular No. A-123 requires agencies to report annually on management’s judgment regarding the adequacy and effectiveness of internal control in assurance statements signed by management. All five agencies included management assurance statements in their annual agency financial reports, as required. Based on their fiscal year 2012 OMB Circular No. A-123 assessment processes, which included testing of key program and process controls, results of financial statement audits, and other information, all five agencies gave a qualified statement of assurance for fiscal year 2012. Three of the five agencies (HHS, DOT, and USDA) qualified their assurance statements because of weaknesses in grants processes or grants programs. Specifically, HHS and USDA reported material weaknesses because of noncompliance with IPERA for some large grant programs, which is discussed later in this report. DOT reported a material weakness involving unliquidated obligations, including in grant programs. These weaknesses were also identified by financial statement auditors, as discussed later in this report. HUD and Education qualified their fiscal year 2012 assurance for reasons not directly related to grants. HUD qualified its assurance because of weaknesses in its human capital operations, and Education qualified its assurance because of weaknesses in its servicing systems for student loan programs. In their fiscal year 2013 management assurance statements, HHS and USDA continued to qualify their assurance regarding internal controls because of noncompliance with IPERA. For fiscal year 2013, Education management gave an unqualified statement of assurance (no material weaknesses reported). DOT took steps to correct the problems with its unliquidated obligations and no longer reported the issue as a material weakness in fiscal year 2013. However, HUD qualified its fiscal year 2013 assurance because of, among other reasons, a material weakness related to its accounting for formula grants. In addition to the issues identified through the OMB Circular No. 
A-123 process, other audits and reviews have reported internal control issues related to the grants management process and grant programs. We examined the agencies' financial statement audit reports and other reports issued by GAO and the agencies' OIGs and found that internal control deficiencies related to grants have been reported, including weaknesses in controls for monitoring and oversight of grantees, unliquidated grant obligations, and certain accounting methods for grant programs. Further, some of the agencies identified and reported on improper payments in grants programs, and OIGs for some of the agencies reported issues related to agencies' improper payments information. OIGs have also reported management challenges related to grants.

The primary purpose of a financial statement audit is to provide an opinion about whether an entity's financial statements are presented fairly in all material respects in conformity with an applicable financial reporting framework. Reporting on financial statement audits also includes reports on internal control over financial reporting and on compliance with provisions of laws, regulations, contracts, and grant agreements that have a material effect on the financial statements. Financial statement auditors for three of the agencies reported deficiencies in grants monitoring controls and grants accounting methods. Specifically, for fiscal years 2012 and 2013, financial statement auditors for DOT, USDA, and HUD reported significant deficiencies, some of which were considered to be material weaknesses, in the agencies' internal controls associated with grant programs. Reported grants-related deficiencies primarily related to accounting for grant programs and monitoring procedures over grant activities.

A common issue at many agencies involves deficiencies in controls over unliquidated grant obligations. Financial statement auditors at DOT, USDA, and HUD reported such deficiencies, as discussed below, and we have previously reported on the issue. In April 2012, we reported that millions of dollars of grant obligations remain in expired grant accounts—accounts that were more than 3 months past the grant end date and had no activity for 9 months or more. Agency regulations typically impose closeout procedures upon both the awarding agency and the grantee. Generally, within 90 days after the completion of the grant award, grantees must submit all financial, performance, and other reports as required by the terms and conditions of the award. Also within this 90-day period, grantees generally are to liquidate all obligations incurred under the award. These closeout procedures make funds less susceptible to fraud, waste, and mismanagement; reduce the potential costs to agencies for fees paid to maintain grant accounts; and may enable agencies to redirect resources to other projects.

DOT's financial statement auditors reported deficiencies related to unliquidated obligations in grants programs for both fiscal years 2012 and 2013 and controls over grant accruals in fiscal year 2012. In fiscal year 2012, the auditors reported a material weakness related to controls over the review and monitoring of grant and nongrant unliquidated obligations, including those at the Federal Highway Administration and the Federal Transit Administration. Specifically, the Federal Highway Administration and the Federal Transit Administration did not timely identify and deobligate unused obligation balances.
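Timely identification of such balances is essentially a date-based screen. The sketch below illustrates the expired-account criteria from our April 2012 report, more than 3 months past the grant end date with no activity for 9 months or more; the account data are hypothetical, and months are approximated as 30-day periods for simplicity.

```python
from datetime import date, timedelta

def is_expired_grant_account(grant_end, last_activity, as_of):
    """Flag an expired grant account: more than 3 months past the grant end date
    and no activity for 9 months or more (months approximated as 30 days)."""
    past_end = as_of - grant_end > timedelta(days=3 * 30)
    inactive = as_of - last_activity >= timedelta(days=9 * 30)
    return past_end and inactive

# Hypothetical account: grant ended June 2013, last drawdown in May 2013.
print(is_expired_grant_account(date(2013, 6, 30), date(2013, 5, 15), date(2014, 4, 1)))  # True
```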
DOT subsequently performed the necessary deobligations to correct its financial data. Further, to correct the underlying cause, the Federal Highway Administration and the Federal Transit Administration revised their procedures for reviewing and addressing unliquidated obligations in February 2013 and May 2013, respectively. In fiscal year 2013, while the auditors continued to identify issues with unused obligations, they no longer considered the issue a material weakness and reduced it to a significant deficiency. In fiscal year 2012, the auditors reported a significant deficiency related to controls over grant accruals at some program offices, including the Federal Highway Administration. At fiscal year-end, DOT calculated and recorded an estimated liability for amounts owed to its grantees for costs incurred under grant agreements but not yet billed to or reimbursed by DOT. The auditors reported that the Federal Highway Administration did not have adequate controls in place to ensure that all significant grantees and grant programs were properly included in DOT’s grant accrual estimate methodology, and as a result, the grant accruals were potentially understated. This issue was no longer reported as a significant deficiency in fiscal year 2013. For fiscal years 2012 and 2013, USDA financial statement auditors reported a significant deficiency involving unliquidated obligations. They reported that ineffective monitoring and reviewing resulted in invalid obligations remaining open, thus restricting the availability of funding authority and increasing the risk of misstating obligations as of year-end. In the fiscal year 2012 audit report, the auditors noted that substantial improvements had been made to effectively monitor and review unliquidated obligations since fiscal year 2011, when the auditors reported this issue as a material weakness. The auditors made no new related recommendations in the fiscal year 2012 report and downgraded the material weakness to a significant deficiency. In the fiscal year 2013 audit report, the auditors again reported the issue as a significant deficiency and recommended that the department emphasize the importance of complying with USDA’s requirements related to unliquidated obligations. The auditors also noted that USDA management reported that the Forest Service was not in compliance with certain federal requirements related to managing grants programs in fiscal years 2012 and 2013. HUD’s financial statement auditors reported significant deficiencies related to controls over unliquidated obligations and controls for monitoring some grant programs for fiscal years 2012 and 2013. In addition, the auditors reported a material weakness caused by HUD’s noncompliance with the Federal Financial Management Improvement Act of 1996 (FFMIA), including noncompliance associated with the Office of Community Planning and Development’s information systems for fiscal years 2012 and 2013. During the financial statement audits for fiscal years 2012 and 2013, HUD’s financial statement auditors reported a significant deficiency related to unliquidated obligations. The auditors found inadequate procedures related to timely reviews and recapture of unexpended obligations for several grant programs, including those within the Office of Community Planning and Development, which led to misstatements in HUD’s obligation balances. 
In the fiscal year 2013 audit report, the auditors reported that review and closeout policies had been drafted but not finalized or implemented because of competing priorities. The auditors recommended that the Office of Community Planning and Development review and deobligate invalid obligations. For fiscal year 2012, the auditors reported a significant deficiency related to the Office of Community Planning and Development’s controls for monitoring grantees, including that the system used to communicate the results and status of on-site monitoring was not effective because field offices did not always follow the monitoring handbook and did not always update the information system to reflect current status of monitoring reviews. The auditors repeated recommendations from the prior year to ensure that the monitoring handbook is followed, the information in the grants management system is accurate, and findings from grantee reviews are closed and funds are collected. By fiscal year 2013, most of the recommendations had been implemented, and the auditors no longer reported this issue as a significant deficiency for fiscal year 2013. The auditors reported material weaknesses for fiscal years 2012 and 2013 because HUD’s financial management systems did not comply with FFMIA, including noncompliance associated with the Office of Community Planning and Development’s information systems. The material weakness was due, in part, to the Office of Community Planning and Development’s method of accounting and the system used for disbursing obligations, which did not comply with the requirements of FFMIA for maintaining financial management systems that substantially comply with federal financial management systems requirements and applicable federal accounting standards. The auditors reported that the Office of Community Planning and Development was taking steps to address the issues. Because of many of these internal control issues and ones identified at other federal agencies, we reported a significant deficiency in internal controls related to grants management in our audit reports on the U.S. government’s consolidated financial statements for fiscal years 2012 and 2013. Deficiencies in grants management internal controls could adversely affect the federal government’s ability to ensure that grant funds are being properly reported and used in accordance with applicable program laws and regulations. In addition to reporting on deficiencies in internal controls, financial statement auditors also report on instances of noncompliance with certain laws and regulations. Auditors for HHS and USDA reported instances of noncompliance with IPERA for some grant programs for fiscal years 2012 and 2013, which are discussed later in this report. We and agency OIGs have reviewed federal grants programs and reported findings and recommendations related to monitoring and other aspects of grant programs at many federal agencies, including the five in our review. For example: In December 2012, we reported on HHS’s Administration for Children and Families Temporary Assistance for Needy Families program. We found that the Temporary Assistance for Needy Families program’s accountability framework provided incomplete information on how states’ noncash services were contributing to the program’s purposes. States are required to submit several reports to HHS.
Taken together, these reports serve as the accountability framework in place to help HHS and Congress ensure that states use Temporary Assistance for Needy Families program funds for proper purposes and identify needed program improvements. We recommended that HHS develop a detailed plan with timelines to revise reporting categories for program expenditures. HHS formed an internal workgroup to review and revise a Temporary Assistance for Needy Families program reporting form for collecting more-detailed expenditure data, and published two Federal Register notices in September 2013 and January 2014 regarding proposed revisions to the financial reporting form. HHS plans to issue the form effective for fiscal year 2015. In June 2011, we reported on education reform grants to states required by the American Recovery and Reinvestment Act of 2009 (Recovery Act). We found that Education had provided extensive support to grantee states and had begun monitoring states’ progress in meeting program goals. Education also developed ways for grantees to share information, such as hosting meetings on specific initiatives. Some officials from grantee states said they would find this information useful, but they were generally unaware of these resources or were unable to access them. We recommended that Education facilitate information sharing among grantees. Among other things, Education established communities of practice to address multiple topics. Education oversees the activities of each community by managing a work plan that includes the target dates for each activity. In addition, Education communicates these activities to members of each community of practice via emails and calls as well as a monthly update to all grantees with upcoming dates and events. In December 2011, HHS’s OIG reported on a science awards program at the National Institutes of Health. The program is administered through cooperative agreements, which are similar to grants but require more involvement by the awarding agency. The OIG reported that it found documentation and reporting issues, which the National Institutes of Health is taking steps to correct. Specifically, the Clinical and Translational Science Awards Program did not properly document awardees’ progress under their cooperative agreements. The OIG recommended that the National Institutes of Health ensure that program staff document their monitoring of awardee progress; ensure timely submission of required reports; maintain official files in accordance with federal policy; and, as required for cooperative agreements, provide substantial involvement to Clinical and Translational Science Awards Program awardees. The National Institutes of Health concurred with the recommendations and stated that it had taken or planned to take steps to implement them. In August 2012, DOT’s OIG reported on weaknesses in grants oversight at the Federal Transit Administration and made several related recommendations. The Federal Transit Administration lacked adequate guidance for ensuring that findings from grantee oversight reviews were consistently identified and adequately tracked to ensure grantees were complying with laws and regulations and implementing corrective actions.
In response, the Federal Transit Administration stated that it was (1) revising its oversight and review processes to, among other things, identify grant recipients needing technical assistance and (2) developing standard operating procedures to provide more consistency across regions and implementing new performance measures. The OIG reported that these actions, both those taken and planned, were responsive to the recommendations. Department of Transportation Office of Inspector General, Improvements Needed in FTA’s Grant Oversight Program, MH-2012-168 (Washington, D.C.: Aug. 2, 2012). In February 2012, USDA’s OIG performed work related to the Recovery Act and reported that it found that the Food and Nutrition Service did not create adequate, proactive controls to ensure that states awarded grants for schools to purchase and renovate food service equipment based on Recovery Act criteria, and did not ensure timely reporting of Recovery Act spending data on Recovery.gov as required by OMB guidance on using funds pursuant to the Recovery Act. The OIG recommended that the Food and Nutrition Service continue to update and implement adequate, proactive controls for its standard competitive grant award process. The Food and Nutrition Service agreed that continuous improvement of controls over the competitive award process was warranted and agreed with the OIG’s recommendation to update controls. According to USDA’s Office of the Chief Financial Officer, the Food and Nutrition Service took corrective action and the recommendation was closed in February 2013. In July 2013, HUD’s OIG reported that HUD’s guidance for ensuring grantee compliance with the Community Development Block Grant program’s timeliness spending requirement was not always implemented effectively, and documentation of HUD’s rationale for not sanctioning noncompliant grantees was inadequate. HUD is to annually determine whether Community Development Block Grant grantees—states, cities, and counties—are carrying out grant activities in a timely manner, which includes timely drawdown of grant funds. HUD is to generate a monthly report to help field offices monitor grantees’ compliance with the timeliness spending requirement. The OIG recommended, among other things, that HUD (1) strengthen controls over Community Development Block Grant procedures related to the monthly timeliness report and notification to noncompliant grantees and (2) establish documentation requirements and procedures for grantees that do not comply with the timeliness spending requirement. HUD generally concurred with these recommendations and stated that it would enhance current procedures to address the recommendations. In both fiscal years 2012 and 2013, four of the largest grant-making agencies reported almost $5 billion in estimated improper payments in grants programs. Agencies are required by IPIA, as amended by IPERA and implemented by OMB guidance, to periodically review and assess all programs and activities to identify those susceptible to significant improper payments, estimate the annual amount of improper payments, and report these estimates along with actions taken to reduce improper payments when the estimates exceed both $10 million annually and 2.5 percent of program or activity payments (or 1.5 percent of program or activity payments starting in fiscal year 2014) or when the estimates exceed $100 million. 
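The reporting threshold just described is essentially an arithmetic test. The following sketch is illustrative only and is not part of the report; the program figures are hypothetical, and the error rate used here is simply the ratio of estimated improper payments to total program outlays.

```python
def meets_reporting_threshold(estimated_improper: float, outlays: float,
                              rate_threshold: float = 0.025) -> bool:
    """Check the IPIA/IPERA reporting threshold for one program.

    An estimate must be reported when it exceeds both $10 million and the
    rate threshold (2.5 percent of program payments, or 1.5 percent
    starting in fiscal year 2014), or when it exceeds $100 million.
    """
    error_rate = estimated_improper / outlays if outlays else 0.0
    dual_test = estimated_improper > 10_000_000 and error_rate > rate_threshold
    absolute_test = estimated_improper > 100_000_000
    return dual_test or absolute_test


# Hypothetical program: $60 million in estimated improper payments on
# $2 billion in outlays, a 3.0 percent error rate.
print(meets_reporting_threshold(60_000_000, 2_000_000_000))  # True
```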
As shown in table 4, four of the agencies reported improper payment estimates and error rates (i.e., amount of estimated improper payments divided by total program outlays) in grant programs administered by the program offices in our review for fiscal years 2012 and 2013. IPERA requires, among other things, OIGs to determine whether their agencies are in compliance with the criteria listed in IPERA and annually report the results to the heads of their agencies, the Comptroller General, and certain congressional committees. Many OIGs do so by reviewing their agencies’ improper payment reporting in the annual agency financial report and accompanying materials. For fiscal years 2012 and 2013, the OIGs reported the following for their respective agencies. The HHS OIG reported that while HHS met many of IPERA’s compliance requirements, it did not fully comply with IPERA for fiscal years 2012 and 2013 for some of its grant programs. Specifically, the OIG reported that HHS did not comply with IPERA for the Children’s Health Insurance Program for fiscal year 2012 because it did not publish a corrective action plan for the program’s improper payments. HHS did publish a corrective action plan for this program for fiscal year 2013. For the Foster Care program, the OIG reported that HHS did not comply for fiscal year 2012 because the improper payment rate of 6.2 percent exceeded HHS’s target rate of 4.5 percent. For fiscal year 2013, HHS’s Foster Care improper payments rate of 5.3 percent met the target rate of 6 percent for that year. In the case of the Temporary Assistance for Needy Families program, the OIG reported that HHS did not report an improper payments estimate for either fiscal year 2012 or fiscal year 2013. According to HHS, statutory limitations related to obtaining state data prohibited HHS from developing an estimate. However, HHS has reported that it is taking steps and has planned actions to help prevent and reduce improper payments, including providing technical assistance to help strengthen state program integrity efforts, revising the Temporary Assistance for Needy Families financial reporting form so that states can provide more accurate information about how they are using program funds, and sharing with states information and best practices for reducing improper payments. The USDA OIG reported that USDA did not meet IPERA’s compliance requirements for fiscal years 2012 and 2013 for some of its grants programs, including the School Lunch and School Breakfast Programs and the Child and Adult Care Food Program. The OIG reported that the School Lunch and School Breakfast Programs did not comply because they reported improper payment rates that exceeded 10 percent for both fiscal years 2012 and 2013. According to the OIG, the Food and Nutrition Service’s administration of these programs was highly decentralized and involved many governmental and nongovernmental organizations that provide benefits at approximately 100,000 locations. The OIG reported that Food and Nutrition Service officials stated that they were aware of the significant improper payment rate in these two programs, and they hired a contractor to conduct a study to help develop initiatives and practices to address this problem. The OIG reported that the results of the study were pending at the time of its review for fiscal year 2013 and noted that Food and Nutrition Service officials believed it would take time to achieve an error rate less than 10 percent.
For the Child and Adult Care Food Program, the OIG reported that USDA reported only a partial estimate of improper payments for both fiscal years 2012 and 2013 because the Food and Nutrition Service did not have a cost-effective method for estimating improper payments for one of the program’s two components—Family Day Care Homes Meal Claims. Food and Nutrition Service officials continued to report difficulties in determining a gross estimate for the Child and Adult Care Food Program, which included over 190,000 participating day care homes and centers with varied eligibility requirements for each of the program’s two components. According to the OIG’s report on its fiscal year 2013 review, the Food and Nutrition Service has conducted feasibility studies since 2006 to develop a reliable method for estimating improper payments, but the studies have not yet yielded a reliable method. The Food and Nutrition Service told us that it continues to assess methods of estimating erroneous payments and that a new study will be conducted in 2014 to explore an alternative method of measuring the rate of erroneous payments. For both fiscal years 2012 and 2013, Education’s OIG reported that Education met IPERA’s compliance requirements; however, improvements were needed in the estimation methodology for both the Pell Grant and Direct Loan Programs. The OIG recommended that Education (1) continue working with OMB to obtain approval for an estimation methodology that addresses current limitations with some data used and (2) where estimates are based on the results of program reviews, include all program review results for a sufficient period of time in the universe of potential improper payments. Education agreed with the OIG’s recommendations and noted that it had modified its fiscal year 2014 estimation methodologies. For fiscal years 2012 and 2013, DOT’s OIG reported that although DOT met IPERA’s compliance requirements, some of the information reported in its agency financial reports was inaccurate. The OIG recommended, among other things, that DOT implement procedures to ensure that all elements required for IPERA reporting are accurate and supported by documentation. The OIG also reported that while DOT’s programs met IPERA’s required improper payment rate of less than 10 percent for both fiscal years 2012 and 2013, the Federal Transit Administration did not meet its target error rates for those years. In its 2012 and 2013 agency financial reports, DOT reported error rates of 0.44 percent and 0.73 percent, respectively, for its Formula Grants Program. These rates did not meet the target error rates reported in DOT’s fiscal year 2012 and 2013 agency financial reports of 0.25 percent and 0.50 percent, respectively. The Reports Consolidation Act of 2000 requires each OIG to prepare a statement that summarizes what the OIG considers to be the most serious management and performance challenges facing its agency. Agencies are to include the OIG’s summary of management challenges in their annual agency financial report or performance and accountability report. HHS’s OIG reported grants management and identifying and reducing improper payments as top grants-related management challenges for fiscal year 2012. HHS is the largest grant-making agency in the federal government, and oversight and management of both new and continuing grant programs are crucial to HHS’s mission.
The OIG has found internal control deficiencies, problems with financial stability, inadequate organizational structures, inadequate procurement and property management, and inadequate personnel policies and procedures among grantees. In addition, the OIG reported that HHS faces challenges complying with IPERA for some of its programs, including the Temporary Assistance for Needy Families program, as discussed above. For fiscal year 2013, the OIG reported a similar challenge for grants management—protecting HHS grants and contract funds from fraud, waste, and abuse. The improper payments- related challenge reported by the OIG for fiscal year 2013 was focused on Medicare programs and not on grants programs. Education’s OIG reported management challenges for fiscal years 2012 and 2013, including oversight and monitoring of the department’s programs and grantees and improper payments. Regarding oversight and monitoring, the OIG reported that the number of different entities and programs requiring monitoring and oversight, the amount of funding that flows through the department, and the impact that ineffective monitoring could have on stakeholders make monitoring and oversight a challenge. The OIG also reported that Education faces challenges in its efforts to successfully prevent, identify, and recapture improper payments in some programs, including the Pell Grant and Title I Programs. For fiscal year 2013, while the OIG noted progress was being made in some areas, it reported management challenges similar to those for fiscal year 2012. For fiscal year 2012, DOT’s OIG reported a grants-related management challenge: ensuring effective oversight of Recovery Act projects and applying related lessons learned to improve DOT’s infrastructure programs. For fiscal year 2013, the OIG reported challenges in strengthening existing highway and transit project oversight mechanisms and improving financial management over grants to better use funds, create jobs, and improve infrastructure. For fiscal years 2012 and 2013, USDA’s OIG reported a management challenge: identifying, reporting, and reducing improper payments, including those in some grant programs. The OIG reported, among other things, that USDA continues to struggle to meet IPERA’s compliance requirements for reporting and preventive measures. Also, the OIG noted that related quarterly reports did not always provide accurate, complete, and timely information. Among other issues, HUD’s OIG reported oversight and monitoring of some of its grant programs as a management challenge in fiscal years 2012 and 2013. The challenges included oversight of Recovery Act funds and monitoring grants for housing assistance programs and for disaster recovery. For example, the OIG reported that HUD faces challenges in monitoring Community Development Block Grant Disaster Recovery Assistance funds because of, among other things, limited resources for performing oversight, the broad nature of HUD projects, a lack of understanding of disaster recovery grants by the recipients, and the length of time needed to complete projects. In fiscal year 2013, the federal government obligated over $555 billion in grants for a wide array of activities. Effective oversight of internal controls is important for providing reasonable assurance to federal managers and taxpayers that grants are awarded properly, recipients are eligible, and federal grant funds are used as intended and in accordance with applicable laws and regulations. 
Our review found that the five largest grant-making agencies’ internal control assessment processes were consistent with the requirements in OMB Circular No. A-123 and its Appendix A. The findings from these assessment processes and from other reviews identified material weaknesses and significant deficiencies in grants management. OMB Circular No. A-123 notes that agency managers and staff should be encouraged to identify control deficiencies, as this reflects positively on the agency’s commitment to recognizing and addressing management problems. Further, agencies are estimating that billions of dollars in improper payments occur each year in the grants programs. Financial statement auditors, OIGs, and we continue to focus on grants internal controls through audits and reviews, and the agencies continue to use the results of these reviews and their own assessments to develop corrective actions and oversee internal controls of federal grants to ensure controls are in place and operating effectively. We provided a draft of this report to the Secretaries of Health and Human Services, Education, Transportation, Agriculture, and Housing and Urban Development. In general, all five agencies concurred with the information in the report. Education, DOT, USDA, and HUD also provided technical comments that were incorporated as appropriate. In addition, Education provided written comments, which are reprinted in appendix II. We also provided a draft of the report to the Director of OMB, and OMB responded that it had no comments. We are sending copies of this report to the Secretaries of Health and Human Services, Education, Transportation, Agriculture, and Housing and Urban Development; the Director of the Office of Management and Budget; appropriate congressional committees; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. Our objectives were to (1) examine whether the five largest grant-making agencies’ internal control oversight processes for their grant programs were consistent with Office of Management and Budget (OMB) requirements and (2) describe internal control issues that have been reported related to the grants management process and key grant programs. To select the agencies to include in our review and to obtain adequate coverage of federal grant dollars for fiscal year 2012—the most recently completed fiscal year at the time we started our review—we reviewed and summarized grant obligations data reported by agencies through USAspending.gov (as of November 19, 2012). We decided to include the five agencies with the highest amounts of grants obligations, which accounted for almost 75 percent of total federal grant obligations for fiscal year 2012, excluding Medicaid, as discussed further below. The Departments of Health and Human Services (HHS), Education (Education), Transportation (DOT), Agriculture (USDA), and Housing and Urban Development (HUD) were the top five federal grant-making agencies based on grants obligations for fiscal year 2012.
We also reviewed USAspending.gov data for fiscal year 2013 (as of January 27, 2014) and confirmed that the five largest grant-making agencies, based on obligations, in fiscal year 2012 remained the same in fiscal year 2013. To assess the reliability of the USAspending.gov obligations data, we (1) reviewed recent GAO work that used the data and (2) compared results of ranking agencies by obligation amounts to the results of ranking agencies based on outlay amounts reported by OMB, as discussed below. In our recent work, we had assessed the reliability of USAspending.gov obligations data for a report we issued in September 2012 by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data, and we determined the data were sufficiently reliable for the purposes of that report. We also determined the data were sufficiently reliable for the purpose of selecting agencies and program offices to review for this report. OMB collects data from federal agencies each year to prepare the President’s budget. OMB uses these data for a number of purposes related to the budget, including producing the Historical Tables, which are publicly available on OMB’s website. One series of Historical Tables contains information on federal outlays for grants to state and local governments. According to OMB, the purpose of this series of Historical Tables is to identify federal government outlays that constitute income to state and local governments to help finance their services. Because these data undergo rigorous review by OMB, we considered them sufficiently reliable for the purposes of this report. In our review of the USAspending.gov data, we noted that grant obligations for Medicaid, one of HHS’s grant programs, amounted to over 43 percent of total obligations incurred by the federal government for grants in fiscal year 2012. We elected to exclude the Medicaid program from our review because it has been the subject of numerous reviews, including many by GAO and HHS’s OIG, and by doing so, we were able to include HHS grant programs that otherwise would not have been covered in our review and still comply with our selection criteria to cover over 80 percent of each agency’s grant obligations. (See table 5 and the related discussion.) Even after excluding Medicaid funds for fiscal year 2012, HHS was still the largest grant-making agency as measured by grant obligations. During our initial planning work, we learned that agencies generally include cooperative agreements as part of their grants programs. We also learned that although USAspending.gov data for Education categorize Pell Grants as “direct payments for a specified use” and do not identify them as “grants,” Education considers Pell Grants to be grants, and the Pell Grant Program is Education’s largest grant program. Therefore, in addition to USAspending.gov’s obligations for grants, we included in the scope of our review USAspending.gov amounts for cooperative agreements for the five agencies and Pell Grants. In addition, while HUD considers its Public Housing Operating Fund a subsidy program, and the Catalog of Federal Domestic Assistance categorizes the program as “direct payments for specified use,” we included the program in our review based on the USAspending.gov categorization.
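As a rough illustration of the selection approach described above, the following sketch ranks agencies by grant obligations, excludes Medicaid from the HHS total, and computes the share of total obligations covered by the top five. It is not drawn from the report's data; the agency names are real, but the dollar amounts are hypothetical placeholders rather than USAspending.gov figures.

```python
# Hypothetical grant obligations, in billions of dollars, with Medicaid
# already excluded from the HHS total.
agency_obligations = {
    "HHS": 120.0,
    "Education": 60.0,
    "DOT": 55.0,
    "USDA": 45.0,
    "HUD": 40.0,
    "Interior": 35.0,
    "Justice": 30.0,
    "Labor": 25.0,
    "Energy": 17.0,
}

# Rank agencies by obligations and take the five largest.
ranked = sorted(agency_obligations.items(), key=lambda kv: kv[1], reverse=True)
top_five = ranked[:5]

# Compute the share of total grant obligations the top five cover.
total = sum(agency_obligations.values())
coverage = sum(amount for _, amount in top_five) / total

print([name for name, _ in top_five])
print(f"Share of total grant obligations covered: {coverage:.0%}")
```

With these placeholder values, the five largest agencies cover roughly 75 percent of total obligations, which is the order of magnitude the report describes.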
In addition, during our planning work, we found that federal agencies rely on their program offices to provide assurance statements regarding the effectiveness of their internal controls. The program office assurance statements are then rolled up into the agency-wide management assurance statement, which the agencies are required by OMB to provide in their performance and accountability reports or agency financial reports. Therefore, to obtain the most dollar coverage and cover the greatest subject area during our review, we selected the 2 or 3 largest program offices from each agency in order to cover over 80 percent of grant obligations for each agency, resulting in a total of 12 program offices, as shown in table 5. These 12 program offices together represented almost 75 percent of the total government-wide grants obligations (excluding Medicaid) for fiscal year 2012. If we had not excluded Medicaid, the National Institutes of Health grant programs would not have been selected for inclusion in our review as the Centers for Medicare & Medicaid Services and the Administration for Children and Families together would have accounted for about 88 percent of HHS grant obligations. By excluding Medicaid, we were able to include the National Institutes of Health grant obligations, allowing us to cover additional grant programs. To examine whether the five largest grant-making agencies’ internal control oversight processes for their grants programs were consistent with OMB requirements, we focused on the agencies’ fiscal year 2012 internal control evaluations performed to meet the internal control assessment and reporting requirements of OMB Circular No. A-123 and its Appendix A. We also considered Standards for Internal Control in the Federal Government, specifically, the standard for monitoring. The standards provide the overall framework for establishing and maintaining internal control in federal programs. The monitoring standard states that internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. We obtained and analyzed documentation from the 12 program offices and from each of the five agencies’ Office of the Chief Financial Officer. This documentation included or described relevant policies and procedures, risk assessments, testing methodologies and results, deficiencies found and corrective action plans developed and implemented as part of the fiscal year 2012 OMB Circular No. A-123 internal control assessment process. We used a data collection instrument to collect and summarize information obtained from the agencies. We did not reperform agency testing of internal controls or conduct our own testing of controls. We also interviewed officials in the 12 program offices and officials from each of the five agencies’ Office of the Chief Financial Officer and OIG about the agencies’ assessment processes and oversight of grants- related internal control. To identify internal control issues that have been reported related to the grants management process and key grant programs, we reviewed the agencies’ financial statement audit reports for fiscal years 2012 and 2013 and relevant GAO and OIG reports issued in fiscal years 2012 and 2013 to identify grants-related internal control findings and recommendations. 
We also reviewed improper payment information reported by the five agencies in their agency financial reports for fiscal years 2012 and 2013 to determine whether the agencies identified improper payments in their grants programs. We reviewed the OIGs’ reports on their agencies’ compliance with the criteria listed in the Improper Payments Elimination and Recovery Act of 2010 for fiscal years 2012 and 2013. We also reviewed management challenges reported by the agencies’ OIGs for fiscal years 2012 and 2013. We conducted this performance audit from November 2012 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kimberly McGatlin (Assistant Director), Laura Bednar, Maria C. Belaval, Bruce David, Lauren S. Fassler, Maxine Hattery, Diane Morris, Danietta Williams, and Elizabeth Wood made significant contributions to this report. School-Meals Programs: USDA Has Enhanced Controls, but Additional Verification Could Help Ensure Legitimate Program Access. GAO-14-262. Washington, D.C. May 15, 2014. Recovery Act: Grant Implementation Experiences Offer Lessons for Accountability and Transparency. GAO-14-219. Washington, D.C.: January 24, 2014. Grants Performance: Justice and FEMA Collect Performance Data for Selected Grants, but Action Needed to Validate FEMA Performance Data. GAO-13-552. Washington, D.C.: June 24, 2013. Grants Management: Improved Planning, Coordination, and Communication Needed to Strengthen Reform Efforts. GAO-13-383. Washington, D.C.: May 23, 2013. Grants Management: Oversight of Selected States’ Disbursement of Federal Funds Addresses Timeliness and Administrative Allowances. GAO-13-392. Washington, D.C.: April 16, 2013. WIC Program: Improved Oversight of Income Eligibility Determination Needed. GAO-13-290. Washington, D.C.: February 28, 2013. Temporary Assistance for Needy Families: More Accountability Needed to Reflect Breadth of Block Grant Services. GAO-13-33. Washington, D.C.: December 6, 2012. Grants to State and Local Governments: An Overview of Federal Funding Levels and Selected Challenges. GAO-12-1016. Washington, D.C.: September 25, 2012. Recovery Act: Housing Programs Met Spending Milestones, but Asset Management Information Needs Evaluation. GAO-12-634. Washington, D.C.: June 18, 2012. Highway Infrastructure: Federal-State Partnership Produces Benefits and Poses Oversight Risks. GAO-12-474. Washington, D.C.: April 26, 2012. Grants Management: Action Needed to Improve the Timeliness of Grant Closeouts by Federal Agencies. GAO-12-360. Washington, D.C.: April 16, 2012. Improper Payments: Remaining Challenges and Strategies for Governmentwide Reduction Efforts. GAO-12-573T. Washington, D.C.: March 28, 2012. Foster Care Program: Improved Processes Needed to Estimate Improper Payments and Evaluate Related Corrective Actions. GAO-12-312. Washington, D.C.: March 7, 2012. National Institutes of Health: Employment and Other Impacts Reported by NIH Recovery Act Grantees. GAO-12-32. Washington, D.C.: November 10, 2011. Highway Emergency Relief: Strengthened Oversight of Project Eligibility Decisions Needed. GAO-12-45. Washington, D.C.: November 8, 2011. 
TANF and Child Welfare Programs: Increased Data Sharing Could Improve Access to Benefits and Services. GAO-12-2. Washington, D.C.: October 7, 2011. Disadvantaged Students: School Districts Have Used Title I Funds Primarily to Support Instruction. GAO-11-595. Washington, D.C.: July 15, 2011. Race to the Top: Reform Efforts Are Under Way and Information Sharing Could Be Improved. GAO-11-658. Washington, D.C.: June 30, 2011. Federal Grants: Improvements Needed in Oversight and Accountability Processes. GAO-11-773T. Washington, D.C.: June 23, 2011. Grants.gov: Additional Action Needed to Address Persistent Governance and Funding Challenges. GAO-11-478. Washington, D.C.: May 6, 2011. Medicaid and CHIP: Reports for Monitoring Children’s Health Care Services Need Improvement. GAO-11-293R. Washington, D.C.: April 5, 2011. Department of Education: Improved Oversight and Controls Could Help Education Better Respond to Evolving Priorities. GAO-11-194. Washington, D.C.: February 10, 2011. In fiscal year 2013, the federal government obligated over $555 billion for grants. Effective oversight of internal controls is important for providing reasonable assurance that grants are awarded properly, recipients are eligible, and federal grant funds are used as intended. GAO was asked to review internal control issues over grants. This report (1) examines whether the five largest grant-making agencies' internal control oversight processes for their grant programs were consistent with OMB requirements and (2) describes internal control issues that have been reported related to the grants management process and key grant programs. To achieve these objectives, GAO reviewed the fiscal year 2012 internal control assessment processes for the five largest grant-making agencies' grants programs, as conducted under OMB Circular No. A-123. GAO reviewed the agencies' assessment documentation and reviewed grants internal control findings reported in other reviews of these agencies for fiscal years 2012 and 2013. GAO did not include Medicaid—the largest federal grant program—which has been well covered in other GAO and HHS OIG reviews. GAO is not making any recommendations but continues to monitor grants management as part of its work on key issues. In general, the five agencies concurred with the information in the report. Some agencies also provided technical comments that were incorporated, as appropriate. For fiscal year 2012, the processes used by the five largest grant-making agencies to conduct their internal control assessments were consistent with the requirements of Office of Management and Budget (OMB) Circular No. A-123, which requires that agencies identify significant areas within their operations in which to implement key controls, continuously monitor and test those controls, and report annually on management's judgment regarding the adequacy and effectiveness of internal control. The five largest grant-making agencies by amount of grant obligations are the Departments of Health and Human Services (HHS), Education, Transportation (DOT), Agriculture (USDA), and Housing and Urban Development (HUD). The agencies identified areas of risk, including grants programs and grants management processes, in which to implement key controls and then monitored and tested those controls. The agencies identified deficiencies through control tests, prepared and implemented corrective action plans to address the deficiencies identified, and reported on their internal control through annual management assurance statements.
HHS, DOT, and USDA qualified their internal control management assurance statements for fiscal year 2012, in part because of material weaknesses affecting their grant programs. For fiscal year 2013, HHS, USDA, and HUD gave qualified statements of assurance in part because of material weaknesses in their grant programs. In addition to issues identified through the OMB Circular No. A-123 process, other audits and reviews have reported internal control issues related to the grants management process and grant programs. For example: For fiscal years 2012 and 2013, DOT's financial statement auditors reported that DOT did not timely identify and deobligate unused grant obligations. DOT took actions to address the recommendations, resulting in the auditors reducing the issue from a material weakness in fiscal year 2012 to a significant deficiency in fiscal year 2013. In July 2013, HUD's Office of Inspector General (OIG) reported that guidance for ensuring whether grantees are carrying out grant activities in a timely manner in compliance with requirements of the Community Development Block Grant program was not always implemented effectively. HUD generally concurred with the recommendations and stated its intent to address them. In both fiscal years 2012 and 2013, HHS reported over $1 billion in estimated improper payments in grant programs. HHS's OIG determined that HHS did not comply with all requirements of the Improper Payments Elimination and Recovery Act of 2010 for certain grant programs. HHS reported that it has steps planned and under way to help prevent and reduce improper payments. OMB Circular No. A-123 notes that agency managers and staff should be encouraged to identify control deficiencies, as this reflects positively on the agency's commitment to recognizing and addressing management problems. Financial statement auditors, OIGs, and GAO continue to focus on grants internal controls through their audits and reviews, and the agencies continue to use the results of these reviews and their own assessments to develop corrective actions and oversee internal controls of federal grants to ensure that necessary controls are in place and operating effectively.
AIS technology, which has been under development worldwide since the early 1990s to improve navigation safety, helps prevent collisions by enabling ships to electronically “see” and track the movements of similarly equipped ships and to receive pertinent navigational information from shore. Like other wireless technologies, AIS uses a portion of the radio frequency spectrum to carry information. In the United States, specific frequencies within the radio spectrum are allocated primarily by two agencies: FCC—an independent agency that regulates spectrum use for nonfederal users, including commercial, private, and state and local government users—and the National Telecommunications and Information Administration (NTIA), an agency within the Department of Commerce that regulates spectrum for federal government users. These agencies (1) decide how various frequencies are used and (2) assign the frequencies to specific users. FCC makes these assignments by issuing licenses to nongovernmental parties; NTIA does so by assigning specific frequencies to federal agencies that have radio communication needs. AIS is designed to improve upon information available through vessel-monitoring systems already in use. Existing VTS systems apply radar, closed-circuit television, radios, and other devices to monitor and manage vessel traffic from a central onshore location, much as an air traffic control tower does (see fig. 1). An AIS unit consists of a global navigation satellite system receiver; computer hardware and software; three radio receivers; and one radio transmitter-receiver, or transceiver. The unit gathers vessel information—including the vessel’s name, identification number, dimensions, position, course and speed, destination, and cargo—from shipboard instruments or from manual input and transmits it to receiving AIS stations installed on other ships or on shore. Radio frequencies, or channels, carry the information. AIS also requires considerable infrastructure on shore—including antennas and base stations equipped with electric power, transceivers, computers, and displays—to monitor vessel activity and transmit information or instructions back to vessels. In the United States, such infrastructure now exists only in areas where VTS systems operate. The Maritime Transportation Security Act of 2002 (MTSA) and Coast Guard regulations require that certain vessels on U.S. navigable waterways install AIS equipment between January 1, 2003, and December 31, 2004. Coast Guard regulations implementing the law provide that vessels include (1) commercial vessels 65 feet long or more on international voyages, including all tankers regardless of tonnage; (2) passenger vessels of 150 tons or more; and (3) commercial vessels on strictly domestic U.S. voyages in the 10 VTS areas, which encompass approximately 10 percent of the U.S. ports recognized by the Department of Transportation’s Maritime Administration (see fig. 2). Currently excluded from Coast Guard regulations are fishing vessels and passenger vessels certified to carry 150 or fewer passengers. Regardless of itinerary, private vessels of less than 300 gross tons that are not in commercial service, such as pleasure craft, are not required by Coast Guard regulations to carry AIS equipment. Conflict over the frequencies used for transmitting AIS signals in the United States has been developing for several years. In 1998, to promote flexibility in the use of maritime radio frequencies and to encourage development of competitive new services, FCC created and auctioned licenses to the remaining unassigned U.S.
radio frequencies in the very high frequency (VHF) band reserved for maritime public correspondence communications. For approximately $7 million, MariTEL won the bid for these licenses. The announcements for the auction stated that potential bidders should be aware of international agreements and other issues that might affect the ability to use the licenses on the two specific internationally designated AIS frequencies, known as channels 87B and 88B. Issues that could affect the licenses were not explicitly laid out in the announcements, but potential bidders were directed to a prior FCC document and specific federal regulations for assistance in evaluating the degree to which such issues may affect spectrum availability. Different interpretations of issues such as these may have contributed to the conflict that continues to exist between MariTEL and the Coast Guard. This conflict extends to the use of both frequencies. FCC regulations required the winning bidder to negotiate with the Coast Guard for the use of frequencies for AIS but did not specify any particular frequency. In March 2001, in response to FCC’s auction requirements, MariTEL and the Coast Guard signed a memorandum of agreement (MOA) that allowed the use of channel 87B for AIS in U.S. waters. MariTEL terminated the MOA in May 2003, however, after disagreements arose over interpretations of the MOA’s provisions, including technical properties of the frequencies that the Coast Guard could use for AIS. After termination of the MOA, MariTEL asserted that the Coast Guard had no authority to use channel 87B for AIS, but the Coast Guard maintains that an FCC announcement still gives it that authority. With respect to channel 88B, MariTEL asserts, in general, that it obtained through the FCC auction the exclusive rights to channel 88B in certain areas within approximately 75 miles of the U.S.-Canadian border, and it has petitioned FCC for a declaratory ruling to that effect. The Coast Guard, NTIA, and the Department of Transportation disagree and assert, in general, that channel 88B has already been allocated on a primary basis to the federal government. The total cost and time frame for the development of a nationwide AIS remain uncertain. As of June 2004, the Coast Guard’s efforts to install AIS equipment nationwide had followed two tracks: first, installing AIS quickly in the 10 VTS areas and, second, launching a widespread planning effort for the rest of the nation’s navigable waters. Having taken advantage of existing facilities, electronic systems, and plans for AIS development to enhance safety in the 10 VTS areas, the Coast Guard plans to complete AIS implementation in those areas by December 2004. At the same time, the Coast Guard has begun to plan for U.S. waters outside the VTS areas, defining the goals, technical requirements, and waterways and vessels to be covered under a nationwide AIS. The Coast Guard expects planning for the technical requirements to be completed between December 2004 and February 2005. The Coast Guard also estimates that the nationwide system could cost between $62 million and $165 million. According to the Coast Guard, the cost estimate is preliminary, because geographic and other factors are expected to significantly affect the cost of installation at different locations, and the impacts are yet to be determined. The first effort in the Coast Guard’s two-track AIS development has involved installing, testing, and operating AIS equipment in the 10 VTS areas. 
To enable monitoring of vessels carrying AIS, the Coast Guard accelerated onshore AIS installation under way in its navigation safety program. A combination of existing facilities, equipment, plans, and funding has allowed rapid establishment of AIS in the VTS areas. Since much of the AIS infrastructure for conventional safety monitoring (e.g., to avert collisions) is the same for security monitoring (e.g., to avert acts of terrorism), bringing AIS into service involved primarily adapting and modifying existing systems to accommodate their additional security purpose. AIS facilities are completely operational at Berwick Bay, Louisiana; Los Angeles–Long Beach, California; Prince William Sound, Alaska; and St. Marys River, Michigan. AIS is being tested along the lower Mississippi River in Louisiana, and it is partially operational at Houston- Galveston, Texas, and New York, New York. The facilities at Port Arthur, Texas; Puget Sound, Washington; and San Francisco, California, are under construction. The Coast Guard expects AIS installations at the VTS areas to be completed by December 2004. To enhance safety and efficiency at the ports of Los Angeles and Long Beach, the Marine Exchange of Southern California, a nonprofit corporation formed to provide vessel arrival and departure information to the local maritime industry, took the initiative to install and pay for AIS on its own. The total cost to the Coast Guard for the installation of AIS equipment at the other 9 VTS areas comes to approximately $20.5 million. Bringing AIS into service in the 10 VTS areas should improve vessel- monitoring capability at these locations. Before AIS, VTS facilities relied on such means as radar, closed-circuit television, ship-to-shore voice communications via radio, and people with binoculars. Signals and other information from the monitoring equipment went to a central vessel traffic center (VTC), where the information was collated and where staff tracked ships’ movements. With AIS, for a vessel equipped with a properly operating AIS transceiver, VTC staff have access to so-called static information, which rarely changes, such as dimensions, vessel name, and identification number; dynamic information, which changes continuously, such as course and speed; and voyage-specific information such as cargo type, destination, and estimated time of arrival (see fig. 3). This detail allows VTC staff to immediately identify any transmitting ship, particularly if it is on a collision course with another ship or if it is headed toward a hazardous or restricted area. In some VTS areas, AIS also extends monitoring coverage over a wider radius than originally covered by VTS. On the lower Mississippi River, for example, AIS will cover more than 240 miles along the river—from its mouth to Baton Rouge, Louisiana—rather than the 8 miles around New Orleans covered by the original VTS system. In New York, AIS equipment will allow vessels to be monitored farther out to sea than possible with radar monitoring. From installing AIS shore facilities in the VTS areas, the Coast Guard has learned that the two primary drivers of installation cost are port geography and vessel traffic. Specifically, because AIS radio signals transmit in straight lines, installation can be complicated by the amount of water to be covered, as well as by terrain features such as islands, bays, and peninsulas. 
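For readers who find a data sketch helpful, the static, dynamic, and voyage-specific categories described above can be pictured as a simple record. The field names below are illustrative assumptions and do not reproduce the actual AIS message format defined in the international standard.

```python
from dataclasses import dataclass


@dataclass
class StaticInfo:
    """Rarely changing vessel information."""
    name: str
    identification_number: str
    length_m: float
    beam_m: float


@dataclass
class DynamicInfo:
    """Continuously changing information from shipboard instruments."""
    latitude: float
    longitude: float
    course_deg: float
    speed_knots: float


@dataclass
class VoyageInfo:
    """Voyage-specific information, typically entered manually."""
    cargo_type: str
    destination: str
    estimated_arrival: str  # for illustration, an ISO 8601 timestamp


@dataclass
class AISReport:
    """One vessel's report as a vessel traffic center might store it."""
    static: StaticInfo
    dynamic: DynamicInfo
    voyage: VoyageInfo
```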
Beyond port geography and vessel traffic, secondary features at a site also have an impact, including availability of electrical power, previous presence or absence of communications links, availability of antenna towers, and costs to lease or buy land for antenna towers. For example, after completing site surveys of the area, the Coast Guard estimated that installing AIS in Puget Sound—an arm of the Pacific Ocean extending into Washington State that features many bays and islands and is surrounded by mountains—would likely cost $6.6 million. In contrast, the AIS installation at Berwick Bay, Louisiana, one of the first AIS installations completed by the Coast Guard, generally monitors a roughly 5-mile radius around a short stretch of the Atchafalaya River and surrounding waterways; this installation cost approximately $1 million. On the basis of its experience installing AIS in the VTS areas, the Coast Guard estimates that installing AIS equipment nationwide could cost between $62 million and $165 million—a preliminary estimate that one Coast Guard official responsible for reviewing such programs characterizes as “ballpark.” At the same time the Coast Guard is completing installation of AIS equipment in the 10 VTS areas, it is also planning for nationwide AIS installation, in waters where most of the needed infrastructure is not now available. This planning consists of two primary components: The Coast Guard will soon be defining the technical requirements of the system needed to meet both the safety and security missions of AIS, including how elaborate it will be. For example, will the system need to involve satellites to receive AIS signals beyond the range of stations on land, or will an installation that can receive signals only along the shore be adequate? The Coast Guard will also investigate whether AIS can share shore infrastructure, such as antenna towers, with systems in place or under development, such as its search-and-rescue communications system called Rescue 21. As of June 2004, the Coast Guard estimated it will be able to complete this planning sometime between December 2004 and February 2005. The Coast Guard is also determining the extent of AIS coverage needed in its overall AIS strategy, including a reexamination of which vessels should carry AIS in U.S. waters outside of VTS areas. This process includes selecting which waterways will be covered (e.g., deciding whether relatively small rivers and lakes will be covered); setting priorities for which waterways will be covered first (e.g., deciding whether large ports will receive coverage before open coastline); and identifying which additional vessels will be required to carry and operate AIS equipment (e.g., whether noncommercial pleasure craft will still be outside AIS requirements). The Coast Guard has held public meetings and requested public comment on these issues and expects to complete its review of these comments by July 2004. Even after these planning efforts are completed, the Coast Guard will not be able to install AIS equipment outside VTS areas immediately. The factors that shape the cost of an AIS installation also shape the equipment requirements. For example, the more obstructions, such as mountains or tall buildings, that could block AIS signals, the more antennas will be required. At every location where the Coast Guard decides to install AIS equipment, it will have to evaluate the presence or absence of such design factors.
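Because AIS operates on VHF frequencies that travel essentially by line of sight, a common engineering rule of thumb, which is not taken from this report, relates antenna height to radio range. The sketch below uses the standard radio-horizon approximation for unobstructed terrain, so actual coverage at a site with islands or mountains would differ.

```python
import math


def radio_horizon_km(antenna_height_m: float) -> float:
    """Approximate VHF radio horizon for one antenna, in kilometers.

    Uses the common approximation d = 4.12 * sqrt(h), which assumes
    standard atmospheric refraction and no obstructions.
    """
    return 4.12 * math.sqrt(antenna_height_m)


def ship_to_shore_range_km(shore_antenna_m: float, ship_antenna_m: float) -> float:
    """Approximate maximum range as the sum of the two radio horizons."""
    return radio_horizon_km(shore_antenna_m) + radio_horizon_km(ship_antenna_m)


# Hypothetical example: a 50-meter shore tower and a 10-meter ship antenna
# yield a range of roughly 42 kilometers, about 26 miles.
print(round(ship_to_shore_range_km(50, 10)))
```

Estimates of this kind help explain why sites with larger water areas or obstructed terrain call for more shore antennas.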
Site surveys that detail local terrain and the volume and variety of vessel traffic will have to be carried out before the Coast Guard can determine a location’s precise equipment needs. As of June 2004, the continuing dispute between MariTEL and the Coast Guard over various frequency issues was in the hands of FCC, which expected to respond in summer 2004. At issue are competing views over the use of the internationally designated AIS frequencies. The commission’s response could involve any number of actions or conditions regarding the internationally designated AIS frequencies, especially on access to frequencies needed to carry AIS information. FCC’s specific findings could lead to varied technical, cost, and legal implications for AIS installation and operation, including potential delay. Depending on how FCC responds, and any subsequent actions by the interested parties, one factor that offers an opportunity to lower the federal government’s costs is the demonstrated or expressed willingness of certain local port entities to shoulder the expense and responsibility for AIS installation if they, along with the Coast Guard, can use AIS data for their own purposes. Since 2003, there have been a number of petitions, proposals, and other actions put before FCC on who may and should use channels 87B and 88B and for what purposes. In October 2003, for example, MariTEL petitioned FCC seeking a ruling that would prohibit transmission on channels 87B and 88B by entities other than those authorized by MariTEL. In this petition MariTEL asserts, among other things, that the termination of the memorandum of agreement ended the Coast Guard’s right to use channels for which MariTEL holds licensing rights. The company further contends that transmissions by entities other than those authorized by MariTEL would interfere with its other maritime frequency licenses and prevent its benefiting from the investment it made at the auction. On behalf of the Coast Guard and the Department of Transportation, NTIA also petitioned FCC in October 2003, opposing MariTEL’s petition and proposing instead that FCC allocate channels 87B and 88B exclusively to AIS for government and nongovernment use. The government’s position was that navigation safety and homeland security would be compromised if the United States and the maritime industry did not have unrestricted access to the frequencies designated by the International Telecommunication Union for AIS use worldwide. Then in February 2004, citing a desire to protect its licensed rights and to reach a quick “resolution to the AIS frequency controversy,” MariTEL submitted a proposal to FCC, “to share its licensed rights to channels 87B and 88B for use by ship stations and by the USCG at no cost.” In this proposal, MariTEL generally agreed with NTIA’s proposal to use channels 87B and 88B only for AIS, but unlike NTIA, it sought to limit access to the signals to ships, MariTEL, the Coast Guard, and the St. Lawrence Seaway Development Corporation. In other words, under this proposal, unless authorized by MariTEL, the Coast Guard and the St. Lawrence Seaway Development Corporation would be the only entities allowed to use AIS information received by a shore station. In effect, under this proposal, the transmission and receipt of AIS signals by other entities, such as marine exchanges, port authorities, or state and local government agencies, would require MariTEL’s consent. 
FCC has been gathering public comment from groups representing vessel pilots, port authorities, ship and barge operators, and others on these competing proposals, and a response is expected in summer 2004. The implications of this response for nationwide AIS development will depend on just how the commission resolves the competing proposals. If FCC allocates the internationally designated frequencies exclusively to AIS use but limits access to ships, MariTEL, the Coast Guard, and the St. Lawrence Seaway Development Corporation, other organizations will no longer be able to use the signals and would therefore have no incentive to pay for installing AIS infrastructure. Such loss of incentive would likely mean the loss of federal cost-sharing opportunities, potentially closing off a possible long-term cost-reduction strategy in the development of AIS nationwide. For example, an official of the Merchants Exchange of Portland told us that the exchange would not be willing to pay for AIS facilities unless access to AIS data is unrestricted. In addition, according to an AIS consultant, enforcing a ban that prevents parties other than MariTEL and the federal government from receiving AIS signals at shore stations, as MariTEL has requested, could prove impossible, because an AIS receiver that is only receiving signals cannot be detected by an enforcement authority. For its part, MariTEL maintains that it should be able to protect its investors and to profit from the licenses it won and that AIS can be operated as required by FCC’s preauction rules. The company also maintains that even if FCC grants MariTEL’s proposal for shared access to the internationally designated AIS frequencies, technical issues could still harm the company’s ability to use other frequencies for which it holds licenses. In its February 2004 proposal, MariTEL contends that FCC rules now permit an AIS transmission technology that causes interference with maritime communications on channels adjacent to 87B and 88B. The company’s proposal asserts that such interference impairs non-AIS shore-to-ship communications, with a significant impact on MariTEL’s ability to use its licensed spectrum, including its construction of a wide-area radio system for maritime services. The Coast Guard argues that transmitting AIS signals on frequencies other than those internationally designated could compromise navigation safety and homeland security and complicate nationwide AIS development already under way using channels 87B and 88B. The Coast Guard cites examples such as the following: A ship traveling near or in U.S. waters may have to decide between broadcasting and receiving signals on the international frequencies—to “see” foreign vessels operating under international frequency requirements—and United States–specific frequencies—to “see” domestic vessels operating under U.S. frequency requirements. The inability of vessels to broadcast and monitor the U.S. frequencies and the internationally designated AIS frequencies simultaneously heightens the risk of collisions. Until a fully automated frequency management system has been established nationwide, the use of frequencies other than channels 87B and 88B would require foreign ships to manually change frequencies when approaching U.S. shores. According to the Coast Guard, such so-called manual channel switching is cumbersome and vulnerable to human errors and, if a ship’s crew fails to change to the U.S.
channel when necessary, could leave the ship “invisible” to ships in the same waters broadcasting on the U.S. frequency. Any U.S. channel management plans that become necessary would, the Coast Guard believes, impair existing operations in the border regions with Canada and Mexico, as well as AIS communications with international vessels operating within or near U.S. waters. For example, the St. Lawrence Seaway AIS system, jointly operated by the United States and Canada, is viewed by the Coast Guard as a complement to its nationwide AIS. The Seaway system, however, operates on channels 87B and 88B, and any U.S.-specific frequencies would reduce the efficiency of this international shipping thoroughfare. Transmissions on channels 87B and 88B from vessels operating outside U.S. jurisdiction would interfere with the effective use of channels 87B and 88B within the United States. According to the Coast Guard, such interference would encumber four frequencies in U.S. coastal areas instead of just the two internationally designated frequencies. Finally, any additional actions by the interested parties stemming from the specifics of FCC’s response could slow or otherwise affect nationwide AIS development. An opportunity that may help the Coast Guard speed AIS installation at lower cost to the federal government is potential partnerships between the Coast Guard and local port entities. For projects like AIS whose costs and benefits extend over 3 or more years, the Office of Management and Budget instructs federal agencies, including the Coast Guard, to consider alternative means of achieving program objectives, such as different methods of providing services and different degrees of federal involvement. Similarly, in 1996 a congressional conference committee report directed the Coast Guard to review user fee options and public-private partnerships for its VTS program. In carrying out these directives, the Coast Guard learned of potential partnership opportunities. The initiative for the actual partnerships has come mainly from the local port entities following their interactions with the Coast Guard on navigation safety issues. As a part of the VTS program, the Coast Guard has been performing a series of safety assessments at U.S. ports to help determine if additional VTS areas are warranted. In a number of cases, when the Coast Guard determined that a federal VTS was not warranted, local entities approached the Coast Guard for assistance in setting up their own vessel-monitoring systems. Coast Guard assistance has ranged from full partnerships on vessel traffic management systems, to memorandums of understanding regarding uses of local vessel-monitoring systems, to advice and counsel on possible local efforts. The offers from port entities have come at a number of locations and reflect a realization that vessel monitoring can provide a range of benefits. Entities have explored partnerships with the Coast Guard at ports including Baltimore, Maryland; Charleston, South Carolina; Corpus Christi, Texas; Delaware Bay, Delaware, Pennsylvania, and New Jersey; Hampton Roads, Virginia; Los Angeles–Long Beach, California; Portland, Oregon; San Diego, California; and Tampa, Florida. Given the level of interest, these partnerships offer an alternative to exclusive federal involvement in nationwide AIS development.
Entities at some of the listed locations have used, or want to use, AIS data about incoming vessels to improve port efficiency, for example, by helping schedule tugs or dock workers; to improve safety by mitigating risks uncovered during the Coast Guard’s safety assessments; and to increase their own security by monitoring vessels as they approach the port. Some of these entities have installed AIS or similar systems and have offered to share their information with the Coast Guard. Such work relieves the Coast Guard from having to carry out its own installation of AIS shore stations in certain locations, thus accelerating and facilitating nationwide AIS implementation. As of June 2004, some of the port entities that either used AIS or planned to do so included the following: The Marine Exchange of Southern California provides vessel information at the ports of Los Angeles and Long Beach, California, to support port safety and the efficient movement of commerce. As a part of that support, the marine exchange, together with port pilots, financed and built the VTS system at Los Angeles–Long Beach and purchased and installed AIS equipment for that system. The Marine Exchange and the Coast Guard share information received on the AIS equipment. The Coast Guard estimated that the cost of installation at Los Angeles–Long Beach was comparable to that of its own installation at San Francisco, which it estimates at $2.2 million. The Tampa (Florida) Port Authority currently operates a vessel traffic advisory service. In 1997 the authority installed an earlier version of AIS that did not meet current international or Coast Guard standards but was designed to help the harbor pilots and vessel masters as they navigated in the Tampa Bay channels. The port authority recently requested a grant from the state of Florida to upgrade its AIS equipment to international and Coast Guard standards so as to improve security at the port of Tampa. The port authority has expressed willingness to share AIS information with the Coast Guard when its system becomes operational. The Merchants Exchange of Portland, Oregon, has expressed a desire to build an AIS system around Portland and the Columbia River as a means of supplying information on vessel movements to interested port entities. The goal is again to improve the efficiency of port operations. According to an exchange official, the Merchants Exchange would be willing to share AIS information with the Coast Guard but would not build the facility until the conflict over AIS transmission frequencies is settled. In all three cases, the local port entity has already paid, or is willing to pay, for AIS installation, but the port entities’ ability to use AIS information depends on the coming FCC response. Although the local entities are building systems for their own purposes, all are sharing, or are planning to share, AIS information with the Coast Guard when the systems are complete. For example, the initiative taken by the Marine Exchange of Southern California alone likely saved the federal government approximately $2.2 million in AIS installation costs. The more local port organizations that are willing to pay for the purchase and installation of AIS facilities, the more the Coast Guard can save on nationwide AIS installation. If the FCC response does not allow these entities to make unrestricted use of AIS information, they are likely to be less willing to invest in such facilities.
The development of AIS nationwide is an important step in the overall effort to increase port safety and security. The Coast Guard has made an expeditious start with its installations at VTS areas and its continued planning for additional coverage, but before the system can be fully implemented, the Coast Guard faces a number of challenges. It must make key decisions on AIS’s technical requirements, the waterways to be covered, and the vessels to be equipped with AIS. The dispute with MariTEL must be resolved, and the Coast Guard must obtain financing for installation nationwide. Pending the outcome of FCC’s response, financing is one area where the Coast Guard may find help in meeting its challenges. Although the Coast Guard did not actively pursue cost-sharing options under the VTS program, by doing so now, it could potentially accomplish its nationwide AIS installation goals more quickly and reduce installation costs to the federal government. To help reduce federal costs and speed development of AIS nationwide, we recommend that, depending on the outcome of the expected FCC response, the Secretary of Homeland Security direct the Commandant of the Coast Guard to seek and take advantage of opportunities to partner with organizations willing to develop AIS systems at their own expense. We provided a draft of this report to the Department of Homeland Security, the Coast Guard, and FCC for their review and comment. The Coast Guard and FCC generally agreed with the facts presented in the report and offered technical comments that were incorporated into the report where applicable. While agreeing with our recommendation, the Coast Guard also said that developing partnerships would face challenges such as ensuring that locally built systems meet all Coast Guard requirements, dealing with reluctant partners, and structuring partnerships that maximize savings to the federal government. Given our assumption that the Coast Guard would not sacrifice AIS capability or standards in developing partnerships, we agree that developing partnerships will not necessarily be easy. We continue to believe, however, that doing so with willing local entities is in the public interest, and we continue to be encouraged in this regard by the level of interest in partnering with the Coast Guard that we found in the VTS program. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies of this report to the Department of Homeland Security and the Federal Communications Commission. We will also make copies available to others upon request. This report will also be available at no charge at GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (415) 904-2200 or at [email protected], or Steve Calvo, Assistant Director, at (206) 287-4800 or at [email protected]. Key contributors to this report are listed in appendix I. In addition to those named above, Jonathan Bachman, Chuck Bausell, Ellen W. Chu, Mathew Coco, Geoffrey Hamilton, Anne Laffoon, and Jeffrey Larson made key contributions to this report.

As part of international efforts to ensure maritime safety and security, and to carry out its mandates under the Maritime Transportation Security Act of 2002, the U.S. Coast Guard is developing an automatic identification system (AIS) that should enable it to monitor ships traveling to and through U.S.
waters. For AIS to operate nationwide, ships need equipment to transmit and receive AIS signals, and the Coast Guard needs shore stations and designated radio frequencies to keep track of the ships’ identities and movements. Yet unresolved frequency issues between the Coast Guard and a private company, MariTEL, have come before the Federal Communications Commission (FCC). GAO reviewed federal agencies’ progress in developing AIS nationwide and identified certain challenges and opportunities in completing the work. Because the Coast Guard is in the early stages of progress toward nationwide AIS development, the total cost and completion time are uncertain. The Coast Guard has taken advantage of opportunities to bring AIS into service quickly in 10 areas where vessel-monitoring technology already exists, and it is simultaneously defining and planning for full nationwide coverage. The Coast Guard has only preliminary cost estimates for a nationwide system, because geographic and other factors will affect installation at different locations. The Coast Guard estimates that planning and testing will be completed, and a request for proposals from potential contractors issued, between December 2004 and February 2005. The Coast Guard faces both challenges and potential opportunities in its development of a nationwide AIS. Nationwide development depends in part on how FCC resolves a continuing dispute between federal agencies and MariTEL over issues including who should have access to the internationally designated AIS frequencies and for what uses. To help protect its licensed rights to certain frequencies, MariTEL generally seeks either sole control over the international standard AIS frequencies or shared control with ships and the federal government. The federal government seeks a resolution that will reserve the internationally designated frequencies for AIS use by government and nongovernment entities. FCC expects to respond in summer 2004. This response, and whether it leads to any additional actions on the part of the interested parties, could affect the overall cost and pace of nationwide AIS development. Depending on FCC’s response, one factor that offers an opportunity to reduce federal costs is that some local port entities are willing to assume the expense and responsibility for AIS construction if they can use AIS data, along with the Coast Guard, for their own purposes.
With the terrorist attacks of September 2001, the threat of terrorism rose to the top of the country’s national security and law enforcement agendas. As stated by the President in his National Strategy for Homeland Security in July 2002, our nation’s terrorist enemies are constantly seeking new tactics or unexpected ways to carry out their attacks and magnify their effects, such as working to obtain chemical, biological, radiological, and nuclear weapons. In addition, terrorists are gaining expertise in less traditional means, such as cyber attacks. In response to these growing threats, Congress passed and the President signed the Homeland Security Act of 2002, creating the Department of Homeland Security (DHS). The overall mission of this new cabinet-level department includes preventing terrorist attacks in the United States, reducing the vulnerability of the United States to terrorist attacks, and minimizing damage and assisting in recovery from attacks that do occur. To accomplish this mission, the act established specific homeland security responsibilities for the department and directed it to coordinate its efforts and share information within DHS and with other federal agencies, state and local governments, the private sector, and other entities. This information sharing is critical to successfully addressing increasing threats and fulfilling the mission of DHS. DHS’s responsibilities include the protection of our nation’s publicly and privately controlled resources essential to the minimum operations of the economy and government against the risks of physical as well as computer-based or cyber attacks. Over the last decade, physical and cyber events, as well as related analyses by various entities, have demonstrated the increasing threat to the United States. The coordinated terrorist attacks against the World Trade Center in New York City and the Pentagon in Washington, D.C., on September 11, 2001, made that threat the nation’s foremost national security and law enforcement concern. Even before these catastrophic incidents, the threat of attacks against people, property, and infrastructures had increased concerns about terrorism. The terrorist bombings in 1993 of the World Trade Center in New York City and in 1995 of the Alfred P. Murrah Federal Building in Oklahoma City, which killed 168 people and wounded hundreds of others, prompted increased emphasis on the need to strengthen and coordinate the federal government’s ability to effectively combat terrorism domestically. The 1995 Aum Shinrikyo sarin nerve agent attack in the Tokyo subway system also raised new concerns about U.S. preparedness to combat terrorist incidents involving weapons of mass destruction. However, as clearly demonstrated by the September 11, 2001, incidents, a terrorist attack would not have to fit the definition of weapons of mass destruction to result in mass casualties, destruction of critical infrastructures, economic losses, and disruption of daily life nationwide. U.S. intelligence and law enforcement communities continuously assess both foreign and domestic terrorist threats to the United States. Table 1 summarizes key physical threats to homeland security. In addition to these physical threats, terrorists and others with malicious intent, such as transnational criminals and intelligence services, pose a threat to our nation’s computer systems.
As dramatic increases in computer interconnectivity, especially in the use of the Internet, continue to revolutionize the way much of the world communicates and conducts business, this widespread interconnectivity also poses significant risks to the government’s and our nation’s computer systems and, more importantly, to the critical operations and infrastructures they support. For example, telecommunications, power distribution, water supply, public health services, national defense (including the military’s warfighting capability), law enforcement, government services, and emergency services all depend on the security of their computer operations. If not properly controlled, the speed and accessibility that create the enormous benefits of the computer age also allow individuals and organizations to inexpensively eavesdrop on or interfere with these operations from remote locations for mischievous or malicious purposes. Government officials are increasingly concerned about cyber attacks by individuals and groups with malicious intent, carried out for purposes such as crime, terrorism, foreign intelligence gathering, and acts of war. According to the FBI, terrorists, transnational criminals, and intelligence services are quickly becoming aware of and are using information exploitation tools such as computer viruses, Trojan horses, worms, logic bombs, and eavesdropping sniffers that can destroy, intercept, degrade the integrity of, or deny access to data. In addition, the disgruntled organization insider is a significant threat, since these individuals often have knowledge that allows them to gain unrestricted access and inflict damage or steal assets without possessing a great deal of knowledge about computer intrusions. As greater amounts of money are transferred through computer systems, as more sensitive economic and commercial information is exchanged electronically, and as the nation’s defense and intelligence communities increasingly rely on commercially available IT, the likelihood increases that cyber attacks will threaten vital national interests. Table 2 summarizes the key cyber threats to our infrastructure. As the number of individuals with computer skills has increased, more intrusion or “hacking” tools have become readily available and relatively easy to use. A hacker can literally download tools from the Internet and “point and click” to start an attack. Experts also agree that there has been a steady advance in the sophistication and effectiveness of attack technology. Intruders quickly develop attacks to exploit vulnerabilities discovered in products, use these attacks to compromise computers, and share them with other attackers. In addition, they can combine these attacks with other forms of technology to develop programs that automatically scan the network for vulnerable systems, attack them, compromise them, and use them to spread the attack even further. Along with these increasing threats, the number of computer security incidents reported to the CERT® Coordination Center has also risen dramatically, from just under 10,000 in 1999 to about 82,000 in 2002, and to over 76,000 for the first and second quarters of 2003. And these are only the reported attacks. The Director of CERT Centers stated that he estimates that as much as 80 percent of actual security incidents go unreported, in most cases because (1) the organization was unable to recognize that its systems had been penetrated or there were no indications of penetration or attack, or (2) the organization was reluctant to report the incident.
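To put these reported figures in perspective, the back-of-the-envelope arithmetic below (ours, not a CERT/CC statistic) annualizes the partial-year 2003 count and computes the average growth rate implied by the 1999 and 2002 totals.

    # Illustrative arithmetic on the incident counts cited above; the inputs are
    # the rounded figures from the text, not official CERT/CC data.
    reported_1999 = 10_000       # "just under 10,000"
    reported_2002 = 82_000       # "about 82,000"
    reported_2003_h1 = 76_000    # "over 76,000" in the first two quarters of 2003

    # Doubling the first-half 2003 count suggests roughly 152,000 incidents for
    # the full year, nearly twice the 2002 total.
    annualized_2003 = reported_2003_h1 * 2

    # The average annual growth rate from 1999 to 2002 is about 100 percent,
    # that is, reported incidents roughly doubled each year over that period.
    average_annual_growth = (reported_2002 / reported_1999) ** (1 / 3) - 1

    print(annualized_2003)                      # 152000
    print(round(average_annual_growth * 100))   # 102 (percent per year)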
Figure 1 shows the number of incidents reported to the CERT Coordination Center from 1995 through the first half of 2003. According to the National Security Agency, foreign governments already have or are developing computer attack capabilities, and potential adversaries are developing a body of knowledge about U.S. systems and methods to attack these systems. Since the terrorist attacks of September 11, 2001, warnings of the potential for terrorist cyber attacks against our critical infrastructures have also increased. For example, in February 2002, the threat to these infrastructures was highlighted by the Special Advisor to the President for Cyberspace Security in a Senate briefing when he stated that although to date none of the traditional terrorist groups, such as al Qaeda, have used the Internet to launch a known assault on the United States’ infrastructure, information on water systems was discovered on computers found in al Qaeda camps in Afghanistan. Also, in his February 2002 statement for the Senate Select Committee on Intelligence, the director of central intelligence discussed the possibility of a cyber warfare attack by terrorists. He stated that the September 11 attacks demonstrated the nation’s dependence on critical infrastructure systems that rely on electronic and computer networks. Further, he noted that attacks of this nature would become an increasingly viable option for terrorists as they and other foreign adversaries become more familiar with these targets and the technologies required to attack them. Since September 11, 2001, the critical link between cyberspace and physical space has also been increasingly recognized. In his November 2002 congressional testimony, the Director, CERT Centers at Carnegie-Mellon University, noted that supervisory control and data acquisition (SCADA) systems and other forms of networked computer systems have been used for years to control power grids, gas and oil distribution pipelines, water treatment and distribution systems, hydroelectric and flood control dams, oil and chemical refineries, and other physical systems, and that these control systems are increasingly being connected to communications links and networks to reduce operational costs by supporting remote maintenance, remote control, and remote update functions. These computer-controlled and network-connected systems are potential targets for individuals bent on causing massive disruption and physical damage, and the use of commercial off-the-shelf technologies for these systems without adequate security enhancements can significantly limit available approaches to protection and may increase the number of potential attackers. Not only is the cyber protection of our critical infrastructures important in and of itself, but a physical attack in conjunction with a cyber attack has also been highlighted as a major concern. In fact, the National Infrastructure Protection Center (NIPC) has stated that the potential for compound cyber and physical attacks, referred to as “swarming attacks,” is an emerging threat to the U.S. critical infrastructure. As NIPC reports, the effects of a swarming attack include slowing or complicating the response to a physical attack. For example, cyber attacks can be used to delay the notification of emergency services and to deny the resources needed to manage the consequences of a physical attack. In addition, a swarming attack could be used to worsen the effects of a physical attack.
For example, a cyber attack on a natural gas distribution pipeline that opens safety valves and releases fuels or gas in the area of a planned physical attack could enhance the force of the physical attack. As our government and our nation have become ever more reliant on interconnected computer systems to support critical operations and infrastructures and as physical and cyber threats and potential attack consequences have increased, the importance of sharing information and coordinating the response to threats among stakeholders has grown. Information sharing and coordination among organizations are central to producing comprehensive and practical approaches and solutions to combating threats. For example, having information on threats and on actual incidents experienced by others can help an organization identify trends, better understand the risk it faces, and determine what preventive measures should be implemented. In addition, comprehensive, timely information on incidents can help federal and nonfederal analysis centers determine the nature of an attack, provide warnings, and advise on how to mitigate an imminent attack. Also, sharing information on terrorists and criminals can help to secure our nation’s borders. The Homeland Security Act of 2002 created DHS with the primary responsibility of preventing terrorist attacks in the United States, reducing the vulnerability of the United States to terrorist attacks, and minimizing damage and assisting in recovery from attacks that do occur. To help DHS accomplish its mission, the act establishes, among other entities, five under secretaries with responsibility over directorates for management, science and technology, information analysis and infrastructure protection, border and transportation security, and emergency preparedness and response. As part of DHS’s responsibilities, the act includes several provisions specifically related to coordinating and sharing information within the department and among other federal agencies, state and local governments, the private sector, and other entities. It also includes provisions for protecting critical infrastructure protection (CIP) information shared by the private sector and for sharing different types of information, such as grand jury and intelligence information. Other DHS responsibilities related to information sharing include requesting and receiving information from other federal agencies, state and local government agencies, and the private sector relating to threats of terrorism in the United States; distributing or, as appropriate, coordinating the distribution of warnings and information with other federal agencies, state and local governments and authorities, and the public; creating and fostering communications with the private sector; promoting existing public/private partnerships and developing new public/private partnerships to provide for collaboration and mutual support; and coordinating and, as appropriate, consolidating the federal government’s communications and systems of communications relating to homeland security with state and local governments and authorities, the private sector, other entities, and the public. Each DHS directorate is responsible for coordinating relevant efforts with other federal, state, and local governments. The act also established the Office for State and Local Government Coordination to, among other things, provide state and local governments with regular information, research, and technical support to assist them in securing the nation.
Further, the act included provisions, cited as the “Homeland Security Information Sharing Act,” that require the President to prescribe and implement procedures for facilitating homeland security information sharing and that establish authorities to share different types of information, such as grand jury information; electronic, wire, and oral interception information; and foreign intelligence information. In July 2003, the President assigned these functions to the Secretary of Homeland Security. The following sections illustrate how DHS will require successful information sharing within the department and between federal agencies, state and local governments, and the private sector to effectively carry out its mission. The Information Analysis and Infrastructure Protection Directorate (IAIP) is responsible for accessing, receiving, and analyzing law enforcement information, intelligence information, and other threat and incident information from respective agencies of federal, state, and local governments and the private sector, and for combining and analyzing such information to identify and assess the nature and scope of terrorist threats. IAIP is also tasked with coordinating with other federal agencies to administer the Homeland Security Advisory System to provide specific warning information along with advice on appropriate protective measures and countermeasures. Further, IAIP is responsible for disseminating, as appropriate, information analyzed by DHS within the department, to other federal agencies, to state and local government agencies, and to private-sector entities. The Homeland Security Act of 2002 also makes DHS and its IAIP directorate responsible for key CIP functions for the federal government. CIP involves activities that enhance the security of our nation’s cyber and physical, public and private infrastructures that are critical to national security, national economic security, and/or national public health and safety. Information sharing is a key element of these activities. Over 80 percent of our nation’s critical infrastructures are controlled by the private sector. As part of its CIP responsibilities, IAIP is responsible for (1) developing a comprehensive national plan for securing the key resources and critical infrastructure of the United States and (2) recommending measures to protect the key resources and critical infrastructure of the United States in coordination with other federal agencies and in cooperation with state and local government agencies and authorities, the private sector, and other entities. Federal CIP policy has continued to evolve since the mid-1990s through a variety of working groups, special reports, executive orders, strategies, and organizations. In particular, Presidential Decision Directive 63 (PDD 63), issued in 1998, established CIP as a national goal and described a strategy for cooperative efforts by government and the private sector to protect the physical and cyber-based systems essential to the minimum operations of the economy and the government. To accomplish its goals, PDD 63 established and designated organizations to provide central coordination and support. These included the Critical Infrastructure Assurance Office (CIAO), an interagency office established to develop a national plan for CIP, and NIPC, which was expanded to address national-level threat assessment, warning, vulnerability, and law enforcement investigation/response.
The Homeland Security Act of 2002 transferred these and certain other CIP entities and their functions (other than the Computer Investigations and Operations Section of NIPC) to DHS’s IAIP directorate. Federal CIP policy, beginning with PDD 63 and reinforced through other strategy documents, including the National Strategy for Homeland Security issued in July 2002, called for a range of activities intended to establish a partnership between the public and private sectors to ensure the security of our nation’s critical infrastructures. To ensure coverage of critical infrastructure sectors, this policy identified infrastructure sectors that were essential to our national security, national economic security, and/or national public health and safety. For these sectors, which now total 14, federal government leads (sector liaisons) and private-sector leads (sector coordinators) were to work with each other to address problems related to CIP for their sector. In particular, they were to (1) develop and implement vulnerability awareness and education programs and (2) contribute to a sectoral plan by assessing the vulnerabilities of the sector to cyber or physical attacks; recommending a plan to eliminate significant vulnerabilities; proposing a system for identifying and preventing major attacks; and developing a plan for alerting, containing, and rebuffing an attack in progress and then, in coordination with the Federal Emergency Management Agency as appropriate, rapidly reconstituting minimum essential capabilities in the aftermath of an attack. CIP policy also called for sector liaisons to identify and assess economic incentives to encourage the desired sector behavior in CIP. Federal grant programs to assist state and local efforts, legislation to create incentives for the private sector and, in some cases, regulation are mentioned in CIP policy. Federal CIP policy also encourages the voluntary creation of information sharing and analysis centers (ISACs) to serve as mechanisms for gathering, analyzing, and appropriately sanitizing and disseminating information to and from infrastructure sectors and the federal government through NIPC. Their activities could improve the security posture of the individual sectors, as well as provide an improved level of communication within and across sectors and all levels of government. While PDD 63 encouraged the creation of ISACs, it left the actual design and functions of the ISACs, along with their relationship with NIPC, to be determined by the private sector in consultation with the federal government. PDD 63 did provide suggested activities, which the ISACs could undertake, including establishing baseline statistics and patterns on the various infrastructures; serving as a clearinghouse for information within and among the various sectors; providing a library for historical data for use by the private sector and reporting private-sector incidents to NIPC. As we reported in our April 8, 2003, testimony, table 3 shows the sectors identified in federal CIP policy, the lead agencies for these sectors, and whether or not an ISAC has been established for the sector. The Interstate ISAC shown in table 3 was established by the National Association of State Chief Information Officers (NASCIO) and is intended to provide a mechanism for informing state officials about DHS threat warnings, alerts, and other relevant information, and for state officials to report information to DHS. 
According to a NASCIO official, currently, there are limited resources available to provide suggested ISAC activities. For example, there is not a watch operation, although notifications can be sent out to members at any time and some states have their own watch centers. He also stated that NASCIO’s efforts have focused on working with DHS to develop an intergovernmental approach, similar to other federal and state efforts such as law enforcement task forces, where state and federal agencies share resources and responsibilities. As called for by the National Strategy for Homeland Security, on February 14, 2003, the President also released the National Strategy to Secure Cyberspace and the complementary National Strategy for the Physical Protection of Critical Infrastructures and Key Assets. These two strategies identify priorities, actions, and responsibilities for the federal government (including lead agencies and DHS) as well as for state and local governments and the private sector. These two strategies also emphasize the importance of developing mechanisms for the public and private sectors to share information about vulnerabilities, incidents, threats, and other security data. For example, the National Strategy to Secure Cyberspace calls for the development of a National Cyberspace Security Response System. To be coordinated by DHS, this system is described as a public/private architecture for analyzing and warning, managing incidents of national significance, promoting continuity in government systems and private-sector infrastructures, and increasing information sharing across and between organizations to improve cyberspace security. The system is to include governmental and nongovernmental entities, such as private-sector ISACs. The strategies also encourage the continued establishment of ISACs and efforts to enhance the analytical capabilities of existing ISACs. As we reported in April 2003, according to a DHS official, the department is continuing to carry out the CIP activities of the functions and organizations transferred to it by the Homeland Security Act of 2002. Further, this official stated that the department is taking actions to enhance those activities as it integrates them within the new department and is continuing previously established efforts to maintain and build relationships with other federal entities, including the FBI and other NIPC partners, and with the private sector. To fulfill its mission, the IAIP directorate will need to ensure effective information sharing with other federal entities. For example, information sharing with the recently formed Terrorist Threat Integration Center (TTIC) is a central function of the directorate. TTIC was created to merge and analyze terrorist-related information collected domestically and abroad to enhance coordination, facilitate threat analysis, and enable more comprehensive threat assessments. DHS is providing staff to work at TTIC, and the center is to provide DHS with a comprehensive assessment of threat information that will guide the department’s response to any potential attacks. To help implement its cybersecurity responsibilities, in June 2003, DHS created the National Cyber Security Division within IAIP, and on September 15, 2003, DHS announced the appointment of the first director of the division. 
According to DHS, this division will identify, analyze, and reduce cyber threats and vulnerabilities; disseminate threat warning information; coordinate incident response; and provide technical assistance in continuity of operations and recovery planning. Building on capabilities transferred to DHS from the CIAO, the NIPC, the Federal Computer Incident Response Center (FedCIRC), and the National Communications System, the division is organized around three units designed to: identify risks and help reduce the vulnerabilities to government’s cyber assets and coordinate with the private sector to identify and help protect America's critical cyber assets; oversee a consolidated Cyber Security Tracking, Analysis, & Response Center, which will detect and respond to Internet events; track potential threats and vulnerabilities to cyberspace; and coordinate cybersecurity and incident response with federal, state, local, private-sector and international partners; and create, in coordination with other appropriate agencies, cybersecurity awareness and education programs and partnerships with consumers, businesses, governments, academia, and international communities. Also, on September 15, 2003, DHS announced the creation of the U.S. Computer Emergency Response Team (US–CERT)—a partnership between the National Cyber Security Division and CERT/CC. According to DHS, it will improve warning and response time to security incidents by fostering the development of detection tools and using common commercial incident and vulnerability reporting protocols—with the goal to reduce the response time to a security event to an average of 30 minutes by the end of 2004; increase the flow of critical security information throughout the Internet community; provide a coordination center that, for the first time, links public and private response capabilities to facilitate communication across all infrastructure sectors; collaborate with the private sector to develop and implement new tools and methods for detecting and responding to vulnerabilities; and work with infrastructure owners and operators and technology experts to foster the development of improved security technologies and methods to increase cybersecurity at all levels across the nation. In its announcement, DHS also stated that the US–CERT is expected to grow to include other partnerships with private-sector security vendors and other domestic and international CERT organizations. These groups will work together to coordinate national and international efforts to prevent, protect, and respond to the effects of cyber attacks across the Internet. According to the act, the Border and Transportation Security Directorate (BTS) is responsible for, among other things, (1) preventing the entry of terrorists and the instruments of terrorism into the United States; (2) securing the borders, territorial waters, ports, terminals, waterways, and air, land, and sea transportation systems, including managing and coordinating those functions transferred to the department; (3) carrying out immigration enforcement functions; (4) establishing and administering rules for granting visas, and (5) administering customs laws. A number of federal entities are under its responsibility, such as the Transportation Security Administration, U.S. Customs Service, the border security functions of the Immigration and Naturalization Service (INS), Animal and Plant Health Inspection Service, and the Federal Law Enforcement Training Center. 
To successfully protect the borders and transportation systems of the United States, BTS faces the challenge of sharing information across the various organizations under its responsibility. According to the National Strategy for Homeland Security, to successfully prevent the entry of contraband, unauthorized aliens, and potential terrorists, DHS will have to increase the level of information available on inbound goods and passengers to the border management component agencies under the BTS. For example, the strategy discusses the need to increase the security of international shipping containers—noting that 50 percent of the value of U.S. imports arrives via 16 million containers. To increase security, U.S. inspectors will need shared information so that they can identify high-risk containers. In addition, protecting our borders from the entry of unauthorized aliens and potential terrorists will require the sharing of information between various law enforcement and immigration services. For example, we recently reported on the use of watch lists as important tools to help secure our nation’s borders. These lists provide decision makers with information about individuals who are known or suspected terrorists and criminals so that these individuals can be prevented from entering the country, apprehended while in the country, or apprehended as they attempt to exit the country. According to the act, the Emergency Preparedness and Response Directorate (EPR) ensures that the nation is prepared for, and able to recover from, terrorist attacks, major disasters, and other emergencies. In addition, EPR is responsible for building a comprehensive national incident management system with federal, state, and local governments and authorities to respond to such attacks and disasters. This project will require developing an extensive program of information sharing among federal, state, and local governments. Further, EPR is to develop comprehensive programs for developing interoperable communications technology and helping to ensure that emergency response providers acquire such technology. Among the functions transferred to EPR are the Federal Emergency Management Agency, the Integrated Hazard Information System of the National Oceanic and Atmospheric Administration, and the Metropolitan Medical Response System. Information sharing is important to emergency responders to prepare for and respond to terrorist attacks and other emergencies. For example, if a biological attack were to occur, it would be important for health officials to quickly and effectively exchange information with relevant experts directly responding to the event in order to respond appropriately. To support this type of exchange, the Centers for Disease Control and Prevention (CDC) created the Epidemic Information Exchange (Epi-X), a secure, Web-based communications network that serves as an information exchange between CDC, state and local health departments, poison control centers, and other public health professionals. According to CDC, Epi-X’s primary goals include informing health officials about important public health events, helping them respond to public health emergencies, and encouraging professional growth and the exchange of information. 
CDC has also created an emergency operations center to respond to public health emergencies and to allow for immediate secure communication between CDC, the Department of Health and Human Services, federal intelligence and emergency response officials, DHS, and state and local public health officials. We have made numerous recommendations over the last several years related to information sharing functions that have been transferred to DHS. One significant area of our work concerns the federal government’s CIP efforts, which focus on sharing information on incidents, threats, and vulnerabilities and on providing warnings related to critical infrastructures, both within the federal government and between the federal government and state and local governments and the private sector. Although improvements have been made in protecting our nation’s critical infrastructures and continuing efforts are in progress, further efforts are needed to address the following critical CIP challenges that we have identified: developing a comprehensive and coordinated national plan to facilitate CIP information sharing, which clearly delineates the roles and responsibilities of federal and nonfederal CIP entities, defines interim objectives and milestones, sets time frames for achieving objectives, and establishes performance measures; developing fully productive information sharing relationships within the federal government and between the federal government and state and local governments and the private sector; improving the federal government’s capabilities to analyze incident, threat, and vulnerability information obtained from numerous sources and share appropriate timely, useful warnings and other information concerning both cyber and physical threats to federal entities, state and local governments, and the private sector; and providing appropriate incentives for nonfederal entities to increase information sharing with the federal government. In addition, we recently identified challenges in consolidating and standardizing watch list structures and policies, which are essential to effectively sharing information on suspected criminals and terrorists. An underlying issue in the implementation of CIP is that no national plan to facilitate information sharing yet exists that clearly delineates the roles and responsibilities of federal and nonfederal CIP entities, defines interim objectives and milestones, sets time frames for achieving objectives, and establishes performance measures. Such a clearly defined plan is essential for defining the relationships among all CIP organizations to ensure that the approach is comprehensive and well coordinated. Since 1998, we have reported on the need for such a plan and made numerous related recommendations. In September 1998, we reported that developing a governmentwide strategy that clearly defined and coordinated the roles of federal entities was important to ensure governmentwide cooperation and support for PDD 63. At that time, we recommended that the Office of Management and Budget (OMB) and the Assistant to the President for National Security Affairs ensure such coordination. In January 2000, the President issued Defending America’s Cyberspace: National Plan for Information Systems Protection: Version 1.0: An Invitation to a Dialogue as a first major element of a more comprehensive effort to protect the nation’s information systems and critical assets from future attacks. The plan proposed achieving the twin goals of making the U.S.
government a model of information security and developing a public/private partnership to defend our national infrastructures. However, this plan focused largely on federal cyber CIP efforts, saying little about the private-sector role. In September 2001, we reported that agency questions had surfaced regarding specific roles and responsibilities of entities involved in cyber CIP and the timeframes within which CIP objectives were to be met, as well as guidelines for measuring progress. Accordingly, we made several recommendations to supplement those we had made in the past. Specifically, we recommended that the Assistant to the President for National Security Affairs ensure that the federal government’s strategy to address computer-based threats define specific roles and responsibilities of organizations involved in CIP and related information security activities; interim objectives and milestones for achieving CIP goals and a specific action plan for achieving these objectives, including implementing vulnerability assessments and related remedial plans; and performance measures for which entities can be held accountable. In July 2002, we issued a report identifying at least 50 organizations that were involved in national or multinational cyber CIP efforts, including 5 advisory committees; 6 Executive Office of the President organizations; 38 executive branch organizations associated with departments, agencies, or intelligence organizations; and 3 other organizations. Although our review did not cover organizations with national physical CIP responsibilities, the large number of organizations that we did identify as involved in CIP efforts presents a need to clarify how these entities coordinate their activities with each other. Our report also stated that PDD 63 did not specifically address other possible critical sectors and their respective federal agency counterparts. Accordingly, we recommended that the federal government’s strategy also include all relevant sectors and define the key federal agencies’ roles and responsibilities associated with each of these sectors, and define the relationships among the key CIP organizations. In July 2002, the National Strategy for Homeland Security called for interim cyber and physical infrastructure protection plans that DHS would use to build a comprehensive national infrastructure plan. Implementing a well-developed plan is critical to effective coordination in times of crises. According to the strategy, the national plan is to provide a methodology for identifying and prioritizing critical assets, systems, and functions, and for sharing protection responsibility with state and local governments and the private sector. The plan is also to establish standards and benchmarks for infrastructure protection and provide a means to measure performance. The plan is expected to inform DHS on budgeting and planning for CIP activities and how to use policy instruments to coordinate between government and private entities to improve the security of our national infrastructures to appropriate levels. The strategy also states that DHS is to unify the currently divided responsibilities for cyber and physical security. According to the department’s November 2002 reorganization plan, the Assistant Secretary for Infrastructure Protection is responsible for developing a comprehensive national infrastructure plan. 
As discussed previously, in February 2003, the President issued the interim strategies—The National Strategy to Secure Cyberspace and The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets (hereafter referred to in this testimony as the cyberspace security strategy and the physical protection strategy). These strategies identify priorities, actions, and responsibilities for the federal government, including federal lead departments and agencies and DHS, as well as for state and local governments and the private sector. Both define strategic objectives for protecting our nation's critical assets. The physical protection strategy discusses the goals and objectives for protecting our nation's critical infrastructure and key assets from physical attack. The cyberspace security strategy provides a framework for organizing and prioritizing the individual and concerted responsibilities of all levels of government to secure cyberspace.

According to the physical protection strategy, across government, there are inconsistent methodologies to prioritize efforts to enhance critical infrastructure protection. This problem is compounded by ineffective communication among the federal, state, and local governments that has resulted in untimely, disparate, and at times conflicting communication among those who need it most. DHS has been given a primary role in providing cross-sector coordination to improve communication and planning efforts and serves as the single point of coordination for state and local governments on homeland security issues. To fulfill its role as the cross-sector coordinator, DHS will partner with state and local governments and the private sector to institute processes that are transparent, comprehensive, and results-oriented. This effort will include creating mechanisms for collaborative national planning efforts between the private and public sectors and for consolidating the individual sector plans into a comprehensive plan that will define their respective roles, responsibilities, and expectations.

The cyberspace security strategy is the counterpart to the physical protection strategy and provides the framework for organizing and prioritizing the individual and concerted responsibilities of all levels of government to secure cyberspace. DHS serves as the focal point for managing cybersecurity incidents that could affect the federal government or the national information infrastructure and, thus, plays a central role in executing the initiatives assigned in this strategy. While the cyberspace security strategy mentions the responsibility of DHS in creating a comprehensive national plan for securing resources and key infrastructures, much of the strategy's emphasis remains on coordinating and integrating various plans with the private sector.

Neither strategy (1) clearly indicates how the physical and cyber efforts will be coordinated; (2) defines the roles, responsibilities, and relationships among the key CIP organizations, including state and local governments and the private sector; (3) indicates time frames or milestones for their overall implementation or for accomplishing specific actions or initiatives; nor (4) establishes performance measures for which entities can be held responsible.
Until a comprehensive and coordinated plan is completed that unifies the responsibilities for cyber and physical infrastructures; identifies roles, responsibilities, and relationships for all CIP efforts; establishes time frames or milestones for implementation; and establishes performance measures, our nation risks not having a consistent and appropriate information sharing framework to deal with growing threats to its critical infrastructure. Information sharing is a key element in developing comprehensive and practical approaches to defending against potential cyber and other attacks, which could threaten the national welfare. Information on threats, vulnerabilities, and incidents experienced by others can help identify trends, better understand the risks faced, and determine what preventive measures should be implemented. However, as we have reported in recent years, establishing the trusted relationships and information-sharing protocols necessary to support such coordination can be difficult. In addition, the private sector has expressed concerns about sharing information with the government and the difficulty of obtaining security clearances. Both the Congress and the administration have taken steps to address information sharing issues in law and recent policy guidance, but their effectiveness will largely depend on how DHS implements its information sharing responsibilities. A number of activities have been undertaken to build information-sharing relationships between the federal government and the private sector, such as InfraGard, the Partnership for Critical Infrastructure Security, efforts by the CIAO, and efforts by lead agencies to establish ISACs. For example, the InfraGard Program, which provides the FBI and NIPC with a means of securely sharing information with individual companies, has expanded substantially. InfraGard membership has increased from 277 in October 2000 to almost 9,400 in September 2003. Members include representatives from private industry, other government agencies, state and local law enforcement, and the academic community. As stated above, PDD 63 encouraged the voluntary creation of ISACs to serve as the mechanism for gathering, analyzing, and appropriately sanitizing and disseminating information between the private sector and the federal government through NIPC. In April 2001, we reported that NIPC and other government entities had not developed fully productive information-sharing relationships but that NIPC had undertaken a range of initiatives to foster information-sharing relationships with ISACs, as well as with government and international entities. We recommended that NIPC formalize relationships with ISACs and develop a plan to foster a two-way exchange of information between them. In response to our recommendations, NIPC officials told us in July 2002 that an ISAC development and support unit had been created, whose mission was to enhance private-sector cooperation and trust so that it would result in a two-way sharing of information. As shown previously in table 3, as of April 2003, DHS reported that there are 16 current ISACs, including ISACs established for sectors not identified as critical infrastructure sectors. DHS officials also stated that they have formal agreements with most of the current ISACs. In spite of progress made in establishing ISACs, additional efforts are needed. 
All sectors do not have a fully established ISAC, and even for those sectors that do, our recent work showed that participation may be mixed, and the amount of information being shared between the federal government and private-sector organizations also varies. Specifically, as we reported in February 2003, the five ISACs we recently reviewed showed different levels of progress in implementing the PDD 63 suggested activities. For example, four of the five reported that efforts were still in progress to establish baseline statistics, which includes developing a database on the normal levels of computer security incidents that would be used for analysis purposes. Also, while all five reported that they served as the clearinghouse of information (such as incident reports and warnings received from members) for their own sectors, only three of the five reported that they are also coordinating with other sectors. Only one of the five ISACs reported that it provides a library of incidents and historical data that was available to both the private sector and the federal government, and although three additional ISACs do maintain a library, it was available only to the private sector. Table 4 summarizes the reported status of the five ISACs in performing these and other activities suggested by PDD 63. As also noted in our February 2003 report, some in the private sector expressed concerns about voluntarily sharing information with the government. Specifically, concerns were raised that industry could potentially face antitrust violations for sharing information with other industry partners, have their information subject to the Freedom of Information Act (FOIA), or face potential liability concerns for information shared in good faith. For example, the IT, energy, and the water ISACs reported that they did not share their libraries with the federal government because of concerns that information could be released under FOIA. And, officials of the energy ISAC stated that they have not reported incidents to NIPC because of FOIA and antitrust concerns. The recently established ISAC Council may help to address some of these concerns. According to its chairman, the mission of the ISAC Council is to advance the physical and cybersecurity of the critical infrastructures of North America by establishing and maintaining a framework for interaction between and among the ISACs. Activities of the council include establishing and maintaining a policy for inter-ISAC coordination, a dialog with governmental agencies that deal with ISACs, and a practical data and information sharing protocol (what to share and how to share). In addition, the council will develop analytical methods to assist the ISACs in supporting their own sectors and other sectors with which there are interdependencies and establish a policy to deal with matters of liability and anti-trust. The chairman also reported that the council held an initial meeting with DHS and the White House in June 2003 to, among other things, understand mutual DHS and ISAC expectations. There will be continuing debate as to whether adequate protection is being provided to the private sector as these entities are encouraged to disclose and exchange information on both physical and cybersecurity problems and solutions that are essential to protecting our nation’s critical infrastructures. The National Strategy for Homeland Security includes “enabling critical infrastructure information sharing” in its 12 major legislative initiatives. 
It states that the nation must meet this need by narrowly limiting public disclosure of information relevant to protecting our physical and cyber critical infrastructures in order to facilitate the voluntary submission of information. It further states that the Attorney General will convene a panel to propose any legal changes necessary to enable sharing of essential homeland security-related information between the federal government and the private sector. Actions have already been taken by the Congress and the administration to strengthen information sharing. For example, the USA PATRIOT Act promotes information sharing among federal agencies, and numerous terrorism task forces have been established to coordinate investigations and improve communications among federal and local law enforcement. Moreover, the Homeland Security Act of 2002 includes provisions that restrict federal, state, and local government use and disclosure of critical infrastructure information that has been voluntarily submitted to DHS. These restrictions include exemption from disclosure under FOIA, a general limitation on use to CIP purposes, and limitations on use in civil actions and by state or local governments. The act also provides penalties for any federal employee who improperly discloses any protected critical infrastructure information. In April 2003, DHS issued for comment its proposed rules for how critical infrastructure information volunteered by the public will be protected. At this time, it is too early to tell what impact the act will have on the willingness of the private sector to share critical infrastructure information.

Information sharing among federal, state, and local governments also needs to be improved. In August 2003 we reported the results of our survey of federal, state, and city government officials' perceptions of the effectiveness of the current information-sharing process. Performed primarily before DHS began its operations, our survey identified some notable information-sharing initiatives, but also highlighted coordination issues and other concerns that many of the surveyed entities had with the overall information-sharing process. For example, the FBI reported it had significantly increased the number of its Joint Terrorism Task Forces and, according to our survey, 34 of 40 states and 160 of 228 cities stated that they participated in information-sharing centers. However, although such initiatives may increase the sharing of information to fight terrorism, none of the three levels of government perceived the current information-sharing process as effective, particularly when sharing information with federal agencies. Respondents reported that information on threats, methods, and techniques of terrorists was not routinely shared, and the information that was shared was not perceived as timely, accurate, or relevant. Further, 30 of 40 states and 212 of 228 cities responded that they were not given the opportunity to participate in national policy making on information sharing. Federal agencies in our survey also identified several barriers to sharing threat information with state and city governments, including the inability of state and city officials to secure and protect classified information, the lack of federal security clearances, and a lack of integrated databases. The private sector has also expressed its concerns about the value of information being provided by the government.
For example, in July 2002 the President of the Partnership for Critical Infrastructure Security stated in congressional testimony that information sharing between the government and private sector needs work, specifically in the quality and timeliness of cybersecurity information coming from the government. In March 2003 we also reported that officials from the chemical industry noted that they need better threat information from law enforcement agencies, as well as better coordination among agencies providing threat information. They stated that chemical companies do not receive enough specific threat information and that it frequently comes from multiple government agencies. Similarly, in developing a vulnerability assessment methodology to assess the security of chemical facilities against terrorist and criminal attacks, the Department of Justice observed that chemical facilities need more specific information about potential threats in order to design their security systems and protocols. Chemical industry officials also noted that efforts to share threat information among industry and federal agencies will be effective only if government agencies provide specific and accurate threat information. Threat information also forms the foundation for some of the tools available to industry for assessing facility vulnerabilities. The Justice vulnerability assessment methodology requires threat information as the foundation for hypothesizing about threat scenarios, which form the basis for determining site vulnerabilities.

The Homeland Security Act, the National Strategy for Homeland Security, the National Strategy to Secure Cyberspace, and the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets all acknowledge the importance of information sharing and identify multiple responsibilities for DHS to share information on threats and vulnerabilities. In particular: The Homeland Security Act authorizes the IAIP Under Secretary to have access to all information in the federal government that concerns infrastructure or other vulnerabilities of the United States to terrorism and to use this information to fulfill its responsibilities to provide appropriate analysis and warnings related to threats to and vulnerabilities of critical information systems, crisis management support in response to threats or attacks on critical information systems, and technical assistance upon request to private-sector and government entities to respond to major failures of critical information systems. The National Strategy for Homeland Security specifies the need for DHS to work with state and local governments to achieve "seamless communication" among all responders. This responsibility includes developing a national emergency communication plan to establish policies and procedures to improve the exchange of information. Ensuring improved communications also involves developing systems that help prevent attacks and minimize damage. Such systems, which would be accessed and used by all levels of government, would detect hostile intent and help locate individual terrorists as well as monitor and detect outbreaks. The cyberspace security strategy encourages DHS to work with the National Infrastructure Advisory Council and the private sector to develop an optimal approach and mechanism to disclose vulnerabilities in order to expedite the development of solutions without creating opportunities for exploitation by hackers.
DHS is also expected to raise awareness about removing obstacles to sharing information concerning cybersecurity and infrastructure vulnerabilities between the public and private sectors and is encouraged to work closely with ISACs to ensure that they receive timely and actionable threat and vulnerability data and to coordinate voluntary contingency planning efforts. The physical protection strategy describes DHS's need to collaborate with the intelligence community and the Department of Justice to develop comprehensive threat collection, assessment, and dissemination processes so that threat information is distributed to the appropriate entities in a timely manner. It also enumerates several initiatives that DHS is to accomplish to create a more effective information-sharing environment among the key stakeholders, including establishing requirements for sharing information; supporting state and local participation in ISACs to more effectively communicate threat and vulnerability information; protecting secure and proprietary information deemed sensitive by the private sector; implementing processes for collecting, analyzing, and disseminating threat data to integrate information from all sources; and developing interoperable systems to share sensitive information among government entities to facilitate meaningful information exchange. The National Strategy for Homeland Security also describes DHS's need to engage its partners around the world in cooperative efforts to improve security. It states that DHS will increase information sharing between the international law enforcement, intelligence, and military communities.

Analysis and warning capabilities should be developed to detect precursors to attacks on the nation so that advanced warnings can be issued and protective measures implemented. Since the 1990s, the national security community and the Congress have identified the need to establish analysis and warning capabilities to protect against strategic computer attacks against the nation's critical computer-dependent infrastructures. Such capabilities need to address both cyber and physical threats and involve (1) gathering and analyzing information for the purpose of detecting and reporting otherwise potentially damaging actions or intentions and (2) implementing a process for warning policymakers and allowing them time to determine the magnitude of the related risks. In April 2001, we reported on NIPC's progress and impediments in developing analysis and warning capabilities for computer-based attacks, which included the following:

Lack of a generally accepted methodology for analyzing strategic cyber-based threats. For example, there was no standard terminology, no standard set of factors to consider, and no established thresholds for determining the sophistication of attack techniques. According to officials in the intelligence and national security community, developing such a methodology would require an intense interagency effort and dedication of resources.

Lack of industry-specific data on factors such as critical system components, known vulnerabilities, and interdependencies. Under PDD 63, such information is to be developed for each of eight industry segments by industry representatives and the designated federal lead agencies. In September 2001, we reported that although outreach efforts had raised awareness and improved information sharing, substantive, comprehensive analysis of infrastructure sector interdependencies and vulnerabilities had been limited.
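The findings above point to the absence of established reporting baselines and thresholds. Purely as a hedged illustration, and not a description of NIPC's, any ISAC's, or DHS's actual methodology, the sketch below shows how a historical baseline and a threshold factor could be combined to decide when incident activity warrants a warning; the incident counts and the threshold value are invented for this example.

```python
# Hedged, simplified sketch of threshold-based analysis and warning: compare
# today's incident reports against a historical baseline and flag a warning
# when activity exceeds a chosen multiple of the normal level. The counts and
# the threshold factor below are invented and purely illustrative.
from statistics import mean

# Hypothetical daily incident counts reported by one sector in prior weeks.
baseline_counts = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]

def warning_needed(todays_count: int, history: list, threshold_factor: float = 2.0) -> bool:
    """Flag a potential warning when today's reports exceed the historical
    average by more than the threshold factor."""
    normal_level = mean(history)
    return todays_count > threshold_factor * normal_level

if __name__ == "__main__":
    for count in (6, 14):
        status = "issue warning" if warning_needed(count, baseline_counts) else "within normal range"
        print(f"{count} incidents reported today: {status}")
```

In practice, agreeing on what counts as an incident, how it is reported, and what level of deviation justifies a warning is precisely the standardization work the methodology discussed above would have to settle.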
Another challenge confronting the analysis and warning capabilities of our nation is that, historically, our national CIP attention and efforts have been focused on cyber threats. As we also reported in April 2001, although PDD 63 covers both physical and cyber threats, federal efforts to meet the directive's requirements have pertained primarily to cyber threats since this is an area that the leaders of the administration's CIP strategy view as needing attention. However, the terrorist attacks of September 11, 2001, have increased the emphasis on physical threats. In addition, in July 2002, NIPC reported that the potential for concurrent cyber and physical ("swarming") attacks is an emerging threat to the U.S. critical infrastructure. Further, in July 2002, the director of NIPC also told us that NIPC had begun to develop some capabilities for identifying physical CIP threats. For example, NIPC had developed thresholds with several ISACs for reporting physical incidents and, since January 2002, has issued several information bulletins concerning physical CIP threats. However, NIPC's director acknowledged that fully developing this capability would be a significant challenge. The physical protection strategy states that DHS will maintain a comprehensive, up-to-date assessment of vulnerabilities across sectors and improve processes for domestic threat data collection, analysis, and dissemination to state and local governments and private industry.

The administration and the Congress continue to emphasize the need for these analysis and warning capabilities. The National Strategy for Homeland Security identified intelligence and warning as one of six critical mission areas and called for major initiatives to improve our nation's analysis and warning capabilities. The strategy also stated that no government entity was then responsible for analyzing terrorist threats to the homeland, mapping these threats to our vulnerabilities, and taking protective action. The Homeland Security Act gives such responsibility to the new DHS. For example, the IAIP Under Secretary is responsible for administering the Homeland Security Advisory System and is to coordinate with other federal agencies to provide specific warning information and advice to state and local agencies, the private sector, the public, and other entities about appropriate protective measures and countermeasures to homeland security threats.

An important aspect of improving our nation's analysis and warning capabilities is having comprehensive vulnerability assessments. The National Strategy for Homeland Security also states that comprehensive vulnerability assessments of all of our nation's critical infrastructures are important from a planning perspective in that they enable authorities to evaluate the potential effects of an attack on a given sector and then invest accordingly to protect it. The strategy states that the U.S. government does not perform vulnerability assessments of the nation's entire critical infrastructure. The Homeland Security Act of 2002 states that DHS's IAIP Under Secretary is to carry out comprehensive assessments of the vulnerabilities of key resources and critical infrastructures of the United States.

Another critical issue in developing effective analysis and warning capabilities is to ensure that appropriate intelligence and other threat information, both cyber and physical, is received from the intelligence and law enforcement communities.
For example, there has been considerable public debate regarding the quality and timeliness of intelligence data shared between and among relevant intelligence, law enforcement, and other agencies. Also, because the transfer of NIPC to DHS organizationally separated it from the FBI's law enforcement activities (including the Counterterrorism Division and NIPC field agents), it will be critical to establish mechanisms to ensure continued communication. Further, it will be important that the relationships between the law enforcement and intelligence communities and the new DHS are effective and that appropriate information is exchanged on a timely basis. The act gives DHS broad statutory authority to access intelligence information, as well as other information relevant to the terrorist threat, and to turn this information into useful warnings. For example, DHS is to be a key participant in the multiagency TTIC that began operations on May 1, 2003. According to a White House fact sheet, DHS's IAIP is to receive and analyze terrorism-related information from the TTIC. Although the purpose of TTIC and the authorities and responsibilities of the FBI and Central Intelligence Agency (CIA) counterterrorism organizations remain distinct, in July 2003, the TTIC Director reported that initiatives are under way to facilitate efforts within the intelligence community to ensure that DHS has access to all information required to execute its mission. He also reported other progress, such as updates to a TTIC-sponsored Web site that provides terrorism-related information. For example, the Web site is to increasingly include products tailored to the needs of state and local officials, as well as private industry.

In addition, according to NIPC's director, as of July 2002, a significant challenge in developing a robust analysis and warning function is the development of the technology and human capital capacities to collect and analyze substantial amounts of information. Similarly, the Director of the FBI testified in June 2002 that implementing a more proactive approach to preventing terrorist acts and denying terrorist groups the ability to operate and raise funds require a centralized and robust analytical capacity that did not then exist in the FBI's Counterterrorism Division. He also stated that processing and exploiting information gathered domestically and abroad during the course of investigations require an enhanced analytical and data mining capacity that was not then available. According to DHS's reorganization plans, the IAIP Under Secretary and the chief information officer (CIO) of the department are to fulfill their responsibilities as laid out by the act to establish and use a secure communications and IT infrastructure. This infrastructure is to include data-mining and other analytical tools in order to access, receive, analyze, and disseminate data and information.

PDD 63 stated that sector liaisons should identify and assess economic incentives to encourage sector information sharing and other desired behavior. Consistent with the original intent of PDD 63, the National Strategy for Homeland Security states that, in many cases, sufficient incentives exist in the private market for addressing the problems of CIP. However, the strategy also discusses the need to use all available policy tools to protect the health, safety, or well-being of the American people.
It mentions federal grant programs to assist state and local efforts, legislation to create incentives for the private sector, and, in some cases, regulation. The physical protection strategy reiterates that additional regulatory directives and mandates should only be necessary in instances where the market forces are insufficient to prompt the necessary investments to protect critical infrastructures and key assets. The cyberspace security strategy also states that the market is to provide the major impetus to improve cybersecurity and that regulation will not become a primary means of securing cyberspace. Last year, the Comptroller General testified on the need for strong partnerships with those outside the federal government and that the new department would need to design and manage tools of public policy to engage and work constructively with third parties. We have also previously testified on the choice and design of public policy tools that are available to governments. These public policy tools include grants, regulations, tax incentives, and regional coordination and partnerships to motivate and mandate other levels of government or the private sector to address security concerns. Some of these tools are already being used, such as in the water and chemical sectors. Without appropriate consideration of public policy tools, private-sector participation in sector-related information sharing and other CIP efforts may not reach its full potential. For example, we reported in January 2003 on the efforts of the financial services sector to address cyber threats, including industry efforts to share information and to better foster and facilitate sectorwide efforts. We also reported on the efforts of federal entities and regulators to partner with the financial services industry to protect critical infrastructures and to address information security. We found that although federal entities had a number of efforts ongoing, Treasury, in its role as sector liaison, had not undertaken a comprehensive assessment of the potential public policy tools to encourage the financial services sector in implementing information sharing and other CIP-related efforts. Because of the importance of considering public policy tools to encourage private-sector participation, we recommended that Treasury assess the need for public policy tools to assist the industry in meeting the sector’s goals. In addition, in February 2003, we reported on the mixed progress five ISACs had made in accomplishing the activities suggested by PDD 63. We recommended that the responsible lead agencies assess the need for public policy tools to encourage increased private-sector CIP activities and greater sharing of intelligence and incident information between the sectors and the federal government. The President’s fiscal year 2004 budget request for the new DHS includes $829 million for information analysis and infrastructure protection, a significant increase from the estimated $177 million for fiscal year 2003. In particular, the requested funding for protection includes about $500 million to identify key critical infrastructure vulnerabilities and support the necessary steps to ensure that security is improved at these sites. Although the funding also includes almost $300 million for warning advisories, threat assessments, a communications system, and outreach efforts to state and local governments and the private sector, additional incentives may still be needed to encourage nonfederal entities to increase their CIP efforts. 
We recently reported on the terrorist and criminal watch list systems maintained by different federal agencies. These watch lists are important information-sharing tools for securing our nation's borders against terrorists. Simply stated, watch lists can be viewed as automated databases that are supported by certain analytical capabilities. These lists contain various types of data, from biographical data—such as a person's name and date of birth—to biometric data such as fingerprints. Nine federal agencies, which before the establishment of DHS spanned five different cabinet-level departments, currently maintain 12 terrorist and criminal watch lists; of these nine agencies, the Immigration and Naturalization Service, the Transportation Security Administration, and the U.S. Customs Service have been incorporated into the new DHS. These lists are also used by at least 50 federal, state, and local agencies. According to the National Strategy for Homeland Security, in the aftermath of the September 11th attacks, it became clear that vital watch list information stored in numerous and disparate databases was not available to the right people at the right time. In particular, federal agencies that maintained information about terrorists and other criminals had not consistently shared it. The strategy attributed these information-sharing limitations to legal, cultural, and technical barriers that resulted in the watch lists being developed in different ways, for different purposes, and in isolation from one another. To address these limitations, the strategy provides for developing a consolidated watch list that would bring together the information on known or suspected terrorists contained in federal agencies' respective lists. As we reported, we found that the watch lists include overlapping but not identical sets of data, and that different policies and procedures govern whether and how these data are shared with others. As a general rule, we found that this information sharing is more likely to occur among federal agencies than between federal agencies and either state and local government agencies or private entities. Among other things, we also found that the extent to which such information sharing is accomplished electronically is constrained by fundamental differences in the watch lists' systems architecture. Also, differences in agencies' cultures have been and remain one of the principal impediments to integrating and sharing watch list and other information. We recommended that the Secretary of DHS, in collaboration with the heads of other departments and agencies that have or use watch lists, lead an effort to consolidate and standardize the federal government's watch list structures and policies to promote better integration and information sharing. DHS generally agreed with our findings and recommendations.

The success of homeland security relies on establishing effective systems and processes to facilitate information sharing among government entities and the private sector. In May 2003, the CIO of DHS stated that a key goal in protecting our nation is to put in place mechanisms that provide the right information to the right people in a timely manner.
He further stated that with the use of IT, homeland security officials throughout the United States will have a more complete awareness of threats and vulnerabilities, as well as knowledge of the personnel and resources available to counter those threats. We have identified critical success factors for information sharing that DHS should consider. Also, in addition to the need to develop technological solutions, key management issues that DHS must overcome to achieve success include integrating existing IT resources of 22 different agencies, making new IT investments, ensuring that sensitive information is secured, developing secure communications networks, developing a performance focus, integrating staff from different organizations and ensuring that the department has properly skilled staff, and ensuring effective oversight. Addressing these issues will be critical to establishing the effective systems and processes required to facilitate information sharing within the new department.

In October 2001, we reported on information sharing practices of organizations that successfully share sensitive or time-critical information. We found that these practices include establishing trust relationships with a wide variety of federal and nonfederal entities that may be in a position to provide potentially useful information and advice on vulnerabilities and incidents; developing standards and agreements on how shared information will be used and protected; establishing effective and appropriately secure communications mechanisms; and taking steps to ensure that sensitive information is not inappropriately disseminated. Among the organizations we studied, we found some very good models to learn from and build on. For example, CERT/CC is charged with establishing a capability to quickly and effectively coordinate communication among experts in order to limit damage, respond to incidents, and build awareness of security issues across the Internet community. In this role, CERT/CC receives Internet security-related information from system and network administrators, technology managers, and policymakers and provides them with this information, along with guidance and coordination, during major security events. Further, the Agora is a Seattle-based regional network that at the time of our study had over 600 professionals representing various fields, including information systems security; law enforcement; local, state, and federal governments; engineering; IT; academics; and other specialties. Members work to establish confidential ways for organizations to share sensitive information about common problems and best practices for dealing with security threats. They develop and share knowledge about how to protect electronic infrastructures, and they prompt more research specific to electronic information systems security.

In addition, we have previously reported on several other key considerations in establishing effective information sharing, including identifying and agreeing on the types of information to be collected and shared between parties, developing standard terms and reporting thresholds, balancing varying interests and expectations, and determining the right format and standards for collecting data so that disparate agencies can aggregate and integrate data sets. Some efforts have already taken place in these areas. For example, NIPC obtained information-sharing agreements with most ISACs, which included specific reporting thresholds for physical and cyber incidents.
Also, incident reporting thresholds have been publicly issued. It will be important for DHS to incorporate these considerations into its information-sharing efforts. Developing and implementing appropriate technological solutions can improve the effectiveness and efficiency of information sharing. We have previously reported on the lack of connectivity and interoperability between databases and technologies important to the homeland security effort. Databases belonging to federal law enforcement agencies and INS, for example, are not connected, and databases between state, local, and federal governments are not always connected. The technological constraints caused by different system architectures that impede the sharing of different agencies’ watch lists illustrate the widespread lack of interoperability of many federal government information systems. New technologies for data integration and interoperability could enable agencies to share information without the need for radical structural changes. This would allow the component agencies of DHS to work together yet retain a measure of autonomy, thus removing some barriers hindering agencies from embracing change. In August 2002, we reported on various existing technologies that could be more widely implemented to facilitate information sharing. We reported that Extensible Markup Language (XML) is useful for better information sharing. XML is a flexible, nonproprietary set of standards for annotating or “tagging” information so that it can be transmitted over a network such as the Internet and readily interpreted by disparate computer systems. If implemented broadly with consistent data definitions and structures, XML offers the promise of making it significantly easier for organizations and individuals to identify, integrate, and process information that may be widely dispersed among systems and organizations. For example, law enforcement agencies could potentially better identify and retrieve information about criminal suspects from any number of federal, state, and local databases. We also reported that various technologies could be used to protect information in shared databases. For example, data could be protected through electronically secured entry technology (ESET). ESET would allow users of separate databases to cross check or “mine” data securely without directly disclosing their information to others, thus allowing agencies to collaborate as well as address their needs for confidentiality or privacy. Such technology could, for example, allow an airline to cross check a passenger or employee against data held by government agencies in a single-step process without actually disclosing the data to the airline. In checking an individual, the airline would not receive any data from the agencies’ databases; rather, it would receive a “yes or no” type of response and/or a referral for further action. Additionally, appropriate authorities could automatically be notified. We noted that intrusion detection systems could be used to prevent unauthorized users from accessing shared information. Intrusion detection uses normal system and network activity data as well as known attack patterns. Deviations from normal traffic patterns can help to identify potential intruders. We also observed the need to simplify the process of analyzing information to more efficiently and effectively identify information of consequence that must be shared. 
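To make the XML discussion above more concrete, the following sketch, offered purely as an illustration, shows how consistently tagged records could be produced by one system and parsed by another without either knowing the other's database layout. The element names ("subject," "name," "dateOfBirth," "sourceAgency") and the record content are invented for this example and are not drawn from any actual federal schema.

```python
# Illustrative sketch only: a hypothetical XML exchange format for a
# watch-list-style record. The tags are invented for this example; the point
# is that agreed-upon, consistent tagging lets disparate systems interpret
# the same fields.
import xml.etree.ElementTree as ET

def build_record(name: str, dob: str, source_agency: str) -> str:
    """Serialize a record using agreed-upon tags so any receiving system
    can interpret the fields without knowing the sender's database layout."""
    subject = ET.Element("subject")
    ET.SubElement(subject, "name").text = name
    ET.SubElement(subject, "dateOfBirth").text = dob  # ISO 8601 date
    ET.SubElement(subject, "sourceAgency").text = source_agency
    return ET.tostring(subject, encoding="unicode")

def parse_record(xml_text: str) -> dict:
    """A receiving agency extracts fields by tag name, not by position."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

if __name__ == "__main__":
    message = build_record("DOE, JOHN", "1970-01-01", "Example State Police")
    print(message)
    print(parse_record(message))
```

The value of an approach like this comes less from the code than from the prior agreement on data definitions; without common tags and formats, each receiving system would still need custom translation logic for every sender.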
Great emphasis has been placed upon data mining and data integration, but the third and perhaps most crucial component may be data visualization. The vast amount of information potentially available to be mined and integrated must be intelligently analyzed, and the results effectively presented, so that the right people have the right information necessary to act effectively upon such information. This may involve pinpointing the relevant anomalies. Before DHS was established, the Office of Homeland Security had already begun several technological initiatives to integrate terrorist-related information from databases from different agencies responsible for homeland security. These included (1) adopting meta-data standards for electronic information so that homeland security officials understood what information was available and where it could be found and (2) developing data-mining tools to assist in identifying patterns of criminal behavior so that suspected terrorists could be detained before they could act. To address these technological challenges, the Homeland Security Act emphasized investments in new and emerging technologies to meet some of these challenges and established the Science and Technology Directorate, making it responsible for establishing and administering research and development efforts and priorities to support DHS missions. Improving IT management will be critical to transforming the new department. DHS should develop and implement an enterprise architecture, or corporate blueprint, to integrate the many existing systems and processes required to support its mission. This architecture will also guide the department’s investments in new systems to effectively support homeland security in the coming years. Other key IT management capacities that DHS will need to establish include investment and acquisition management processes, effective IT security, and secure communications networks. Effectively managing a large and complex endeavor requires, among other things, a well-defined and enforced blueprint for operational and technological change, commonly referred to as an enterprise architecture. Developing, maintaining, and using enterprise architectures is a leading practice in engineering both individual systems and entire enterprises. Enterprise architectures include several components, including a (1) current or “as is” environment, (2) target or “to be” environment, and (3) transition plan or strategy to move from the current to the target environment. Governmentwide requirements for having and using architectures to guide and constrain IT investment decision making are also addressed in federal law and guidance. Our experience with federal agencies has shown that attempts to transform IT environments without enterprise architectures often result in unconstrained investment and systems that are duplicative and ineffective. Moreover, our February 2002 report on the federal agencies’ use of enterprise architectures found that their use of enterprise architectures was a work in progress, with much to be accomplished. DHS faces tremendous IT challenges because programs and agencies have been brought together in the new department from throughout the government, each with their own information systems. It will be a major undertaking to integrate these diverse systems to enable effective information sharing among themselves, as well as with those outside the department. 
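As a hedged illustration of the "as is," "to be," and transition-plan components described above, the sketch below derives a simple transition plan from invented system inventories. The function names and systems are assumptions for this example only; it does not describe DHS's actual systems, architecture, or plans.

```python
# Minimal sketch of the "as is"/"to be"/transition-plan idea behind an
# enterprise architecture, using invented system names purely for
# illustration.

# Current ("as is") environment: which systems support each business function.
as_is = {
    "case management": ["legacy_system_a", "legacy_system_b"],
    "alert dissemination": ["fax_tree"],
}

# Target ("to be") environment: the intended consolidated systems.
to_be = {
    "case management": ["consolidated_case_system"],
    "alert dissemination": ["secure_alert_network"],
}

def transition_plan(current: dict, target: dict) -> list:
    """For each business function, list the systems to retire and the
    systems to field in moving from the current to the target environment."""
    steps = []
    for function in sorted(set(current) | set(target)):
        retire = [s for s in current.get(function, []) if s not in target.get(function, [])]
        field = [s for s in target.get(function, []) if s not in current.get(function, [])]
        steps.append({"function": function, "retire": retire, "field": field})
    return steps

if __name__ == "__main__":
    for step in transition_plan(as_is, to_be):
        print(step)
```

Even in this toy form, the exercise shows why an inventory of current systems is the necessary first step: gaps, overlaps, and retirement candidates only become visible once both environments are described against the same business functions.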
The Office of Homeland Security has acknowledged that an enterprise architecture is an important next step because it can help identify shortcomings and opportunities in current homeland-security-related operations and systems, such as duplicative, inconsistent, or missing information. Furthermore, the President's homeland security strategy identifies, among other things, the lack of an enterprise architecture as an impediment to DHS's systems interoperating effectively and efficiently. Finally, the CIO of DHS has stated that the most important function of his office will be to design and help implement a national enterprise architecture that will guide the department's investment in and use of IT. As part of its enterprise architecture development efforts, the department has established working groups comprising state and local CIOs to ensure that it understands and represents their business processes and strategies relevant to homeland security. In addition, OMB, in its current review of DHS's redundant IT for consolidation and integration, has taken an initial step to evaluate DHS's component systems. According to an official in the office of the CIO, DHS has compiled an inventory of systems that represents its current enterprise architecture and will soon have a draft of its future enterprise architecture. In addition, this official anticipates having a preliminary road map of the plan to transition to the future enterprise architecture in September 2003 and estimates that DHS will have the plan itself by next winter.

In June 2002, we recommended that the federal government develop an architecture that defined the homeland security mission and the information, technologies, and approaches necessary to perform the mission in a way that was divorced from organizational parochialism and cultural differences. Specifically, we recommended that the architecture describe homeland security operations in both (1) logical terms, such as interrelated processes and activities, information needs and flows, and work locations and users; and (2) technical terms, such as hardware, software, data, communications, and security attributes and performance standards. We observed that a particularly critical function of a homeland security architecture would be to establish protocols and standards for data collection to ensure that data being collected were usable and interoperable and to tell people what they needed to collect and monitor. The CIO Council, OMB, and GAO have collaborated to produce guidance on the content, development, maintenance, and implementation of architectures that could be used in developing an architecture for DHS. In April 2003, we issued an executive guide on assessing and improving enterprise architecture management that extends this guidance.

The Clinger-Cohen Act, federal guidance, and recognized best practices provide a framework for organizations to follow to effectively manage their IT investments. This involves having a single, corporate approach governing how an organization's IT investment portfolio is selected, controlled, and evaluated across its various components, including assuring that each investment is aligned with the organization's enterprise architecture. The lack of effective processes can lead to cost, schedule, and performance shortfalls, and in some cases, to failed system development efforts. We have issued numerous reports on investment and acquisition management challenges at agencies now transferred into DHS, including INS.
INS has had long-standing difficulty developing and fielding information systems to support its program operations. Since 1990, we have reported that INS managers and field officials did not have adequate, reliable, and timely information to effectively carry out the agency's mission. For example, INS's benefit fraud investigations have been hampered by a lack of integrated information systems. Also, INS's alien address information could not be fully relied on to locate many aliens who were believed to be in the country and who might have knowledge that would assist the nation in its antiterrorism efforts. Contributing to this situation was INS's lack of written procedures and automated controls to help ensure that reported changes of address by aliens are recorded in all of INS's automated databases. Our work has identified weaknesses in INS's IT management capacities as the root cause of its system problems, and we have made recommendations to correct the weaknesses. INS has made progress in addressing our recommendations. In his written statement for a May 2003 hearing before the House Government Reform Committee, the DHS CIO stated that IT investments, including mission-specific investments, are receiving a departmentwide review. Benefits envisioned from this capital investment and control process include integrating information and identifying and eliminating duplicate applications, gaps in information, and misalignments with business goals and objectives.

Sound acquisition management is also central to accomplishing the department's mission. As one of the largest federal departments, DHS will potentially have some of the most extensive acquisition requirements in government. The new department is expected to acquire a broad range of technologies and services from private-sector companies. Moreover, DHS is faced with the challenge of integrating the procurement functions of many of its constituent programs and missions. Inherited challenges exist in several of the incoming agencies. For example, Customs has major procurement programs under way that must be closely managed to ensure that they achieve expectations. Despite some progress, we reported that Customs still lacks important acquisition management controls. For its new import processing system, Customs has not begun to establish process controls for determining whether acquired software products and services satisfy contract requirements before acceptance, nor to establish related controls for effective and efficient transfer of acquired software products to the support organization responsible for software maintenance. Agreeing with one of our recommendations, Customs continues to make progress and plans to establish effective acquisition process controls. Getting the most from its IT investment will depend on how well the department manages its acquisition activities. High-level attention to strong system and service acquisition management practices is critical to ensuring success.

The Federal Information Security Management Act of 2002 (FISMA) requires federal agencies to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of the agency, and information systems used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency.
Further, the Homeland Security Act specifically requires DHS to establish procedures to ensure the authorized use and the security and confidentiality of information shared with the department, including information on threats of terrorism against the United States; infrastructure or other vulnerabilities to terrorism; and threatened interference with, attack on, compromise of, or incapacitation of critical infrastructures or protected systems by either physical or computer-based attack. However, establishing an effective information security program may present significant challenges for DHS, which must bring together programs and agencies from throughout the government and integrate their diverse communications and information systems to enable effective communication and information sharing both within and outside the department. Since 1996, we have reported that poor information security is a widespread problem for the federal government, with potentially devastating consequences. Further, we have identified information security as a governmentwide high-risk issue in reports to the Congress since 1997—most recently in January 2003. Although agencies have taken steps to redesign and strengthen their information system security programs, our analyses of information security at major federal agencies have shown that federal systems were not being adequately protected from computer-based threats, even though these systems process, store, and transmit enormous amounts of sensitive data and are indispensable to many federal agency operations. For the past several years, we have analyzed audit results for 24 of the largest federal agencies, and our latest analyses, of audit reports issued from October 2001 through October 2002, continued to show significant weaknesses in federal computer systems that put critical operations and assets at risk. In particular, we found that all 24 agencies had weaknesses in security program management, which is fundamental to the appropriate selection and effectiveness of the other categories of controls and covers a range of activities related to understanding information security risks, selecting and implementing controls commensurate with risk, and ensuring that the controls implemented continue to operate effectively. In addition, we found that 22 of the 24 agencies had weaknesses in access controls—weaknesses that can make it possible for an individual or group to inappropriately modify, destroy, or disclose sensitive data or computer programs for purposes such as personal gain or sabotage, or in today’s increasingly interconnected computing environment, can expose an agency’s information and operations to attacks from remote locations all over the world by individuals with only minimal computer and telecommunications resources and expertise. In April 2003, we also reported that many agencies still had not established information security programs consistent with requirements originally prescribed by government information security reform legislation and now permanently authorized by FISMA. Considering the sensitive and classified information to be maintained and shared by DHS, it is critical that the department implement federal information security requirements to ensure that its systems are appropriately assessed for risk and that adequate controls are implemented and working properly. Federal information security guidance, such as that issued by the National Institute of Standards and Technology (NIST), can aid DHS in this process. 
For example, NIST has issued guidance to help agencies perform self-assessments of their information security programs, conduct risk assessments, and use metrics to determine the adequacy of in-place security controls, policies, and procedures. In addition, as we have previously reported, agencies need more specific guidance on the controls that they need to implement to help ensure adequate protection. Currently, agencies have wide discretion in deciding which computer security controls to implement and the level of rigor with which to enforce these controls. Although one set of specific controls will not be appropriate for all types of systems and data, our studies of best practices at leading organizations have shown that more specific guidance is important. In particular, specific mandatory standards for varying risk levels can clarify expectations for information protection, including audit criteria; provide a standard framework for assessing information security risk; help ensure that shared data are appropriately protected; and reduce demands for limited resources to independently develop security controls. Responding to this need, FISMA requires NIST to develop, for systems other than national security systems, (1) standards to be used by all agencies to categorize all of their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. DHS has identified implementing its information security program as a year-one objective. In continuing these efforts, it is important that DHS consider establishing processes to annually review its information security program and to collect and report data on the program, as required by FISMA and OMB. The Homeland Security Information Sharing Act, included in the Homeland Security Act of 2002, provides for the President to prescribe and implement procedures for federal agencies to share homeland security and classified information with others, such as state and local governments, through information sharing systems. Provisions of the act depict the type of information to be shared as that which reveals a threat of actual or potential attack or other hostile acts. Grand jury information; electronic, wire, or oral information; and foreign intelligence information are all included in these provisions. The National Strategy for Homeland Security also refers to the need for states to use a secure intranet to increase the flow of classified federal information to state and local entities. According to the strategy, this network would provide a more effective way to share information about terrorists. The strategy also refers to putting into place a “collaborative classified enterprise environment” to allow agencies to share information in their existing databases. To ensure the safe transmittal of sensitive, and, in some cases, classified, information vertically among everyone from intelligence entities, including the CIA, to local entities, such as those involved in emergency response and law enforcement, as well as horizontally across the same levels of government, requires developing and implementing communications networks with adequate security to protect the confidentiality, integrity, and availability of the transmitted information. 
Furthermore, these communications networks must be accessible to a variety of parties, from federal agencies to state and local government entities and some private entities. Secure networks for sharing sensitive information between state and federal entities have been implemented and are being used. For example, the National Law Enforcement Telecommunication System (NLETS) links all states and many federal agencies to the FBI's National Crime Information Center (NCIC) network for the exchange of criminal justice information. Another law enforcement system called the Regional Information Sharing System (RISS) links thousands of local, state, and federal agencies to Regional Organized Crime Information Centers. Information sharing networks for the purpose of sharing sensitive information with some federal agencies also exist within the intelligence community. Other agencies are also engaged in efforts to provide homeland security networking and information management support for crisis management activities. Department of Defense officials have also stated that the Army National Guard's network GuardNet, which was used to communicate among the states and the District of Columbia during the September 11 terrorist attacks, is being considered for homeland security mission support. For several years, the states have also been working on efforts to establish an information architecture framework for government information systems integration. There also appear to be many new efforts under way to implement secure networks. In addition, according to the recently published cyberspace security strategy, DHS intends to develop a national cyberspace security response system, the Cyber Warning Information Network (CWIN), to provide crisis management support to government and nongovernment network operation centers. CWIN is envisioned as providing private and secure network communications for both government and industry for the purpose of sharing cyber alert and warning information. Moreover, the National Communications System, one of the 22 entities that were merged into DHS, has implemented a pilot system, the Global Early Warning Information System (GEWIS), which will measure how critical areas of the Internet are performing worldwide and then use that data to notify government, industry, and allies of impending cyber attacks or possible disturbances. It was also recently reported that the Justice Department and the FBI are expanding two existing sensitive but unclassified law enforcement networks to share homeland security information across all levels of government. When fully deployed, their Antiterrorism Information Exchange (ATIX) will provide law enforcement agencies at all levels access to information. Law enforcement agencies also can use ATIX to distribute security alerts to private-sector organizations and public officials who lack security clearances. Users, who will have different access levels on a need-to-know basis, will include a broad range of public safety and infrastructure organizations, including businesses that have homeland security concerns and duties. They will have access to a secure E-mail system via a secure Intranet, which the FBI and DHS will use to deliver alerts to ATIX users. The FBI and other federal agencies, including DHS, will link to ATIX via Law Enforcement Online, the bureau's system for sensitive-but-unclassified law enforcement data that provides an encrypted communications service for law enforcement agencies on a virtual private network.
The second Department of Justice and FBI network, the Multistate Antiterrorism Regional Information Exchange System, will enable crime analysts working on terrorism investigations to quickly check a broad range of criminal databases maintained by federal, state, and local agencies. DHS reportedly is establishing secure videoconferencing links with emergency operations centers in all 50 states, as well as two territories and the District of Columbia. Also, the DHS CIO has stated that a major initiative in implementing the department's IT strategy for providing the right information to the right people at all times is establishing the DHS Information Sharing Network Pilot project. Moreover, he has set 2005 as a milestone for DHS to build a "network of networks." However, at this time, we do not have information on these projects or the extent to which they will rely on existing networks. It is also not clear how the DHS "network of networks" architecture will work with the state architecture being developed by the National Association of State CIOs. As we have previously reported, the new department has the challenge of developing a national homeland security performance focus, which relies on related national and agency strategic and performance planning efforts of the Office of Homeland Security, OMB, and the other departments and agencies. Indeed, the individual planning activities of the various component departments and agencies represent a good start in the development of this focus. However, our past work on implementation of the Government Performance and Results Act (GPRA) has highlighted the ongoing difficulty that many federal departments and agencies have in setting adequate performance goals, objectives, and targets. Accordingly, attention is needed to developing and achieving appropriate performance expectations and measures for information sharing and to ensuring that there is linkage between DHS's plans, other agencies' plans, and the national strategies regarding information sharing. Ensuring these capabilities and linkages will be vital in establishing comprehensive planning and accountability mechanisms that will not only guide DHS's efforts but also help assess how well they are working. As we previously reported, one of the barriers that the new department faces in establishing effective homeland security is achieving interagency cooperation, a challenge largely attributed to "turf" issues among the 22 component agencies subsumed by the new department. Strong and sustained commitment by agency leaders would provide performance incentives to managers and staff to break down cultural resistance and encourage more effective information sharing pertaining to homeland security. Moreover, agency leaders have a wide range of tools at their disposal for enforcing and rewarding cooperative efforts, including performance bonuses for senior executives and incentive award programs for staff. Our studies of other cross-cutting federal services with similar "turf" problems have also shown that agency performance plans, which are required by GPRA, offer a good avenue for developing incentives to cooperate. Specifically, agencies can establish goals in their performance plans for participation in cross-cutting programs and report on their progress in meeting these goals to the Congress. The Congress could also build similar incentives into budget resolutions. Shared programmatic goals and metrics would also encourage cooperation and coordination.
Agencies subsumed by DHS should all participate in the development of goals, milestones, and metrics to measure progress and success, and such indicators should be clearly articulated and endorsed by senior management. Such goals and metrics must be carefully chosen since how performance is measured greatly influences the nature of the performance itself; poorly chosen metrics may lead to unintended or counterproductive results. However, visible, clearly articulated and carefully chosen shared goals and metrics can effectively overcome “turf” issues. Developing metrics to measure the success of these activities is critical to ensuring a successful effort. Similar indicators more directly related to information sharing could be developed. Human capital is another critical ingredient required for ensuring successful information sharing for homeland security. The cornerstones to effective human capital planning include leadership; strategic human capital planning; acquiring, developing, and retaining talent; and building results-oriented organizational cultures. The homeland security and intelligence communities must include these factors in their management approach in order to benefit from effective collaboration in this critical time. As we have previously reported, the governmentwide increase in homeland security activities has created a demand for personnel with skills in areas such as IT, foreign language proficiencies, and law enforcement, without whom critical information has less chance of being shared, analyzed, integrated, and disseminated in a timely, effective manner. We specifically reported that shortages in staffing at some agencies had exacerbated backlogs in intelligence and other information, adversely affecting agency operations and hindering U.S. military, law enforcement, intelligence, counterterrorism, and diplomatic efforts. We have also previously reported that some of the agencies that moved into DHS have long-standing human capital problems that will need to be addressed. One of these challenges has been the ability to hire and retain a talented and motivated staff. For example, we reported that INS has been unable to reach its program goals in large part because of such staffing problems as hiring shortfalls and agent attrition. We also reported that several INS functions have been affected by the lack of a staff resource allocation model to identify staffing needs. We concluded then that it was likely that increased attention to the enforcement of immigration laws and border control would test the capacity of DHS to hire large numbers of inspectors for work at our nation’s border entry points. Moreover, we reported that other agencies being integrated into DHS were also expected to experience challenges in hiring security workers and inspectors. For example, we reported that the Agriculture Department, the Customs Service, INS, and other agencies were all simultaneously seeking to increase the size of their inspections staffs. To overcome its significant human capital shortfalls, DHS must develop a comprehensive strategy capable of ensuring that the new department can acquire, develop, and retain the skills and talents needed to prevent and protect against terrorism. This requires identifying skill needs; attracting people with scarce skills into government jobs; melding diverse compensation systems to support the new department’s many needs; and establishing a performance-oriented, accountable culture that promotes employee involvement and empowerment. 
In February, the DHS CIO acknowledged the lack of properly skilled IT staff within the component agencies. Challenges facing DHS in this area, he stated, include overcoming political and cultural barriers, leveraging cultural beliefs and diversity to achieve collaborative change, and recruiting and retaining skilled IT workers. He acknowledged that the department would have to evaluate the talent and skills of its IT workforce to identify existing skill gaps. He further stated that a critical component of DHS's IT strategic plan would address the actions needed to train, reskill, or acquire the necessary skills to achieve a world-class workforce. He committed to working closely with the department's Chief Human Capital Officer and with the Office of Personnel Management to achieve this goal. He set July 2003 as a milestone for developing a current inventory of IT skills, resources, and positions and September 2003 as the targeted date for developing an action plan. It is important to note that accountability is also a critical factor in ensuring the success of the new department. The oversight entities of the executive branch—including the inspectors general, OMB, and the Office of Homeland Security—have a vital role to play in ensuring expected performance and accountability. Likewise, congressional committees and GAO, as the investigative arm of the legislative branch, with their long-term and broad institutional roles, also have roles to play in overseeing that the new department meets the demands of its homeland security mission. In summary, information sharing with and between all levels of government and the private sector must become an integral part of everyday operations if we are to be able to identify terrorist threats and protect against attack. As such, information sharing is an essential part of DHS's responsibilities and is critical to achieving its mission. To implement these responsibilities, DHS will need to develop effective information sharing systems and other information sharing mechanisms. The department will also need to develop strategies to address other challenges in establishing its organization and information architecture and in developing effective working relationships, cooperation, and trust with other federal agencies, state and local governments, and the private sector. Messrs. Chairmen, this concludes my statement. I would be happy to answer any questions that you or members of the subcommittees may have at this time. For information about this statement, please contact Robert Dacey, Director, Information Security Issues, at (202) 512-3317, or William Ritt, Assistant Director, at (202) 512-6443. You may also reach them by E-mail at [email protected] or [email protected]. Individuals who made key contributions to this testimony include Mark Fostek, Sophia Harrison, and Barbarol James.

The Homeland Security Act of 2002, which created the Department of Homeland Security (DHS), brought together 22 diverse organizations to help prevent terrorist attacks in the United States, reduce the vulnerability of the United States to terrorist attacks, and minimize damage and assist in recovery from attacks that do occur. To accomplish this mission, the act established specific homeland security responsibilities for the department, which included sharing information among its own entities and with other federal agencies, state and local governments, the private sector, and others.
GAO was asked to discuss the significance of information sharing in fulfilling DHS's responsibilities, emphasizing GAO's related prior analyses and recommendations for improving the federal government's information sharing efforts. DHS's responsibilities include coordinating and sharing information related to threats of domestic terrorism within the department and with and between other federal agencies, state and local governments, the private sector, and other entities. To accomplish its missions, DHS must, for example, access, receive, and analyze law enforcement information, intelligence information, and other threat, incident, and vulnerability information from federal and nonfederal sources and analyze such information to identify and assess the nature and scope of terrorist threats. DHS must also share information both internally and externally with agencies and law enforcement on such things as goods and passengers inbound to the United States and individuals who are known or suspected terrorists and criminals. GAO has made numerous recommendations related to information sharing, particularly as it relates to fulfilling DHS's critical infrastructure protection responsibilities. Although improvements have been made, more efforts are needed to address the following challenges, among others, that GAO has identified: (1) developing a comprehensive and coordinated national plan to facilitate information sharing on critical infrastructure protection; (2) developing productive information sharing relationships between the federal government and state and local governments and the private sector; and (3) providing appropriate incentives for nonfederal entities to increase information sharing with the federal government and enhance other critical infrastructure protection efforts. Through its prior work, GAO has identified critical success factors and other key management issues that DHS should consider as it establishes systems and processes to facilitate information sharing among and between government entities and the private sector. These success factors include establishing trust relationships with a wide variety of federal and nonfederal entities that may be in a position to provide potentially useful information and advice on vulnerabilities and incidents. Further, as part of its information technology management, DHS should continue to develop and implement an enterprise architecture to integrate the many existing systems and processes required to support its mission and to guide the department's investments in new systems to effectively support homeland security in the coming years. Other key management issues include ensuring that sensitive information is secured, developing secure communications networks, integrating staff from different organizations, and ensuring that the department has properly skilled staff.
Congress, concerned about the burden on grantees of multiple, varying requirements imposed by different grant programs, passed P.L. 106-107 in 1999. The act's objective is to improve the effectiveness and performance of federal financial assistance programs, simplify federal financial assistance application and reporting requirements, improve the delivery of services to the public, and facilitate greater coordination among those responsible for delivering such services. The act required agencies to establish common applications, systems, and uniform rules to improve the effectiveness and performance of federal grants with the goal of improved efficiency and delivery of services to the public. Under P.L. 106-107, OMB is required to direct, coordinate, and assist federal agencies in developing and implementing a common application and reporting system, including electronic processes with which a nonfederal entity can apply for, manage, and report on the use of funds from multiple grant programs that serve similar purposes but are administered by different federal agencies. The act sunsets in November 2007. The complexity and diversity of the grants system make streamlining a difficult endeavor. Multiple federal entities are involved in grants administration; the grantor agencies have varied grants management processes; the grantee groups are diverse; and grants themselves vary substantially in their types, purposes, and administrative requirements. The federal grant system continues to be highly fragmented, potentially resulting in a high degree of duplication and overlap among federal programs. Hundreds of federal grant programs implement various domestic policies and have administrative requirements that may be duplicative, burdensome, or conflicting—which can impede the effectiveness of grants programs. Multiple federal entities are involved in grants management. The Federal Grant and Cooperative Agreement Act of 1977 gives OMB the authority to issue supplementary interpretive guidelines to promote consistent and efficient use of grant agreements. OMB publishes this guidance to federal agencies in OMB circulars, and federal agencies issue regulations implementing the OMB guidance. The General Services Administration is the lead agency in charge of disseminating information on funding opportunities. It publishes, in both electronic and print form, the Catalog of Federal Domestic Assistance, a searchable database of federal financial assistance programs. There is substantial diversity among the federal agencies that administer grants. Some agencies administer many grants through multiple, decentralized subagencies, while other agencies have small, centralized grant-making offices that administer only a few, small grant programs. For example, in fiscal year 2003, HHS administered 282 grant programs that distributed approximately $246 billion through its 16 subagencies, while the National Endowment for the Arts administered 3 grant programs that distributed approximately $95 million. Grant programs are diverse in their structure and purpose. Grants can be grouped into three types based on the amount of discretion given to the grantee for the use of funds. Each type strikes a different balance between the desire of the federal grantor that funds be used efficiently and effectively to meet national objectives and the desire of the grantee to use the funds to meet local priorities and to minimize the administrative burdens associated with accepting the grant.
Categorical grants allow the least amount of recipient discretion, general revenue-sharing grants the most, and block grants an intermediate amount. Grant funds may also be grouped by their method of allocating funds, that is, by formula, through discretionary project grants, or both. Formula grants allocate funds based on distribution formulas prescribed by legislation or administrative regulation. Project grants are generally awarded on a competitive basis to eligible applicants. Grant programs fund a variety of activities, including training, research, planning, evaluation, capacity building, demonstration projects, construction, and service provision in many different areas, including health care, education, law enforcement, and homeland security. The diversity of grant programs is matched by the diversity of grant recipients. Grant announcements identify the eligible recipients, which may include states and their agencies, local governments, tribal governments, nonprofit organizations, research institutions, and individuals. The opportunities to streamline grants administration differ throughout the life cycle of a grant. While there is substantial variation among grants, generally grants follow the life cycle as shown in figure 1: announcement, application, award, postaward, and closeout. Once established through legislation, which may specify particular objectives, eligibility, and other requirements, a grant program may be further defined by grantor agency requirements. For competitive grant programs, the public is notified of the grant opportunity through an announcement, and potential grantees must submit applications for agency review. In the awards stage, the agency identifies successful applicants or legislatively defined grant recipients and awards funding. The postaward stage includes payment processing, agency monitoring, and grantee reporting, which may include financial and performance information. The closeout phase includes preparation of final reports, financial reconciliation, and any required accounting for property. Audits may occur multiple times during the life cycle of the grant and after closeout. To implement P.L. 106-107's requirement to improve the effectiveness and performance of federal grants, a common plan was developed, and most, but not all, grant-making agencies have submitted reports annually on their progress toward this plan as required by the law. The work groups have identified several changes that should be made, but many of these are still in the developmental or approval stages. One particularly extensive effort—the development of a Web portal called Grants.gov that represents a common face to grantees—has enabled grantees to identify relevant grant opportunities and, to a limited extent, apply electronically for grants. For the later phases of the grant life cycle, a new initiative, the Grants Management Line of Business, is under way; it will encompass all phases of the grant life cycle and specifically address simplifying the administration and management of grants. P.L. 106-107 requires that, under OMB leadership, agencies develop common applications, systems, and administrative rules to improve the effectiveness of federal grants. To implement this requirement, a cross-agency committee established cross-agency work groups. The work groups then identified needed changes and developed a common plan for implementing P.L. 106-107.
Twenty-six federal grant-making agencies agreed to use this common plan to meet the law's requirements, since meeting its objectives required them to work together to a large extent. The plan, submitted to Congress and OMB in May 2001, was developed under the oversight of the initial interagency governance structure established to implement P.L. 106-107. A series of five public consultation meetings was held with representatives from states, local governments, Native American tribes and tribal organizations, universities and nonprofit organizations that conduct research, and other nonprofit organizations. Comments from these meetings were considered in developing the plan. The common plan contained goals and objectives intended to meet the requirements of P.L. 106-107. It included progress, accomplishments, and planned activities for streamlining and simplifying the award and administration of federal grants. The plan addressed the life cycle of the grant process; supporting processes, systems, and standards; and other issues. Some specific objectives included (1) streamlining, simplifying, and improving announcements of funding opportunities and related business processes, application requirements and procedures, and award documents; (2) streamlining and simplifying standard and unique report forms, allowing for electronic submission of reports, achieving greater uniformity in federal business processes for reporting, and improving reporting by recipients; (3) simplifying and standardizing, to the extent appropriate, general administrative requirements and agency treatment of them in the terms and conditions of award; and (4) fully developing and implementing a portal for identifying and applying for grants, and ensuring that any revised electronic data standards are interoperable and present a common face to grant-making agencies, applicants, and recipients. The common plan also included some process improvements that began before passage of P.L. 106-107 and were completed prior to adoption of the plan or are still continuing today. For example, since 1998 the federal government has required grant-making agencies to transition from various payment systems to one of three designated systems. The common plan incorporated objectives and milestones directly related to such past activities. The plan also built on successful models resulting from earlier initiatives of individual agencies or interagency groups. For example, one objective of the common plan was to ensure that federal agencies' grant financial systems comply with requirements established by the Joint Financial Management Improvement Program. Annual governmentwide progress reports describe the collaborative efforts of the 26 federal agencies to streamline and simplify the award and administration of federal grants. Each agency also reports annually on its progress implementing the plan, although not all agencies have regularly submitted these reports. The governmentwide report includes the federal government's steps toward simplification of the grant policy framework. For instance, the establishment of a central location for OMB guidance to federal agencies and agency regulations implementing that guidance will make it easier for applicants and recipients to find and follow administrative requirements.
It also includes completed initiatives, such as the development and use of a standard format for agencies' funding announcements, which aims to make it easier for potential applicants to quickly find specific information in the announcements. P.L. 106-107 requires each federal grant-making agency to provide an annual progress report that evaluates its performance in meeting the common plan's goals and objectives. However, only 22 of the 26 agencies have submitted their 2004 annual reports to Congress. (See app. I for information on agencies submitting reports for 2002 to 2004.) Agencies have reported progress in implementing some streamlining activities. For example, HHS has worked toward internal consolidation from nine grant management systems to two, one primarily supporting research grants and the other primarily supporting nonresearch, or service, grants. Another agency, the National Science Foundation, reported it is conducting a comprehensive business analysis that will highlight areas where grant processes can be streamlined and simplified. Also, the National Endowment for the Humanities reported it has streamlined the internal agency clearance process, which is the mechanism by which all grant applications' guidelines and forms are reviewed and updated every year. Some factors, both internal and external to the grant-making agencies, may have slowed agencies' progress in fully implementing streamlining activities and have contributed to the lack of progress in adopting common governmentwide systems. The different business processes at various agencies were one reason agencies reported hesitation to migrate to a common grant management system. For example, the National Science Foundation reported that it conducts peer reviews of broad research grant programs, which require an entirely different type of management system from that needed by the Department of Transportation, which generally manages noncompetitive formula grants to state and local governments. The structure and size of an agency's grant management program is another factor that may affect the agency's progress toward grant streamlining. For example, some smaller agencies, such as the National Endowment for the Humanities, which has a highly centralized grant management operation, reported being able to more quickly adopt some of the governmentwide grant streamlining initiatives. However, other agencies that manage grant programs from many different operating divisions may take longer to make changes due to the decentralized organizational structure and the larger number of grant programs. Lastly, some agencies had existing online grant management systems before the passage of P.L. 106-107 and the development of Grants.gov. The integration of preexisting grant streamlining achievements in some agencies, such as the common announcement form adopted from National Science Foundation and National Endowment for the Humanities work, allows those agencies to realize more immediate benefits because much of the work was completed prior to implementation of the common plan. Agencies, such as the Department of Transportation, that have not fully implemented internal streamlining initiatives need to do so before they can fully benefit from the approaches adopted by other agencies or the cross-agency work groups.
In developing these performance measures, the agencies were to consider input from applicants, recipients, and other stakeholders. The annual agency progress reports did not include any such performance measures or evaluations. The agencies' progress reports varied in detail and included narratives of some of the actions taken to meet identified goals and objectives. Comparing the progress of federal agencies with one another is difficult because of the missing reports and the lack of performance measures. After P.L. 106-107 was enacted, several cross-agency work groups were created to facilitate the law's implementation; while some of their developments have been implemented, others are still in progress. The teams, which focused on different phases of the life cycle of grants, identified initiatives that should be undertaken. To identify priorities for action, the teams relied on comments from the grantee community on what streamlining should occur and on their own knowledge of grants management. With many potential areas on which to focus, some work group representatives commented to us that they addressed the "low-hanging fruit," preferring to work on those tasks that were more readily accomplished while yielding strong results. The current work groups and their responsibilities are shown in table 1. In addition, some groups have subgroups that have taken responsibility for key products. The work groups are supported to some extent by additional contract staff funded initially by the Chief Financial Officers Council. The cross-agency work groups have accomplishments that are expected to streamline grant activity for grantees, as described in table 2. For example, the Pre-Award Work Group focused on reducing the time a grantee must spend searching for information on grants. One concern was inconsistent announcement formats. The team believed that a consistent format for grant announcements would save time and reduce frustration for grantees that applied to different programs. The group also developed the standard set of data elements for the Grants.gov "find" feature, thereby ensuring that users of Grants.gov will find similar information in the same places for different grant descriptions. The Audit Work Group developed and distributed a pamphlet clarifying the single audit process. It also ensured that OMB Circular A-133, Compliance Supplement, was updated annually. This update should ensure that grantees' auditors can more easily identify the criteria that they should use as they assess whether grantees are in compliance with grant requirements. One area on which the work groups made progress was establishing a common electronic system through which information on available grants could be found and applicants could apply for grants, now called Grants.gov. Before that system existed, identifying grant opportunities required searching information from many agencies and applying for them using a variety of application forms and processes. The work groups developed a common format for the full announcement to be used governmentwide and a related set of data elements for an electronic synopsis of the announcement. Grants.gov, now administered by a program management office based in HHS, has enabled potential grantees to search open grant opportunities by these key data elements, such as the type of activity funded (e.g., education or the environment) and the agency providing funds.
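To illustrate how a standard set of synopsis data elements can support this kind of searching, the following sketch shows, in Python, a simplified opportunity record and a filter over it. The field names, sample values, and function are purely illustrative assumptions and are not the actual Grants.gov data elements or interfaces.

    from dataclasses import dataclass

    @dataclass
    class OpportunitySynopsis:
        # Hypothetical standard data elements for an electronic synopsis of a funding announcement.
        opportunity_number: str   # agency-assigned identifier
        title: str                # short descriptive title
        agency: str               # agency providing the funds
        funding_activity: str     # type of activity funded, e.g., "Education" or "Environment"
        close_date: str           # application deadline (ISO date)

    def find_opportunities(synopses, agency=None, funding_activity=None):
        # Return the synopses that match the requested agency and/or activity category.
        matches = []
        for s in synopses:
            if agency is not None and s.agency != agency:
                continue
            if funding_activity is not None and s.funding_activity != funding_activity:
                continue
            matches.append(s)
        return matches

    # Example: a potential grantee browsing for education-related opportunities.
    catalog = [
        OpportunitySynopsis("ED-2005-0001", "Reading Improvement Demonstration",
                            "Department of Education", "Education", "2005-06-30"),
        OpportunitySynopsis("EPA-OW-2005-0002", "Watershed Protection Grants",
                            "Environmental Protection Agency", "Environment", "2005-07-15"),
    ]
    education_grants = find_opportunities(catalog, funding_activity="Education")

Because every announcement synopsis carries the same elements, the same filter works across agencies and programs, which is the practical benefit of the standard data set described above.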
Grantees also can request notification of grant opportunities that meet certain parameters that they identify. Grant opportunities were initially provided on the system in February 2003, and in November 2003, OMB required that federal agencies post information on all discretionary grant-funding opportunities at the Web site. The Grants.gov Program Management Office reports that since October 2003 all 26 grant-making agencies have listed their discretionary grant opportunities. The office also reports high growth in usage of the portal; Grants.gov reports that in November 2004, the "find" activity on the site received about 2.2 million page requests, up from about 633,000 in November 2003, and applicant e-mail notifications have averaged 600,000 to 700,000 weekly. More recently, Grants.gov has provided the capability to apply for grants electronically at a common portal and, to some extent, use common forms across agencies. Applicants can download an application package; complete the application off-line; and submit it electronically to Grants.gov, which transmits the application to the funding agency. Grant-making agencies work with the program management office staff to identify the forms needed, sometimes using the same forms that other programs and other agencies use. Grant applicants are notified electronically when agencies receive their applications. In some cases, agencies can download the grant application data directly to their own internal systems, thus eliminating the need for staff to input data. Use of the online applications, however, has been slow to grow. As of April 6, 2005, 6 of the 26 key grant-making agencies had not yet posted "apply" packages, and about 2,600 electronic applications had been received. Use of the system requires agencies to set up internal systems and, to some extent, have their forms loaded onto the site. Grantees must also complete a registration process, which we were told is time-consuming and might be viewed by some applicants as intimidating but is necessary, according to OMB officials, to ensure privacy and to maintain the security of the system. Funding for Grants.gov has shifted from obtaining contributions from key partners to obtaining a set amount from grant-making agencies. For fiscal years 2002 through 2004, Grants.gov was funded by contributions totaling about $29.4 million. Beginning with fiscal year 2005, it will be funded with payments from 26 grant-making agencies, based on an agency's total grant dollars awarded. For 2005 and 2006, the 6 large agencies will each be assessed $754,467, the 10 medium agencies $452,680 each, and the 10 small agencies $226,340 each, for a total of about $11,300,000 each year. Appendix II provides more detailed information on Grants.gov and individual agency information on progress toward implementing its "apply" component. Several reforms are partially under way but have not yet completed the approval process or been implemented, as shown in table 2. For example, a separate standard application form for research (and related) grants has been proposed, which will ensure that multiple agencies will be able to use the same application. This should simplify applications for grantees who apply for grants at multiple agencies, but this form is not yet approved. Similarly, the Post-Award Work Group has developed a common Performance Progress Report for nonresearch grants and has received agency comments on the proposed form.
The group expects that this will address the concern that too many different progress reports are in use, which poses a substantial administrative burden for grantees. The work group also developed several common forms, such as a Real Property Report (which addresses real property built with grant funds) and a federal financial report, which, as of December 22, 2004, was with OMB for approval. The Mandatory Work Group is developing a set of core data elements that could be used to post mandatory awards to the Grants.gov Web site, which an OMB official commented would enable potential contractors to be aware of funds that states and other entities were receiving. Additionally, based on an initiative begun by the Pre-Award Work Group, OMB has moved one of its circulars, which provides guidance, to a newly created Title 2 of the Code of Federal Regulations and plans for agencies to eventually colocate their grant regulations in the same title. Although the Grants.gov portal has provided a common, electronic system for helping grantees identify and apply for grants, development of common, electronic systems for managing later stages of the grant life cycle has not progressed. When originally planned, the Grants.gov portal was envisioned as providing a common face to grantees for managing all phases of grants, from grantees' identification of appropriate grant opportunities through application, awarding, and management of the grants. However, in early 2004, OMB instructed Grants.gov officials to cease their efforts to develop common systems for the grant phases beyond application and to concentrate on ensuring that electronic applications were fully implemented at all grant-making agencies, since some agencies still were not participating or were participating at minimal levels. In March 2004, OMB initiated a governmentwide analysis of five lines of business that would support the President's Management Agenda goal of expanding electronic government, with one of them focusing on grants management. The team was to draft and finalize common solutions and a target architecture and present them for the fiscal year 2006 budget review. The grants management initiative was headed by representatives from the Department of Education and the National Science Foundation. The Grants Management Line of Business initiative has the specific objective of developing a governmentwide solution to support end-to-end grants management activities that promote citizen access, customer service, and agency financial and technical stewardship. To inform its analysis, the team requested and analyzed information from interested parties on possible solutions and approaches. The team also surveyed grant-making agencies on their internal grant-making systems and found that about 40 different internal agency systems were operating, ranging from systems operating with almost no automation to systems that are fully automated. In evaluating the information, the team did not identify any end-to-end business or technical solution for grants management that would be able to meet the needs of all 26 agencies without large investments in configuration and customization. Further, it found that while the early stages of the grant life cycle (i.e., connecting potential grantees with grant opportunities and the application process) were already handled consistently across grantor agencies, postaward activities are handled less consistently across agencies and would require flexibility in business rules.
As a result, the team is proposing a consortia-based approach to continue streamlining and consolidating the end-to-end grant management process, but development of this system is not yet under way. It would use Grants.gov as a "storefront" to support grantees and would expand it beyond the current processes to include additional functions that interface with the grantees. Rather than develop one system that all agencies would use to manage grants internally, consortia of agencies with similar systems, such as agencies that primarily fund research grants, would be formed. Government, industry, or both would provide information technology service centers for agencies throughout the grant life cycle, an approach that is expected to reduce or eliminate the costs of multiple agencies developing and maintaining grants management systems. As P.L. 106-107 and the common plan emphasized, coordination among the agencies and with grantees in the planning and implementation of grant-streamlining initiatives can increase the likelihood that the standard processes and policies developed will meet the diverse needs of all the stakeholder groups. While the agencies have established cross-agency processes to facilitate coordination activities, progress has been hampered by frequent changes in the groups that are implementing and overseeing P.L. 106-107. The various grant-streamlining initiatives have had different levels of coordination activities with grantees. The P.L. 106-107 work groups solicited input from the grantee community during their early planning stages, but do not have ongoing coordination activities. The Grants.gov initiative solicits ongoing input from grantees in a variety of ways. It is not yet clear if the Grants Management Line of Business initiative will include coordination activities with grantee groups. P.L. 106-107 requires OMB to direct and coordinate the federal agencies in establishing an interagency process for achieving grant streamlining and simplification. Furthermore, the act directs the federal agencies to actively participate in this interagency grant-streamlining and simplification process. Because the agencies are developing common policies and processes to meet their diverse grants management needs, a well-implemented interagency process can improve the likelihood of success of the grant-streamlining initiatives. In examining coordination issues, we have identified key practices that affect the likelihood of success of cross-organizational initiatives. These practices include establishing a collaborative organizational structure, maintaining collaborative relationships, and facilitating communication and outreach. A collaborative organizational structure, characterized by strong leadership and a comprehensive structure of participants' roles and responsibilities, can facilitate coordination activities. As shown in figure 2, OMB established several groups to lead and coordinate the effort to implement P.L. 106-107. The act allows OMB to designate a lead agency and establish interagency work groups to assist OMB in implementing the requirements of the act. OMB designated HHS as the lead agency for the implementation of P.L. 106-107. In the spring of 2000, OMB charged the Grants Management Committee of the Chief Financial Officers Council with coordinating and overseeing the governmentwide implementation of P.L. 106-107. The Grants Management Committee included two representatives from each of the grant-making agencies.
The committee established four working subcommittees: the Pre-Award Work Group, the Post-Award Work Group, the Audit Oversight Work Group, and the Electronic Work Group. In addition, the committee established the General Policy and Oversight Team, which was co-chaired by OMB and HHS and included the chairs of each of the work groups. The team was intended to oversee the progress of the work groups and examine issues that cut across the responsibilities of the individual work groups. According to officials involved with P.L. 106-107 implementation, the Grants Management Committee was ineffective, creating a stumbling block for the initiative. In May 2004, the Grants Executive Board assumed responsibility for the coordination and oversight of P.L. 106-107 initiatives. In an update to its charter, the Grants Executive Board (previously the Grants.gov Executive Board) expanded its oversight to include both the Grants.gov initiative and the P.L. 106-107 initiative. The Grants Executive Board has 13 members: one representative from each of the 11 larger grant-making agencies and two seats that rotate among the other 15 grant-making agencies. The Grants Executive Board meets monthly and, with the assistance of the HHS-led grant streamlining Program Management Office, oversees the work of the interagency grant streamlining work groups. The board's oversight duties include reviewing work group recommendations to determine if they should be referred to OMB for governmentwide implementation, defining accountability and reporting requirements to be met by the work groups, and preparing the annual progress reports for Congress. The Grants Executive Board also oversees the Grants.gov initiative, which is charged with implementing the grant-streamlining policies in the preaward phase of grants administration. The P.L. 106-107 Planning and Oversight Committee is the coordinating body for the grant-streamlining work groups and advises the Grants Executive Board. Its membership consists of the chairs of each of the work groups, a representative of the Grants.gov Program Management Office, the P.L. 106-107 Program Manager, and an OMB representative. Agency volunteers staff the work groups. Volunteer staffing is a challenge for the work groups because the volunteers maintain their regular agency responsibilities. According to work group chairs, the volunteer staff members are dedicated, knowledgeable, and experienced in grants policy and processes. HHS selects the chair of each work group, but does not limit the size of the work groups so that all interested agencies may participate. According to the P.L. 106-107 Program Manager, not all agencies are participating in the work groups. Agencies that do not participate will not have input into the design of governmentwide grant policies, increasing the risk that the new policies will not meet the needs of all grant-making agencies. Interagency efforts toward a second key element of coordination—maintaining collaborative relationships—have been mixed. The major elements of maintaining collaborative relationships include a shared vision among participants and formal agreements with a clear purpose, common performance outputs, and realistic performance measures. The agencies helped to establish a cooperative, shared vision by jointly developing the initial implementation plan, which establishes goals and objectives to meet the requirements of P.L. 106-107.
However, while the plan outlines preliminary steps toward achieving its objectives, it does not lay out a comprehensive approach beyond those first steps. Furthermore, the time targets in the plan are primarily short-term targets related to preliminary steps. The annual cross-agency progress report can be a tool to maintain the shared vision established in the initial plan. According to work group leaders, the work group volunteers from the agencies are committed to the goals of grant streamlining and simplification. In addition to the cross-agency progress report, each agency is required to submit an annual agency progress report. This requirement has the potential to be an effective management tool for monitoring the compliance and progress of individual agencies. However, because the reports neither frame annual achievements in the context of a comprehensive plan nor use performance measures to track progress, they are not an effective management tool. Furthermore, not all the agencies have submitted their annual reports, and OMB's position is that it is not its role to police agency compliance with this requirement. Because the agencies have not developed a comprehensive plan and are not reporting on their progress using common performance measures, they are less likely to maintain the shared vision that was established with the common plan. Implementation of a third key element of coordination practices, communication and outreach, has not always been effective. Leaders of the initiatives hold regular meetings to share information with one another. For example, the P.L. 106-107 Planning and Oversight Committee meets monthly to facilitate coordination between the work groups. However, the Audit Oversight Work Group Chair position has been vacant for the past 18 months, so although the audit subgroups continue their work, they have little contact with the other grant-streamlining groups. Informal coordination between the various grant-streamlining initiatives occurs because the same people often serve on multiple committees. Outreach from the initiatives to the agencies has also not always been effective. For example, the Post-Award Work Group sends proposals or draft reports to the agencies, but they do not always reach the necessary people because some agencies are very large and have complex organizational structures. The future relationship between the Grants Management Line of Business, the P.L. 106-107 work groups, and the Grants.gov Program Management Office is unclear. This management situation appears to have hampered progress. OMB plans to form a Grants Governance Committee to oversee three program management offices working on grant streamlining and simplification. The Grants Governance Committee will oversee the Grants.gov initiative, the P.L. 106-107 initiative, and the Grants Management Line of Business initiative. However, there will be a separate program management office for each initiative, and there appears to be overlap between the responsibilities of the three initiatives. Representatives of two of the work groups reported that there has been little communication between the Line of Business initiative and the P.L. 106-107 work groups. Work group members said they are reluctant to go forward with new projects because they do not know if their priorities will be consistent with those of the Line of Business initiative.
For example, the Line of Business initiative appears to be planning to rely on Grants.gov for its "find" and "apply" functions, but it is not yet clear if Grants.gov will be the portal used by the grantee in the later stages of the grant life cycle. In anticipation of the start of the Line of Business initiative, OMB has directed Grants.gov to focus its efforts on the functionality of the "find" and "apply" functions. The Grants.gov Program Manager reported that, accordingly, the Grants.gov office is holding off on efforts to incorporate processes related to the later stages of the grants life cycle. Because grant management and reporting rely on information gathered in the "apply" stage, there should be some integration between these functions. P.L. 106-107 obligates OMB and the agencies to consult with representatives of nonfederal entities during the development and implementation of grant-streamlining plans, policies, and systems. In addition to its general directive to consult and coordinate with grantees, the act requires the agencies to publish the implementation plan in the Federal Register for public comment; hold public forums on the plan; and cooperate with grantees to define goals, objectives, and performance measures related to the objectives of the act. In prior work, we have found that collaborative activities include communication strategies that facilitate two-way communication among the project team, partners, and other stakeholders, and that outreach programs keep those affected by the initiative informed of new developments and provide structured means for feedback and questions. In their early work, the groups established by OMB and its lead grant-streamlining agency, HHS, undertook efforts to coordinate and consult with the grantee communities. The Grants Management Committee created a Web site that provided information about the work groups' activities in implementing the act and invited public input. Individual agencies also sought input through invitations to comment posted on their Web sites. In the fall of 2000, the Grants Management Committee held a series of five interagency public consultation meetings with (1) states, (2) local governments, (3) Native American tribes and tribal organizations, (4) universities and nonprofit organizations that conduct research, and (5) other nonprofit organizations. Throughout this process, the teams built a database of the public comments and used them to develop the common plan. The plan considers those comments and, in large part, is based on them. In January 2001, the agencies jointly published the interim/draft plan in the Federal Register and requested public comment. The common plan outlines two processes for maintaining ongoing communication with grantee groups. First, it envisions the establishment of an ombudsman, a third party operating apart from the individual grant-making agencies and OMB, that could provide grantees with an avenue for making their concerns known if agency requirements appear to exceed the standards adopted. Second, the agencies planned to establish performance measures related to the purposes and requirements of the act and a process for assessing the extent to which specified goals and objectives have been achieved.
In developing the performance measures, the agencies were to consider input from applicants, recipients, and other stakeholders. The agencies planned to develop multiple measures to assess performance, including progress as perceived by the public and federal staff as well as objective process and outcome measures. The agencies expected to use these performance measures to evaluate their performance in meeting the plan’s goals and objectives and report annually on their progress as required by P.L. 106-107. As the streamlining reforms have been developed and implemented, the agencies and work groups have not fulfilled the envisioned processes for soliciting ongoing input from grantees. By failing to involve important stakeholders, the initiatives increase the risk that they will not fully achieve the objectives defined in P.L. 106-107 and the common plan. The plan envisioned the establishment of an ombudsman that could provide applicants/recipients an avenue for making their concerns known if agency requirements appear to deviate from the common systems or standard processes. The common plan set a target date of March 31, 2002, for finalizing the job description of the ombudsman. The agencies have not established the ombudsman position and do not currently plan to establish one due to changing priorities. In addition, the agencies have neither set specific annual goals and objectives nor used concrete performance measures in the annual progress reports, as was required by P.L. 106-107 and envisioned in the common plan. However, the P.L. 106-107 Program Manager is currently conducting an analysis of progress to date in meeting the requirements of P.L. 106-107 and an analysis of how the reforms have addressed the concerns expressed in the public comments. Furthermore, only one of the four active cross-agency work groups consistently uses the public comments during the development of its initiatives. The Pre-Award Work Group, which addresses the streamlining of announcements, applications, and award processes, has continued to use the public comments to inform its work. The other work groups informally vet their proposals with selected grantee groups. Grantees are not formally involved in the development of grant-streamlining proposals. The grant-streamlining teams solicit public comment only once a proposal is posted in the Federal Register. Representatives from a group of research grantees told us that this one-way communication is not sufficient to produce reforms that simplify the grant process for recipients. For example, they commented that the reform of the cost principles focused only on reducing the discrepancies in definitions used by the three different cost principles circulars and actually increased the administrative burden for the research community. The work groups have expressed concern that in seeking public input, they must take care not to violate the Federal Advisory Committee Act of 1972 (FACA), which establishes requirements pertaining to the creation, operation, duration, and review of covered advisory committees. However, because nonfederal participants do not act as full members, the work groups should not be subject to the FACA requirements. Furthermore, FACA would not limit the work groups’ ability to widely publicize their initiatives and invite public comment on an ongoing basis. The Grants.gov initiative has been more active in soliciting grantee input, but it is unclear if the Line of Business initiative will include activities to coordinate with grantees. 
In contrast to the P.L. 106-107 initiative, the Grants.gov initiative has institutionalized processes to inform the grantee community about its plans and activities and to gather ongoing input from the grantee community. Throughout development and implementation of Grants.gov, users’ comments from pilots and actual systems have been used to identify and address problems. Grants.gov has also conducted three user satisfaction surveys and maintains a Web portal for user comments. The Web site of the grant-streamlining teams was recently integrated into the Grants.gov Web site. The site invites public comment on both the Grants.gov system and broader grant-streamlining issues and initiatives. In addition, the Grants.gov Program Management Office conducted training and outreach to the various applicant constituencies and to agency staff to increase awareness of the Grants.gov initiative. Outreach efforts included monthly stakeholder meetings, train-the-trainer workshops, and grantor workshops. A help desk was established to address federal staff and applicants’ questions and provide assistance. At this time, it is unclear if the Grants Management Line of Business initiative will include a process for consultation and coordination with grantee groups.

Several initiatives to simplify and streamline the administration of grants have been proposed in response to P.L. 106-107. Some of these have been implemented and likely will help grantees to identify and apply for grants and meet the needs of federal grant-making agencies when they receive grants. The Grants.gov common portal is clearly used by many to identify grants and undoubtedly has simplified that process for grantees. As more agencies allow for electronic application through Grants.gov and more grantees begin to use the system, it should also simplify grant management. However, other initiatives that have been proposed have not yet been completed. Some have languished in the approval process. Others have not yet been adequately developed to even reach the approval stage. The lack of clear goals and timelines for the cross-agency work groups to complete tasks and for agencies to implement systems undoubtedly has contributed to the lack of progress in implementing these proposals. Further, agencies need to be held accountable internally for implementing these programs and should have performance measures and clear deadlines on which they report. To date, agencies have not even been held accountable for submitting annual reports required by P.L. 106-107, which may indicate to agencies that moving forward quickly on grant administration streamlining is not a high priority. In addition, the lack of continuity toward meeting P.L. 106-107’s requirement to develop a common reporting system (including electronic processes) for similar programs administered by different agencies may potentially prevent agencies from reaching the act’s goals before it sunsets in November 2007. As overarching committees have evolved and management of the cross-agency programs has been moved around among various parties, progress has been slowed. Clearer governance is needed to ensure that each group understands its roles and coordinates with the others to prevent overlap and collaborate on common initiatives. The various initiatives that are implementing P.L. 106-107 have a mixed record of coordinating with grantees.
Grants.gov publicizes its plans and meeting minutes on its Web site and solicits ongoing grantee input through its Web site, regular satisfaction surveys, and outreach meetings with grantees. In planning for the implementation of the act, the cross-agency work groups also solicited and used grantee input. In addition, they incorporated several means for soliciting ongoing grantee input in the plan. However, they did not implement the portions of the initial plan that would have provided for ongoing coordination with grantees. Unlike Grants.gov, the work groups have neither made information about their work public nor solicited ongoing grantee input, and approaches outlined in the common plan, such as establishing an ombudsman position, have not been implemented. Without ongoing grantee input, the reforms are less likely to meet the needs of the grantees and achieve the purposes of the act. In order to augment the progress toward meeting the goals of P.L. 106-107 for streamlining grant administration, we recommend that the Director, OMB, take the following five actions: ensure that individual agency and cross-agency initiatives have clear goals for completion of their initiatives; ensure that agency annual progress reports to Congress and OMB on implementation of P.L. 106-107 are prepared and contain information on their progress toward goals; ensure that efforts to develop common grant-reporting systems are undertaken on a schedule that will result in significant progress by the time P.L. 106-107 sunsets in November 2007; ensure that OMB’s strategy for addressing P.L. 106-107 integrates the three individual initiatives: HHS’s overarching P.L. 106-107 efforts, the Grants.gov program, and the Grants Management Line of Business initiative; and solicit grantee input and provide for coordination with grantees on an ongoing basis. We provided a draft of this report to OMB for comment. OMB’s formal comments are reprinted in appendix III. In addition to written comments, OMB provided us with technical comments verbally, which we incorporated as appropriate. In its formal comments, OMB stated that it agreed with many of the report’s recommendations and provided comments on the status of grant reform efforts. OMB stated it will continue to work aggressively with agencies to meet their annual reporting responsibilities and is committed to achieving E-Gov solutions and deploying technical solutions for streamlining policies and practices. Further, OMB commented that it will continue to facilitate the integration of the three grants initiatives related to P.L. 106-107 requirements and will continue to seek grantee input on an ongoing basis. We believe that these steps constitute progress toward ensuring that the goals of P.L. 106-107 are attained, although OMB needs to aggressively push forward. For example, while it has established a new grants committee, it needs to ensure that progress does not slow while this transition occurs. Although the Grants Management Line of Business initiative is under way, OMB needs to ensure that efforts to address P.L. 106-107 requirements, such as the development of common electronic systems to manage and report on the use of funding from similar federal grant programs administered by different agencies, move forward. Similarly, while public input was sought heavily during the development of the common plan and is sought once proposals are developed, the grantee community’s views need to be solicited throughout these processes and as new initiatives are selected. 
We are sending copies of this report to the Director of OMB. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you have any questions about this report, please contact me at (202) 512-6806 or Thomas James, Assistant Director, at (202) 512-2996. We can also be reached by e-mail at [email protected] and [email protected], respectively. Additional key contributors to this report are listed in appendix IV.

P.L. 106-107 requires each agency to report annually on its progress implementing the plan, although not all agencies have regularly submitted these reports. The annual agency progress report summarizes agency efforts in meeting the goals and objectives of the common plan. The annual governmentwide progress reports describe the collaborative efforts of 26 federal agencies to streamline and simplify the award and administration of federal grants. (See table 3.)

As cross-agency teams identified the need for streamlining, agency representatives and the Office of Management and Budget (OMB) recognized that potential grantees needed a simpler and more consistent way to identify and apply for federal grant opportunities. The process in place for identifying grant opportunities resulted in applicants searching for applications from many different agencies and then having to apply to the various agencies using different application forms and processes. Public comments from the grantee community identified the lack of a central source for obtaining information about all federal agencies’ current funding opportunities and the variation in the way agencies’ grant announcements were organized. P.L. 106-107 required that OMB coordinate grant-making agencies in establishing an interagency process to streamline and simplify these procedures for nonfederal entities. Further, it required that the agencies allow applicants to electronically apply and report on the use of funds from grant programs they administer. The E-grant initiative, along with other E-government approaches, was undertaken to meet these needs. It was implemented initially by the E-Grants Program Management Office based in the Department of Health and Human Services, which was the lead agency for P.L. 106-107 implementation. More recently, it has been referred to as Grants.gov, the Internet portal through which it is accessed.

The first service that Grants.gov implemented was the “find” capability, which established a single Web site to provide information on federal grant-funding opportunities. This enabled applicants to search these opportunities by several components, such as the type of activity funded (e.g., the arts and humanities, education, and the environment) and the agency providing funds. Further, it provided the capability of notifying potential fund recipients by e-mail of new opportunities that met parameters they identified. In addition, descriptions of funding opportunities were organized uniformly to simplify finding key information. Agencies began posting summaries in February 2003. A key aspect of its full implementation was OMB’s requirement that by November 7, 2003, all federal agencies were to electronically post information on funding opportunities that award discretionary grants and cooperative agreements at the Grants.gov Web site, using a standard set of data elements. Grants.gov’s program management office reports that since October 2003, all 26 grant-making agencies have listed grant opportunities in the “find” activity of Grants.gov.
The public’s use of the portal has grown significantly; according to the Program Management Office, the “find” activity on Grants.gov received about 2.2 million page requests in November 2004 and applicant e-mail notifications have averaged 600,000 to 700,000 weekly. More recently, Grants.gov has provided the capability to apply for grants electronically through the portal. The “apply” activity allows an applicant to download an application package from Grants.gov and complete the application off-line. After an applicant completes the required forms, they can be submitted electronically to Grants.gov, which transmits the application to the funding agency.

Grant-making agencies must take several steps to provide the capability to apply electronically. They work with Grants.gov Program Management Office staff to identify the forms needed and make them accessible. Previous forms that grant-making agencies have used for similar application packages are readily available, as are forms that other agencies have used that might be appropriate, thus simplifying the process of adding new applications. The agencies identify how long they would like the application packages to be retained on the site after they close; after that, they are archived on the site. While some agencies have enabled applicants to apply electronically directly on Grants.gov, some announcements link to a grant announcement in the Federal Register or link to more detail on the “find” site, in which case the applicant completes the application in hard copy.

To apply for grants electronically, the applicant must download specific free software—Pure Edge Viewer. After an application is submitted, the Grants.gov system checks the application to ensure all the required forms are included and sends the applicant an e-mail saying that it has been accepted, or rejected if a problem has been identified. If accepted, the application is then forwarded from Grants.gov to the grantor agency; when that agency downloads the data, it informs the Grants.gov system and the applicant is informed by Grants.gov that data have been downloaded to the agency. In some cases, agencies can download data directly to their own grant management systems, thus eliminating the need for staff time to input data. (This submission flow is summarized in the illustrative sketch below.)

Usage of the electronic “apply” component has been slower to grow than the use of the “find” component for a number of reasons. As shown in table 4, as of April 6, 2005, 20 of the 26 key federal grant-making agencies have posted “apply” packages, 723 electronic application packages were available, and 2,621 electronic applications have been received. For agencies, forms must be uploaded to the system. Further, some are struggling with setting up their systems to handle the data from Grants.gov. For grantees, some necessary registration steps require lead time—an estimated 6 days that must be allowed for the entire registration process the first time. This verifies that the grantee point of contact is the appropriate person to submit an application. Grants.gov’s surveys to determine users’ satisfaction with the system have also identified dissatisfaction with other aspects, such as the adequacy of the status page and the ease of submitting the applications.

Grants.gov staff members have reached out to both agencies and the grantee community, sometimes through the use of a contractor, to solicit input and to increase its usage. They have provided training and workshops to grant-making agencies and have hosted monthly stakeholder meetings to update users on changes.
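The submission flow just described can be summarized as a short sequence of checks and notifications. The following Python sketch is illustrative only: it is not the Grants.gov implementation, and the opportunity identifier, form names, and function names are hypothetical. It simply shows the validate, notify, and forward steps an application package passes through.

# Illustrative sketch only; not the actual Grants.gov system. It models the
# "apply" flow described above: a package is checked for required forms, the
# applicant is notified of acceptance or rejection, and accepted packages are
# forwarded to the grantor agency.

from dataclasses import dataclass

@dataclass
class ApplicationPackage:
    opportunity_id: str      # hypothetical identifier for the funding opportunity
    forms_submitted: set     # names of the forms the applicant included
    status: str = "RECEIVED"

# Hypothetical registry of required forms for each opportunity.
REQUIRED_FORMS = {"OPP-001": {"SF-424", "budget", "project-narrative"}}

def notify_applicant(app, message):
    """Stand-in for the e-mail notification step described in the report."""
    print(f"[{app.opportunity_id}] {message}")

def forward_to_agency(app):
    """Stand-in for transmitting the accepted package to the funding agency."""
    print(f"[{app.opportunity_id}] forwarded to grantor agency")

def process_submission(app):
    """Check for required forms, notify the applicant, and forward if accepted."""
    missing = sorted(REQUIRED_FORMS.get(app.opportunity_id, set()) - app.forms_submitted)
    if missing:
        app.status = "REJECTED"
        notify_applicant(app, f"rejected; missing forms: {missing}")
    else:
        app.status = "ACCEPTED"
        notify_applicant(app, "accepted")
        forward_to_agency(app)
    return app.status

# Example: a package missing its budget form is rejected before any agency handoff.
print(process_submission(ApplicationPackage("OPP-001", {"SF-424", "project-narrative"})))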
The Grants.gov Program Manager meets monthly with the Grants.gov Executive Board, comprising senior executives of partner agencies, to update them on activities and get guidance on strategic issues. As outreach to the grantee community, staff members have given presentations and provided resources to agencies to inform their grantee communities. Also, a “contact center” is available for grant applicants to assist with the electronic applications.

With the growth of its services, the operations of the Grants.gov Program Management Office have evolved. As of December 2004, the Program Management Office has several full-time employees, including a Program Manager and a Deputy Program Manager, and additional detailees from grantor agencies. It has not received direct appropriations but was funded during the period from 2002 to 2004 by contributions from 13 grant-making agencies, the Chief Financial Officers Council, and the General Services Administration (for maintenance of the Grants.gov “find” mechanism). Funding for those 3 years totaled about $29.4 million. Beginning with fiscal year 2005, Grants.gov has moved to a fee-for-service model. Funding will be from 26 grant-making agencies, with payments based on an agency’s total grant dollars awarded. Based on natural break points in data on funds that the agencies award, the grant-making agencies were divided into three categories. For 2005 and 2006, the 6 large agencies will be assessed $754,467, the 10 medium agencies will be assessed $452,680, and the 10 small agencies will be assessed $226,340, for a total of about $11,300,000 each year.

In addition to the above contacts, Jack Burriesci, Martin De Alteriis, Patricia Dalton, Susan Etzel, Ronald La Due Lake, Hannah Laufe, Donna Miller, Melissa Mink, and Carol Patey also made key contributions.

The federal government distributed about $400 billion in federal grants in fiscal year 2003 through about 1,000 different federal grant programs administered by several federal agencies with different administrative requirements. Congress, concerned that some of these requirements may be duplicative, burdensome, or conflicting—and could impede cost-effective delivery of services—passed the Federal Financial Assistance Management Improvement Act of 1999, commonly called P.L. 106-107, and mandated that GAO assess the act’s effectiveness. This report addresses (1) progress made to streamline and develop common processes for grantees and (2) the coordination among the Office of Management and Budget (OMB), the agencies, and potential grant recipients.

More than 5 years after passage of P.L. 106-107, grant agencies have made progress in some areas of grant administration, but in other areas, particularly the development of common reporting systems, progress is just beginning. Grant-making agencies together developed a common plan for streamlining processes. Several cross-agency teams identified changes that should be made, and these plans are in various stages of completion. For example, a Web-based system, Grants.gov, is now available to help potential grantees identify grant opportunities and apply for them electronically. Common forms are being developed to eliminate duplication and unnecessary differences among agencies. However, common electronic systems for reporting financial and performance information have not been developed, although the law requiring them sunsets in 2007. Further, individual agencies have not all reported on their progress annually, as required.
The individual agencies and the cross-agency work groups have a mixed record of coordinating with grantees. For example, the cross-agency work groups solicited public input on their early plan. Grants.gov publicizes its plans and solicits ongoing grantee input through its Web site and user surveys. However, the work groups generally have neither made information about their work public nor solicited ongoing grantee input. Without such input, reforms are less likely to meet the needs of grantees. In general, the oversight of streamlining initiatives has shifted, potentially contributing to the lack of progress on all aspects of grant management.
The FBI is the primary investigative agency within the Department of Justice. Its missions include investigating serious federal crimes, protecting the nation from foreign intelligence and terrorist threats, and assisting other law enforcement agencies. Approximately 12,000 special agents and 16,000 mission support personnel are located in the bureau’s Washington, D.C., headquarters and in more than 450 offices in the United States and 45 offices in foreign countries. Mission responsibilities at the bureau are divided among the following five major organizational components. Criminal Investigations: investigates serious federal crimes and probes federal statutory violations involving exploitation of the Internet and computer systems. Law Enforcement Services: provides law enforcement information and forensic services to federal, state, local, and international agencies. Counterterrorism and Counterintelligence: identifies, assesses, investigates, and responds to national security threats. Intelligence: collects, analyzes, and disseminates information on evolving threats to the United States. Administration: manages the bureau’s personnel programs, budgetary and financial services, records, information resources, and information security. Each component is headed by an executive assistant director who reports to the Deputy Director, who, in turn, reports to the Director. The components are further organized into subcomponents, such as divisions, offices, and other groups (hereafter referred to as “divisions”). Table 1 lists the components and briefly describes their respective divisions. Supporting the divisions are various staff offices, including the Office of the CIO. The CIO’s responsibilities include, for example, development of the bureau’s IT strategic plan and operating budget; development of IT investment management policies, processes, and procedures; and development and maintenance of the bureau’s enterprise architecture. The CIO reports directly to the Director. Figure 1 shows a simplified organizational chart of the components, divisions, Office of the CIO, and respective reporting relationships. To execute its mission responsibilities, the FBI relies extensively on IT. For example, the Criminal Justice Information Services (CJIS) division uses the National Crime Information Center 2000 to process approximately 4 million criminal identification inquiries and other related transactions for civilian, homeland security, and law enforcement agencies each day. Similarly, the Laboratory division stores records of known criminals on the Combined DNA Index System to compare with DNA evidence submitted by federal, state, and local law enforcement agencies. The FBI reports that it collectively manages hundreds of systems, networks, databases, applications, and associated IT tools at an average annual cost of about $800 million. As we have previously reported, the FBI’s IT environment is composed of outdated, nonintegrated systems that do not optimally support mission operations. To address its strategic IT needs, the bureau began modernizing its systems environment in the mid-1990s. Currently, the FBI reports that eight divisions will spend approximately $1 billion on 18 major IT modernization initiatives between fiscal years 2003 and 2005. These initiatives, such as Trilogy and the Investigative Data Warehouse, are to introduce new systems infrastructure and applications. 
For example, Trilogy is to establish an enterprise network to enable communications among hundreds of domestic and foreign FBI locations. According to the FBI, the first two segments of the project—the Transportation Network Component and the Information Presentation Component—were implemented as of April 2004. The third segment—the User Applications Component, commonly called the Virtual Case File—has been delayed and a new schedule is being determined. In addition, the Investigative Data Warehouse initiative is to provide the capability to search and share counterterrorism and criminal investigative information across the bureau; the FBI reports it is in the process of acquiring the warehouse and has plans for full deployment by the end of fiscal year 2004. Some divisions—such as CJIS, Cyber, and Investigative Technology—plan to spend over $70 million each on IT modernization in fiscal year 2005 alone. For instance, the Investigative Technology Division plans to spend approximately $83 million in fiscal year 2005 on three major IT initiatives: Digital Collection, Electronic Surveillance Data Management System, and the Computer Analysis Response Team. Table 2 shows, by FBI division, the major initiatives and their anticipated modernization spending. A description of each initiative is provided in appendix II. Integrated planning across related IT projects and effective policies and procedures for managing IT human capital, systems acquisitions, and investment activities are recognized hallmarks of successful public and private organizations, and they are essential ingredients for effectively managing large modernization efforts. Our research and experience with federal agencies has shown that executing modernization projects without these and other IT management controls increases the chances of implementing systems that are not well integrated and do not provide promised capabilities on time and within budget. The Congress and the Office of Management and Budget (OMB) have recognized the importance of these and other IT management controls. The Clinger-Cohen Act, for example, provides a framework for effective IT management that includes systems integration planning, human capital management, acquisition management, and investment selection and control. In addition, OMB has issued guidance on integrated IT modernization planning and effective IT human capital, acquisition, and investment management. Further, organizations such as Carnegie Mellon University’s Software Engineering Institute have also issued guidance on effective acquisition management practices for areas such as configuration management, project management, quality assurance, requirements development and management, and risk management. Over the past several years, reviews of the FBI’s efforts to leverage IT to support transformation efforts have identified management weaknesses. In particular, a December 2001 report initiated by the Department of Justice identified weaknesses with, for example, the bureau’s systems acquisition and human capital management processes. The weaknesses included not having (1) a policy that ensures consistent implementation of configuration management activities, (2) processes to ensure adequate definition of system requirements, and (3) an agencywide systems life cycle methodology. The report also noted that the FBI had not assessed the current skills of its employees on an ongoing basis, and it did not have a systematic approach for identifying the skills and abilities needed for the future. 
In December 2002, Justice’s Office of the Inspector General reported that the FBI was not effectively managing its IT investments. Specifically, the Inspector General reported that the bureau did not have a complete process for selecting new IT investments and was not following a disciplined process for controlling ongoing projects. To address this, the Inspector General made a series of recommendations aimed at implementing the processes and practices defined in our IT investment management framework. In a January 2004 follow-on report, the Inspector General stated that, while the bureau had developed plans to address these recommendations, full development and implementation of the plans—and thus the establishment of effective investment management processes—remained to be completed. More recently, between September 2003 and March 2004, we reported on the challenges the FBI faced in establishing effective IT modernization management. For example, we reported in September 2003 (and again in November) that the bureau had not yet developed a modernization blueprint—commonly referred to as an enterprise architecture—to guide and constrain modernization efforts. Accordingly, we made recommendations to help the bureau establish the architecture management capabilities needed to develop, implement, and maintain an enterprise architecture. The FBI agreed with our recommendations and is in the process of implementing them. In addition, in March 2004, we reported that the FBI has not benefited from having sustained IT management leadership with bureauwide authority. Specifically, the bureau’s key leadership and management positions, including the position of the CIO, had experienced frequent turnover, and the position of the CIO lacked bureauwide authority over IT. We found that historically much of the responsibility and authority for managing IT—including modernization planning, human capital management, systems acquisition management, and investment selection and control—was dispersed among the bureau’s divisions. We did not make recommendations in these areas at that time because our work to fully evaluate these areas had not yet been completed. Reviews of the bureau’s centerpiece systems modernization project, Trilogy, have identified management weaknesses as the cause for cost, schedule, and performance shortfalls that have been experienced by the project. For example, over the past several years, the Justice Inspector General issued several reports on the FBI’s management of Trilogy. According to the Inspector General’s September 2003 report, Trilogy funding grew from an original estimate of $379.8 million to $596 million, due in part to the lack of integration planning for one of the three components of Trilogy. In addition, the Inspector General reported that the original delivery date for Trilogy’s first two components (Transportation Network Component and Information Presentation Component) slipped 8 months, in part due to inadequately defined requirements. In March 2004, the Inspector General testified that the continued series of missed completion estimates and associated cost growth were due to, among other things, poorly defined requirements, project management deficiencies, frequent turnover of FBI IT managers, and the FBI’s focus on its other important law enforcement challenges. In addition, in September 2003, we reported that the bureau lacked an enterprise architecture—a key component in developing and modernizing systems. 
We found that the absence of the architecture contributed to unnecessary rework to integrate several modernization initiatives, including Trilogy. In March 2004, we testified that the bureau’s weaknesses in IT management controls, such as investment management and enterprise architecture, contributed to Trilogy schedule delays of at least 21 months and cost increases of about $120 million. Moreover, the National Research Council reported in May 2004 that the bureau was experiencing significant challenges in developing and implementing Trilogy. For example, the council found that the bureau did not have a permanent CIO with the technical knowledge to provide the strong direction needed for the Trilogy program. In addition, it found that modernization initiatives, such as Trilogy, were not closely linked to a coherent view of the bureau’s mission and operational needs. Based on its findings, the council concluded that the bureau was not on the path to success in its IT modernization program. In a follow-on letter, the council cited substantial progress on these fronts. In particular, it said that the bureau had hired a permanent CIO, and the CIO had identified the development of an enterprise architecture as a high priority.

The Clinger-Cohen Act requires the use of effective IT management practices such as organizationwide planning for the integration of interrelated systems. In addition, OMB provides guidance to federal agencies on such planning. As part of this planning, agencies are supposed to identify, understand, and manage interdependencies within and across individual IT systems modernization projects. Key elements of effective integrated project planning include linking all IT projects to the organization’s mission and related strategic goals; identifying and demonstrating gaps in mission performance due to, among other things, weak or nonexistent integration among existing projects, services, systems, databases, networks, or tools; defining interdependencies among IT projects, including the business processes to be supported and technical system interface requirements; assigning responsibilities and management structures for coordinating and overseeing IT project interdependencies; identifying the risks associated with project interdependencies and developing strategies to mitigate the risks; and ensuring that affected organizations provide input and commitment to plan development and implementation.

Addressing these elements, among other things, identifies the points where systems are to be integrated and establishes common ground for interproject planning and management, which is essential to ensuring that project plans—and thus system solutions—are effectively integrated. Our prior reviews at federal agencies and research on IT management have shown that attempting to modernize IT systems without performing such planning increases the risk of investing in system solutions that are duplicative, are not well integrated, are unnecessarily costly to maintain and interface, and do not effectively optimize mission performance.
Accordingly, until agencies develop integrated approaches, we have recommended limiting IT spending to cost-effective efforts that are congressionally directed; are near-term, relatively small, and low-risk opportunities to leverage technology in satisfying a compelling agency need; support operations and maintenance of existing mission-critical systems; involve deploying an already developed and fully tested system; or support establishing integrated planning and other modernization management controls and capabilities.

The FBI does not have a bureauwide integrated plan or set of plans for its many systems modernization projects. Instead, divisions have developed modernization plans covering solely those IT projects that are within their respective lines of authority. These plans include (1) division plans that describe to varying degrees how IT projects are to be executed to support the accomplishment of division-specific objectives and (2) capital asset plans and business cases—commonly referred to as budget Exhibit 300s—that justify the resources needed for the division’s major IT projects. However, these plans are not integrated and do not consistently demonstrate the elements of integrated IT project planning. Specifically, of the six FBI divisions we examined, two divisions—Cyber and CJIS—included the majority of the elements of integrated project planning, while the other four divisions each incorporated two or fewer of the elements. Table 3 summarizes our analysis.

More specifically, our analysis for each of the modernization planning elements showed the following: With respect to the first element, two divisions—Cyber and the Program Management Office—consistently linked their projects to either the bureau’s strategic plan or its top 10 priorities. The other divisions linked at least some of their individual projects to bureau-level strategy. Linking individual projects to the FBI’s strategic plan is an essential step to ensuring that the bureau’s IT initiatives do not overlap or leave gaps in mission functions and goals.

Only two divisions (CJIS and Security) identified and demonstrated gaps in existing capabilities. CJIS undertook an analysis of system deficiencies and technology trends to identify and specify improvements to its law enforcement systems. Security relied on prior reviews of security incidents and comparisons of existing practices with best practices to identify needed improvements in system security requirements. Other divisions largely stated the need for improvements in system capabilities and capacity without corresponding data on current or projected mission shortfalls. This is crucial because without supporting data to derive performance gaps, proposed improvements may be unnecessary, insufficient, or not identified at all. In addition, our research and experience with federal IT modernizations show that projects with inadequately defined improvements are likely to require more resources to plan and manage—including planning and management of interdependencies—than those that have been based on reliable performance data and thorough analysis.

All of the divisions addressed the third element, in part, but only two divisions—Cyber and CJIS—fully identified interdependencies for all of their projects. For example, CJIS identified interrelationships among business processes, systems, databases, networks, components, and tools.
The Investigative Technology Division, on the other hand, did not consistently identify interdependencies for tools, networks, or security. In addition, Security did not fully identify technical and programmatic interdependencies. Identifying project interdependencies is essential for recognizing the points of integration of projects and systems and for establishing common ground for interproject planning and management. The CJIS and Security divisions had the most robust mechanisms for coordinating their project interdependencies with other parts of the bureau and with external organizations. CJIS relies on its Advisory Policy Board to identify needed improvements, assess impacts to customers and their systems, and coordinate schedules and interfaces. Security collaborates with system owners and managers through division configuration and change control boards, the security certification and accreditation process, and other mechanisms to integrate its security projects and information assurance objectives. Both divisions have well-defined responsibilities for their project team members. Other divisions focused on coordination within individual project teams or a single division, leaving mechanisms for interacting with other divisions, systems, and technologies poorly defined. This is important because vague responsibilities and processes for managing project integration efforts can lead to omissions and conflicts in system interfaces and project activities. The fifth element was satisfied by four of the six divisions. Specifically, Cyber, CJIS, Investigative Technology, and the Program Management Office consistently addressed integration risks in their capital asset plans and business cases. Doing this is important because it allows for the systematic identification of risks associated with project interdependencies and management action to mitigate those risks. Finally, the CJIS and Cyber divisions enlisted participation and commitment from organizations affected by their projects and related system improvements. For instance, CJIS partnered with the advisory boards and councils, the vendor community, and the nation’s criminal justice community in successfully developing its systems. Other divisions, such as Investigative Technology and the Program Management Office, fell short of meeting this criterion because they did not consistently specify a means for project personnel to collaborate with other stakeholders on the development of integrated project plans. Establishing such a means for knowledgeable personnel to contribute to planning for interdependencies in areas such as project requirements, interfaces, and timetables is key to ensuring stakeholder commitment to project integration plans and their execution. FBI officials from each of the divisions agreed with the results of our analyses of their respective planning efforts and attributed the state of their planning to several factors. First, as we previously reported, the FBI does not have an enterprise architecture, and thus business processes and IT systems have been viewed parochially, rather than as corporate resources that must be planned and managed on a bureauwide basis. Second, no bureau policy exists for divisions to develop integrated IT project plans. Instead, existing policy assigns responsibility for IT planning, including planning for modernization projects, to divisions. Third, the bureau has not assigned responsibility and authority for ensuring that integrated bureauwide planning occurs. 
While the divisions are responsible for project planning, no organization is responsible for reviewing and approving the divisions’ plans to ensure that mission gaps across the bureau are fully addressed and project dependencies and overlap are minimized.

According to the CIO, several efforts are under way and planned to address these underlying weaknesses and strengthen modernization planning. Consistent with our prior recommendations, the FBI has established a program to develop an enterprise architecture. In doing so, the bureau has, among other things, (1) established a program office to manage the effort, (2) assigned a chief architect and supporting personnel, (3) established an architecture governance board that includes representatives from all divisions to review and identify projects that are inconsistent with the existing IT environment and inhibit internal and external information sharing, and (4) hired a contractor to assist with developing the architecture. The bureau plans to issue the first version of the architecture by the end of September 2004. This version is to document the bureau’s current IT environment. The bureau plans to issue the other key parts of the architecture—namely, the future IT operating environment and transition plan—in fiscal year 2005. Also, the CIO is in the process of merging agencywide authority and responsibility for IT, including systems modernization planning, under the CIO in time to be reflected in the bureau’s fiscal year 2006 budget and associated capital investment plans and business cases. Further, the CIO’s office intends to hire a contractor to facilitate bureauwide integrated planning, including the formulation of integrated plans for systems modernization projects.

Until the FBI completes these and other efforts to introduce an integrated approach to IT project planning, there is increased risk that the bureau’s IT systems will be unnecessarily duplicative, will later require expensive rework to be integrated, and will thus hamper organizational transformation efforts. According to the FBI, this risk has already become reality in the case of five key infrastructure projects (including Trilogy and the Investigative Data Warehouse) that were launched independently between May 2001 and June 2003 and later found to have significant areas of overlap. The FBI attributed the redundancy in part to the lack of integrated planning.

Establishing effective corporate policies and procedures for managing IT human capital, acquiring systems, and making investment decisions are examples of key best practices that leading organizations use to modernize their IT systems and facilitate organizational transformation. The FBI has such policies and procedures for managing IT human capital; however, it does not yet have a documented and consistent approach for acquisition and investment management. Specifically, adoption of best practices for acquisition management policies and procedures in such areas as configuration management and quality assurance varies among divisions, and bureau investment management policies and procedures, including selection and control processes, are still under development. The state of the FBI’s acquisition and investment management policies and procedures is due to a number of factors, including diffused and decentralized IT management authority, past inattention to IT management, and lack of sustained IT leadership. The CIO has recently taken steps to strengthen policies and procedures in each of these areas.
Until this is completed, the bureau will be challenged in its ability to effectively manage all of its systems modernization projects, and thus is at increased risk of acquiring systems that do not adequately satisfy mission needs on schedule and within budget, which could hamper the bureau’s systems modernization and organizational transformation. As we have previously reported, strategic human capital management includes viewing people as assets whose value to an organization can be enhanced by investing in them. As the value of people increases, so does the performance capacity of the organization. In March 2002, GAO, based on our experience with leading organizations, issued a model with four cornerstones encompassing strategic human capital management. One of the cornerstones, strategic workforce planning (also called strategic human capital planning), enables organizations to remain aware of and be prepared for current and future needs as an organization, ensuring that they have the knowledge, skills, and abilities needed to pursue their missions. In December 2003, GAO issued a set of key principles, or practices, for effective strategic human capital planning. These practices include involving top management, employees, and other stakeholders in developing, communicating, and implementing a strategic workforce plan; determining the critical skills and competencies that will be needed to achieve current and future programmatic results; developing strategies that are tailored to address gaps between the current workforce and future needs; building the capability to support workforce strategies; and monitoring and evaluating an agency’s progress toward its human capital goals and the contribution that human capital results have made to achieving programmatic goals. These practices are generic and apply to any organization or organizational component, such as an agency’s IT organization. The bureau has developed IT human capital policies and procedures and incorporated them into the bureau’s enterprisewide strategic human capital plan issued in March 2004. These IT policies and procedures are in alignment with the key best practices discussed above. For example, they call for top management stakeholders (e.g., the CIO, the head of the Office of Strategic Planning, and the head of Administration) and other stakeholders (e.g., section and unit chiefs) to be involved with the development, communication, and implementation of these policies and procedures. Further, the policies and procedures provide for the development of a detailed data bank to store critical skills needed in the development and selection of personnel, including IT staff. They also define strategies to address workforce gaps, including recruiting programs that provide for tuition assistance and cooperative education. In addition, the policies and procedures call for establishing an IT center to support workforce strategies and train existing personnel for future competencies and skills that will be needed. Further, the policies and procedures require monitoring and evaluating the agency’s progress by tracking implementation plans to ensure that results are achieved on schedule. The FBI will face challenges as it implements its strategic IT human capital policies and procedures. As we have previously reported, when implementing new human capital policies and procedures, how it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. 
With successful implementation, the bureau can better position itself to ensure it has the right people, in the right place, at the right time to effectively modernize IT and transform the organization.

The Clinger-Cohen Act requires, among other things, the establishment of effective IT management policies and procedures. The Software Engineering Institute’s Capability Maturity Models™ provide for 30 best practice policies and procedures for five key systems acquisition management areas—configuration management, project management, quality assurance, requirements development and management, and risk management. Collectively, these management areas and associated best practices provide a foundation for acquiring systems by allowing organizations to manage changes to the system configuration; track project cost, schedule, and performance; define standards to ensure integrity in products; establish clearly defined and managed requirements; and identify and mitigate risks. Each management area has five to seven best practices associated with it that, when properly defined and implemented, assist organizations in performing effectively in that area. A detailed list of the practices, by management area, is in appendix III.

The acquisition management policies and procedures currently in place at the FBI for these five areas vary widely by division. While each of the six divisions we examined has policies and procedures that incorporate many best practices, these divisions’ policies and procedures also do not address important practices. For example, in project management, the divisions’ policies and procedures generally addressed all of the best practices. Conversely, in requirements development and management, four of the six divisions’ policies and procedures addressed fewer than half of the best practices for that area. See figure 2 for a summary of our analysis.

The FBI attributed the variance among divisions and the lack of alignment with best practices to, among other things, the bureau’s decentralized approach to managing IT and past inattention given to IT management. Until recently, authority for managing IT, along with budget control, was diffused and decentralized among the divisions. In addition, the FBI did not establish bureauwide policies and guidance for developing systems acquisition policies and procedures consistently and in accordance with best practices. As such, the divisions defined policies and procedures independently from one another, contributing to different sets of policies and procedures.

To strengthen the FBI’s systems acquisition capabilities, the CIO has efforts planned and under way to define and implement bureauwide systems acquisition policies and procedures that are to incorporate best practices. Until this is accomplished, the bureau will be challenged in its ability to manage all of its systems modernization projects and thus is at increased risk that it will be unable to deliver promised capabilities on time and within budget. The analyses in the following sections show the variance among divisions in their use of best practices for the five acquisition management areas: configuration management, project management, quality assurance, requirements development and management, and risk management. An analysis of each division is in appendix III.
Configuration management involves identifying the configuration (i.e., descriptive characteristics of a system) at a given point in time, systematically controlling changes to that configuration, and maintaining the integrity of the configuration throughout the system’s life cycle. Effective policies and procedures for configuration management include the following practices: 1. defining roles and responsibilities, including identifying a person or group with authority for managing a system’s baselines and approving changes to the baselines; 2. developing a plan that defines the activities to be performed, the schedule of the activities, and the resources required (e.g., staff); 3. establishing a repository (also called a library), using tools and procedures to store and retrieve the configuration and to maintain control over changes to it; 4. identifying, documenting, managing, and controlling configuration items and their associated baselines; 5. managing system change requests and problem reports by ensuring that configuration changes are initiated, recorded, reviewed, approved, and tracked; 6. periodically reporting status of the configuration; and 7. periodically auditing baselines, including assessing the integrity and correctness of baselines, reporting audit results, and tracking audit action items to closure. The policies and procedures for three of the six divisions addressed these seven best practices, while policies and procedures for two divisions addressed all but one or two of the practices. The remaining division’s policies and procedures addressed just one of the seven practices. See figure 3 for a summary of our analysis. The key practices that are not addressed in division policies and procedures are important and their absence can negatively impact the divisions’ ability to effectively manage the configuration of their respective systems and thus their systems’ ability to efficiently and effectively support division objectives. In particular, Investigative Technology’s policies and procedures did not identify configuration management roles and responsibilities. This is important because project teams need to have a responsible party for approving and controlling changes. To do otherwise would allow anyone to make random changes to the configuration, potentially causing unnecessary rework and reconfiguration. As another example, this division’s policies and procedures did not establish a library system. This is also critical to successful configuration management because the library system stores the initial configuration of the system as well as any subsequent changes. Without the library system, the project team would be unable to ensure the correctness of the current configuration. In addition, the Program Management Office’s policies and procedures did not provide for periodic baseline auditing and periodic management review of the status of configuration management activities. These practices are important because they verify that projects are in compliance with applicable configuration management standards and procedures, and they provide awareness of and insight into systems process activities at the appropriate level and in a timely manner. The purpose of project management is to manage the activities of the project office and supporting organization to ensure a timely, efficient, and effective acquisition. Effective policies and procedures for project management include the following practices: 1. identifying project management roles and responsibilities; 2. 
developing a project management plan; 3. baselining and tracking the status of project cost, schedule, and performance, including associated risks; 4. establishing a process to identify, record, track, and correct problems discovered during the acquisition; and 5. periodically reviewing and communicating the status of project management activities and commitments with management and affected groups.

The policies and procedures for five of the six divisions addressed all five of these project management practices; one division did not address two practices. Specifically, Cyber’s policies and procedures did not identify processes for baselining and tracking project cost, schedule, performance status, and associated risks. See figure 4 for a summary of our analysis. This practice is important because it provides measurable benchmarks against which to gauge progress, identify deviations from expectations, and permit timely corrective action to be taken. Without this practice, the chances of system projects costing more than budgeted, taking longer than envisioned, and not performing as intended are greatly increased. The division’s policies and procedures also did not provide for a process to identify, record, track, and correct problems. This practice is important because it provides for systematically managing and controlling issues that impact cost, schedule, or performance.

Quality assurance describes processes for providing independent assessments of whether management process requirements are being followed and whether product standards and requirements are being satisfied. Effective quality assurance policies and procedures include the following practices: 1. identifying quality assurance roles and responsibilities; 2. having a quality assurance plan; 3. participating in the development and review of plans, standards, and procedures; 4. reviewing work activities and products; 5. documenting and handling deviations from standards and procedures that are found in activities and work products; and 6. periodically reporting and reviewing the results and findings of quality assurance activities with management.

One division has incorporated these six quality assurance practices in its policies and procedures; the remaining five divisions included all but one or two. See figure 5 for a summary of our analysis. For example, the policies and procedures for Counterterrorism and Information Resources do not address participating in the development and review of plans, standards, and procedures, which is key to ensuring that they are aligned with relevant systems acquisition policies, are appropriately tailored to meet project needs, and are usable for performing quality reviews and audits. In addition, the policies and procedures for Cyber, Investigative Technology, and the Program Management Office do not include periodic reporting and reviews of the results and findings of quality assurance activities. This practice is important to ensuring that issues and concerns that could impede quality outcomes are disclosed so that appropriate corrective action can be taken. If they are not disclosed, the chances of system cost, schedule, and performance shortfalls are increased.

Requirements development and management involves establishing and maintaining agreement on what the system is to do (functionality), how well it is to do it (performance), and how it is to interact with other systems (interfaces).
Effective policies and procedures for requirements development and management include the following practices: 1. identifying requirements development and management roles and responsibilities; 2. involving end users in development of and changes to requirements; 3. having a requirements management plan; 4. developing and baselining requirements, and controlling changes to them; 5. appraising changes to requirements for their impact on the project or IT environment; 6. maintaining traceability among requirements and other project deliverables; and 7. periodically reviewing the status of requirements activities with management.

With one exception (CJIS), the policies and procedures for the divisions generally did not address the above practices. See figure 6 for a summary of our analysis. For instance, while the Program Management Office’s policies and procedures met four of the seven practices, such as involving end users in development of and changes to the requirements and reviewing the status of project requirements activities with management, they did not address maintaining traceability among requirements and other project deliverables. This practice is important because it ensures that project deliverables used to acquire systems are consistent with end user needs, which is critical to delivering systems that perform as intended and thus meet mission needs. Moreover, the policies and procedures of four divisions—namely Counterterrorism, Cyber, Information Resources, and Investigative Technology—satisfied three or fewer of the practices. For example, none of the four divisions’ policies and procedures addressed appraising changes to requirements for their impact on the project or the IT environment. Appraising changes is important because it allows management and the project team to determine whether changes to the requirements, along with their associated effect on the existing IT environment as well as project cost and schedule estimates, would be worthwhile. Additionally, Investigative Technology was missing six of seven practices, including developing and baselining requirements and maintaining them under change control. These practices are essential to ensuring that requirements are completely and correctly defined and that uncontrolled changes, commonly referred to as “requirements creep,” are mitigated.

The actual consequences of not having effective requirements development and management policies and procedures can be seen in the performance of the bureau’s Trilogy project, which is to replace aging systems infrastructure and consolidate and modernize key investigative case management applications. The FBI reported that, as of August 2004, Trilogy has experienced a delay of at least 21 months and a cost increase of $201 million. According to the CIO, the project’s added time and cost were due in large part to requirements development and management process weaknesses.

Managing risks means proactively identifying facts and circumstances that increase the probability of failing to meet system expectations and commitments and taking steps to prevent failures from occurring. Effective policies and procedures for risk management include the following practices: 1. identifying risk management roles and responsibilities; 2. having a risk management plan; 3. integrating risk management with other management and planning functions; 4. identifying, analyzing, controlling, and mitigating project risks; and 5. periodically reviewing the status of project risks and risk mitigation activities with management.
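To make the last two risk management practices more concrete, the following Python sketch shows one minimal way a project office might record, score, and review risks. It is illustrative only; the fields, scoring scale, and reporting threshold are assumptions rather than an FBI- or SEI-prescribed format.

# Minimal, illustrative risk register (assumed fields and scale). It records
# risks with an owner and mitigation plan and supports a periodic management
# review of open, high-exposure items.

from dataclasses import dataclass

@dataclass
class Risk:
    risk_id: str
    description: str
    owner: str              # person responsible for the risk
    probability: float      # assumed scale: 0.0 to 1.0
    impact: int             # assumed scale: 1 (low) to 5 (high)
    mitigation: str = ""    # planned mitigation action
    status: str = "OPEN"    # OPEN or CLOSED

    @property
    def exposure(self):
        """Simple exposure score used to rank risks for review."""
        return self.probability * self.impact

class RiskRegister:
    def __init__(self):
        self.risks = []

    def identify(self, risk):
        self.risks.append(risk)

    def review(self, threshold=2.0):
        """Return open risks at or above the reporting threshold, highest first."""
        open_risks = [r for r in self.risks if r.status == "OPEN"]
        return sorted((r for r in open_risks if r.exposure >= threshold),
                      key=lambda r: r.exposure, reverse=True)

# Example usage: a requirements-volatility risk surfaces in the periodic review.
register = RiskRegister()
register.identify(Risk("R-1", "Requirements may change late in development",
                       owner="Project manager", probability=0.6, impact=4,
                       mitigation="Baseline requirements and control changes"))
for risk in register.review():
    print(risk.risk_id, risk.description, f"exposure={risk.exposure:.1f}")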
Managing risks means proactively identifying facts and circumstances that increase the probability of failing to meet system expectations and commitments and taking steps to prevent failures from occurring. Effective policies and procedures for risk management include the following practices:
1. identifying risk management roles and responsibilities;
2. having a risk management plan;
3. integrating risk management with other management and planning functions;
4. identifying, analyzing, controlling, and mitigating project risks; and
5. periodically reviewing the status of project risks and risk mitigation activities with management.
The policies and procedures of all six divisions incorporate two or more of the five risk management best practices. See figure 7 for a summary of our analysis. However, key practices were not addressed. For example, none of the divisions' policies and procedures provides for integrating risk management with other planning and management functions. This practice is important because it ensures that possible risks and mitigation strategies are adequately provided for in project planning and schedule estimates and that identified risks are assessed for their impact on the organization's IT environment. In addition, the policies and procedures of Counterterrorism, Cyber, and Information Resources do not provide for periodically reviewing the status of project risks and risk mitigation activities with management, a process that is key to ensuring that management is aware of risks to the project, plans to mitigate these risks, and the status and progress of mitigation activities.
The Clinger-Cohen Act of 1996 provides an important framework for effective investment management. It requires federal agencies to focus on the results they achieve through IT investments while concurrently improving their acquisition processes. It also requires discipline and structure in how agencies select and control investments. In May 2000, we issued a framework (which we updated in March 2004) that encompasses IT investment management best practices, including investment selection and control policies and procedures, and is based on our research at successful private and public sector organizations. This framework is consistent with the Clinger-Cohen Act and identifies, among other things, effective policies and procedures for developing an enterprisewide collection—or portfolio—of investments to enable an organization to determine priorities and make decisions across investment categories based on analyses of the relative organizational value and risks of all investments. These portfolios include three types of IT investments—planned (proposed systems or system enhancements), under way (systems under development), and completed (existing systems). The framework also calls for integrating and overseeing these investments to manage the complete portfolio of investments.
The bureau's efforts to define IT investment policies and procedures are evolving slowly toward alignment with best practices. Specifically, according to officials from the CIO's office, the bureau has had three separate and sequential efforts to develop its investment management process. The first effort started in December 2001, when the bureau developed an investment management and transition plan. This plan called for establishing and defining bureau policies and procedures for the select, control, and evaluate steps set forth in GAO's framework. In March 2002, the FBI completed the definition of select phase procedures and began pilot testing them in developing its fiscal year 2004 IT budget request for new investments and legacy (existing) system enhancements bureauwide. The bureau completed the pilot in May 2002, but efforts to further define policies and procedures for the control and evaluate phases stalled and were not fully completed. In early 2003, the bureau began its second effort—shifting the focus of its investment management process by initiating development of a new process for investing in IT and other non-IT assets such as buildings and plant equipment.
According to officials from the CIO’s office, development of the process stalled at the end of 2003, before it could be fully implemented. In early 2004, the bureau started its third and current effort. The FBI decided to have separate policies and procedures for IT due to the differences in IT and non-IT investments. According to the CIO, the bureau’s current processes for IT investment management include one for investments that are planned and under way and another for maintenance of existing systems. The process for investments that are planned and under way is still being defined. The CIO has established a program office and has allocated staff, but the work is just beginning and is not planned to be completed until the second quarter of fiscal year 2005. For existing systems, the bureau developed a set of policies and procedures that define a process to allocate operations and maintenance resources against competing needs by assessing the performance of existing systems. The bureau is piloting the process on different types of systems (e.g., application, infrastructure) with the goal of enterprisewide implementation by April 2005. Between June and December 2003, the program office tested the procedures on Information Resources application systems. A second pilot was recently initiated in April 2004 on Information Resources infrastructure systems, with the goal of completing the test by November 2004. According to the CIO, the bureau has hired a contractor to assist with enterprisewide rollout, which began in June, and is also in the process of acquiring a tool to manage its IT investment portfolio. According to bureau officials, including the current CIO, the slowly evolving state of investment management is due in part to the fact that the bureau CIO position, which is responsible for developing the requisite policies and procedures, has had a high rate of turnover. Specifically, the CIO has changed five times in the past 2 1/2 years. As a result, development of investment management policies and procedures has not benefited from sustained management attention and leadership, and thus has shifted focus repeatedly and lagged. Until planned and ongoing improvements are completed, the FBI will lack effective controls over its IT investments and thus will be unable to ensure that the mix of investments it is pursuing is the best to meet the bureau’s goals for modernizing IT and transforming the organization. The CIO has acknowledged the weaknesses in systems acquisition management and investment management and has improvements planned to strengthen them. For example, according to the CIO, the FBI is establishing a strategic planning process as part of a bureauwide IT management effort. The CIO also said that the results of the strategic planning process will be used to guide the enterprise architecture and IT investment management. In putting this process in place, the FBI has drafted an IT strategic plan (to be issued in September 2004) that outlines ongoing and planned efforts to strengthen both investment management and systems acquisition policies and procedures by standardizing them across the bureau and incorporating best practices such as GAO’s investment management model and best practices in configuration management and quality assurance. In addition, the CIO has begun efforts to establish bureauwide requirements development and management policies and procedures by developing a process for requirements definition—the first step in developing requirements. 
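As one illustration of the portfolio analysis that the investment management framework calls for, and of the kind of decision support a portfolio management tool might provide, the sketch below ranks candidate investments by relative value and risk within a budget constraint. The investment names, scores, costs, and budget are invented for illustration and do not represent actual FBI projects or data.

```python
# Hypothetical candidate investments with illustrative value and risk scores (1 = low, 5 = high)
# and costs in millions of dollars.
candidates = [
    {"name": "Case management upgrade",    "value": 5, "risk": 4, "cost": 120.0},
    {"name": "Fingerprint system refresh", "value": 4, "risk": 2, "cost": 60.0},
    {"name": "New collaboration portal",   "value": 2, "risk": 3, "cost": 25.0},
]

def prioritize(portfolio, budget):
    """Rank investments by value relative to risk, then select within the available budget;
    a simplified form of the portfolio analysis described above."""
    ranked = sorted(portfolio, key=lambda item: item["value"] / item["risk"], reverse=True)
    selected, remaining = [], budget
    for item in ranked:
        if item["cost"] <= remaining:
            selected.append(item["name"])
            remaining -= item["cost"]
    return selected

# Selects the fingerprint refresh and the portal within a $150 million budget.
print(prioritize(candidates, budget=150.0))
```

An actual select process would also weigh factors such as alignment with the enterprise architecture and mission priorities rather than a single value-to-risk ratio.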
The CIO has also drafted a life cycle management process that is to integrate systems acquisition management, investment management, and other key IT domain areas, such as IT strategic planning and enterprise architecture. According to the CIO, this integration is to be completed by the end of 2006. These improvements, if properly defined and implemented, will increase the FBI’s modernization management capabilities. However, we remain concerned about their completion for several reasons. First, the improvements have yet to be completely defined and implemented. In addition, other key ingredients to effective IT management—development of a modernization blueprint and the establishment of integrated project planning—are not yet in place. Further, as discussed earlier, the FBI has had problems sustaining leadership and management attention for similar IT improvements. The FBI is beginning to lay the management foundation needed for comprehensive improvements in its systems modernization management approach and capabilities. The foundational steps are in appropriate areas, such as development of a modernization blueprint (enterprise architecture), initiation of integrated project planning, and establishment of IT management policies and procedures for human capital, systems acquisition, and investment selection and control. However, the steps still need to be fully defined and properly implemented across the bureau to produce the integrated systems environment needed to optimally support mission needs and produce system investments that deliver expected capabilities and mission benefits on time and within budget and thus support the organizational transformation. This will require senior executive leadership and commitment and provision of sufficient CIO authority to fully define and institutionalize effective IT management approaches and capabilities bureauwide. Such commitment includes vesting accountability and responsibility for managing IT under the CIO— including budget management control and oversight of IT programs and initiatives—and aligning modernization planning and management policies and procedures with the best practices of leading organizations. Until this occurs, the bureau will remain challenged in its ability to effectively and efficiently manage its systems modernization efforts, and thus its near-term investments in modernized systems will remain at risk. Until the bureau’s IT management foundation is completed and available to effectively guide and constrain the hundreds of millions of dollars it is spending on IT investments, we recommend that the Director direct the heads of the divisions to limit spending on their respective IT investments to cost-effective efforts that take advantage of near-term, relatively small, low-risk opportunities to leverage technology in satisfying a compelling bureau need; support operations and maintenance of existing systems critical to the FBI’s mission; or support establishment of the FBI’s IT management foundation, including the development of a modernization blueprint (enterprise architecture), initiation of integrated project planning, and development of IT management policies and procedures for systems acquisition and investment selection and control. In establishing the management foundation, we recommend that the FBI Director provide the CIO with the responsibility and authority for managing IT bureauwide, including budget management control and oversight of IT programs and initiatives. 
In addition, we recommend that the FBI Director, with assistance from the CIO, ensure that future and ongoing modernization plans and efforts are effectively integrated by taking five actions: (1) establishing a bureauwide requirement (policy) to develop an integrated plan (or set of plans) for modernization investments, (2) developing corresponding guidance on plan contents and scope, (3) ensuring the appropriate resources and training are available to implement policy and guidance, (4) assigning responsibility and accountability for developing the plans, and (5) assigning responsibility and accountability to the CIO for reviewing the plans to ensure adherence to the policy and guidance, including alignment with the bureau’s enterprise architecture. We also recommend that the FBI Director, with the CIO’s assistance, take four actions to ensure that the bureau establishes effective policies and procedures for systems acquisition and investment management selection and control. With regard to systems acquisition, we recommend (1) correcting the weaknesses in configuration management, project management, quality assurance, requirements development and management, and risk management policies and procedures described in this report’s body and detailed in appendix III and implementing the resulting changes accordingly; and (2) assessing the other divisions that manage IT investments to determine whether their policies and procedures align with best practices and, to the extent there are gaps, correcting them. With regard to IT investment management, we recommend (3) developing the bureau’s investment management processes in accordance with key IT investment decision-making best practices, such as GAO’s IT investment management framework; and (4) identifying, and acting on, options for speeding up their implementation. In its written comments on a draft of this report, which were signed by the CIO and are reprinted in appendix IV, the FBI agreed that the bureau is taking steps to lay the management foundation for improving IT operations. The FBI also agreed that, while progress is being made, much work remains to implement and institutionalize planned and ongoing IT management improvements. It stated that our recommendations are consistent with the FBI’s internal reviews and with those of other oversight entities. In addition, the FBI described actions planned and under way to address our recommendations and provided technical comments, which we have incorporated, as appropriate, in the report. We are sending copies of this report to the Chairman and Vice Chairman of the Senate Select Committee on Intelligence, and the Chairman and Vice Chairman of the House Permanent Select Committee on Intelligence. We are also sending copies to the Attorney General; the Director, FBI; the Director, Office of Management and Budget; and other interested parties. The report will also be available without charge on GAO’s Web site at http://www.gao.gov. Should you have any questions about matters discussed in this report, please contact me at (202) 512-3439 or by e-mail at [email protected]. Key contributors to this report are listed in appendix V. As agreed with your offices, our objectives were to examine whether the FBI has (1) an integrated plan for modernizing its IT systems, and (2) effective policies and procedures governing management of IT human capital, systems acquisition, and investment selection and control. 
For the first objective, we focused on the bureau’s IT modernization plan and supporting documents. In light of the FBI’s response that its divisions were responsible for modernization planning, we included six divisions in our scope of work—Criminal Justice Information Services (CJIS), Cyber, Information Resources, Investigative Technology, the Program Management Office, and Security—because they had the largest planned or ongoing IT modernization investments. For the second objective, we focused on the bureau’s policies and procedures for IT human capital, systems acquisition, and investment selection and control. In response to this request, bureau officials told us that systems acquisition policies and procedures were developed within each division. To obtain a crosscutting sample, we analyzed the systems acquisition policies and procedures of at least one division with major IT modernization investments from each of the components, based on funding for fiscal years 2003 through 2005; thus, the scope for systems acquisition included Counterterrorism, CJIS, Cyber, Information Resources, Investigative Technology, and the Program Management Office. To address the first objective—determining whether the FBI had an integrated plan or set of plans for modernizing its IT systems—we reviewed program plans, IT capital asset plans and business cases (commonly called Exhibit 300s), and other supporting documentation from each of the six divisions, as well as the bureau’s strategic plan, draft IT strategic plan, and information sharing strategy, and then compared this documentation with Office of Management and Budget (OMB) planning guidance and our research and past experience on federal systems modernizations to determine the extent to which the plans exhibited an integrated approach to managing IT projects, including addressing project interdependencies. We also interviewed FBI officials from these organizations, as well as the Finance Division, Counterterrorism Division, Counterintelligence Division, Office of Intelligence, and the Office of the Chief Information Officer (CIO) to (1) verify and clarify our understanding of headquarters and division modernization planning roles, processes, and products; (2) determine why division plans did not fully satisfy the elements of effective modernization planning; and (3) identify the effects of not having a fully integrated modernization plan (or set of plans). In addressing the second objective—determining whether the bureau has effective policies and procedures governing management of IT human capital, IT systems acquisition, and IT investment selection and control— we assessed whether bureau policies and procedures were fully consistent with the practices of successful private and public IT organizations and, where appropriate, those specified in relevant federal IT management laws and administrative guidance (e.g., OMB circulars and agency-specific rules and regulations) that embody such best practices. A detailed description of our methodology for each of these management controls and capabilities is provided below. To evaluate the bureau’s policies and procedures in IT human capital management, we analyzed the FBI’s strategic human capital plan, specifically those parts addressing IT human capital management. We then compared the results of our analysis with best practices for strategic workforce planning. 
We chose strategic workforce planning because it is central to strategic human capital management for organizations, like the FBI, that are in the early stages of transformation. In addition, these practices apply to any organization or organizational component, such as the bureau’s IT organization. We also interviewed senior FBI officials, including the CIO and the assistant director responsible for the bureau’s human capital effort, to verify and clarify our understanding of headquarters and division human capital policies and procedures. To determine whether the FBI has effective policies and procedures governing management of IT systems acquisition, we compared division- level policies and procedures with best practices. In doing so, we focused on the following key areas: configuration management, project management, quality assurance, requirements development and management, and risk management. We evaluated these areas because they are used throughout the systems acquisition life cycle and are critical to the success of organizations, like the FBI, that are in the early stages of systems modernization. Best practices for these areas are provided in the Carnegie Mellon University Software Engineering Institute’s Capability Maturity Models. To document division policies and procedures, we reviewed division-level management plans and handbooks, standard operating procedures, common software processes, systems development life cycle guidance, management group charters, and management plan templates. We then compared the policies and procedures with best practices for the five key management areas. In addition, we interviewed the CIO and FBI division officials who were responsible for IT systems acquisition management to (1) verify and clarify our understanding of division-level policies and procedures in each of the five control areas; (2) identify planned and ongoing initiatives to, among other things, improve systems acquisition management across the bureau, including the definition and implementation of a bureauwide systems life cycle management process that is to include systems acquisition management policies and procedures consistent with best practices; (3) determine why divisions varied in their use of best practices; and (4) determine the effects of not having these practices in place on ongoing and planned systems modernization initiatives. To evaluate the bureau’s IT investment management, including selection and control, we reviewed the Inspector General’s December 2002 report and audit follow-up memoranda on the bureau’s efforts to develop and implement effective investment management processes. We also reviewed bureau documents, including the draft IT strategic plan, on steps taken since the Inspector General’s 2002 report. Further, we interviewed the CIO and officials from the CIO’s office responsible for investment and portfolio management to understand improvements under way and planned, why progress has been slow, and the effect of not having effective policies and procedures in place and operating while the bureau continues to make large investments in modernized systems. Finally, to verify our findings and validate our assessments, we met and discussed with the CIO and the affected division officials our analysis of the state of integration plans and IT management policies and procedures. 
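The comparisons described above amount to building a coverage matrix of best practices against each division's policies, similar in spirit to the summaries shown in figures 4 through 7. The Python sketch below illustrates the idea; the practice list paraphrases the risk management practices discussed earlier in this report, but the division names and yes/no data are hypothetical and are not our actual analysis results.

```python
# Illustrative gap analysis: compare each division's policies against a best-practice checklist.
practices = [
    "roles and responsibilities defined",
    "risk management plan",
    "integration with other planning and management functions",
    "identify, analyze, control, and mitigate risks",
    "periodic reviews with management",
]

# Hypothetical data: the set of practices each division's policies address.
division_policies = {
    "Division A": {"roles and responsibilities defined", "risk management plan",
                   "identify, analyze, control, and mitigate risks"},
    "Division B": {"roles and responsibilities defined",
                   "identify, analyze, control, and mitigate risks",
                   "periodic reviews with management"},
}

def coverage_report(practice_list, policies):
    """Print, for each division, how many practices its policies address and which are gaps."""
    for division, addressed in sorted(policies.items()):
        gaps = [p for p in practice_list if p not in addressed]
        print(f"{division}: addresses {len(addressed)} of {len(practice_list)} practices; "
              f"gaps: {', '.join(gaps) if gaps else 'none'}")

coverage_report(practices, division_policies)
```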
We performed our work at FBI headquarters in Washington, D.C., and at field locations in Clarksburg, West Virginia, and Quantico, Virginia, from November 2003 through July 2004, in accordance with generally accepted government auditing standards.
Description of intended functions and services:
- Provide system architectural, engineering, development, integration, and test services to complete the modernization of FBI information technology.
- Provide direct access to law enforcement and intelligence databases from a collection of personal computers connected through a common unclassified FBI local area network.
- Enable federal, state, and local crime laboratories to exchange and compare DNA profiles electronically, including the capability to link serial violent crimes to each other and to convicted offenders.
- Ensure the ability of the FBI to collect, preserve, examine, and present computer evidence in support of FBI investigative programs, including developing technical capabilities that provide timely and accurate forensic information and preserving evidence to be analyzed by counterintelligence and counterterrorism experts.
- Ensure the ability of the FBI to collect evidence and intelligence (for example, from telephone calls and modem transmissions) through the acquisition, deployment, and support of communications interception techniques and systems to facilitate and support national security, domestic counterterrorism, and criminal investigative efforts.
- Implement a system architecture that increases the FBI's ability to manage, analyze, and share electronic surveillance and other types of collected data, and integrates data analysis capabilities to improve the efficiency with which investigators can develop leads and intelligence.
- Manage data for end-to-end decision making that contributes to the mission of keeping foreign terrorists and their supporters out of the United States or leads to their exclusion, denial of benefits, surveillance, or prosecution.
- Provide the local, state, federal, and international law enforcement community and homeland security organizations with criminal history services and the capability to search the FBI fingerprint repository for matches to ten-print and latent fingerprints.
- Provide the capability to easily and rapidly search and share counterterrorism and criminal investigative information—including text, photographs, video, and audio material—across the FBI and with federal, state, and local organizations.
- Provide a foundation for safeguarding the FBI's information, including developing a comprehensive and proactive security program, improving security awareness, monitoring FBI systems, conducting vulnerability assessments, and establishing a critical incident response capability.
- Provide the IT infrastructure required to support the task force's efforts to capture the cumulative knowledge of area law enforcement agencies and the federal government in a systematic and ongoing manner so as to produce regional counterterrorism and crime strategies and cooperative investigations.
- Provide IT support and services to the FBI's foreign locations, including reducing vulnerabilities to accessing and sharing critical, time-sensitive information internationally.
- Provide an online computerized index of crime information—including information about individuals, vehicles, and property—to local, state, federal, and international law enforcement and criminal justice agencies.
- Conduct name searches and provide criminal history records on individuals purchasing firearms or transferring ownership of firearms.

Description of intended functions and services:
- Security Management Information System: Support all activities and functions within the bureau's Security division, including replacing manual work processes with efficient streamlined automation, consolidating existing security applications, and enhancing electronic information sharing with other FBI divisions, the law enforcement community, and the intelligence community.
- Provide a backup system for the top secret/sensitive compartmented information local area network and expand the user base of this network within FBI headquarters, field offices, and other facilities.
- Special Technologies Applications Section: Provide IT resources and services for investigations of federal violations in which the Internet, computer systems, or networks are exploited as instruments or targets of terrorist organizations, foreign government-sponsored intelligence operations, or criminal activity.
- Introduce new systems infrastructure and upgrade existing investigative and intelligence applications, including establishing an enterprise network to enable communications among hundreds of domestic and foreign FBI locations.

In addition to the individual named above, key contributors to this report included Nabajyoti Barkakarti, Katherine Chu-Hickman, Lester Diamond, Elena Epps, Nancy Glover, Paula Moore, and Megan Secrest.

The Federal Bureau of Investigation (FBI) is investing more than a billion dollars over 3 years to modernize its information technology (IT) systems. The modernization is central to the bureau's ongoing efforts to transform the organization. GAO was asked to determine whether the FBI has (1) an integrated plan for modernizing its IT systems and (2) effective policies and procedures governing management of IT human capital, systems acquisition, and investment selection and control. Although improvements are under way and planned, the FBI does not currently have an integrated plan for modernizing its IT systems. Each of the bureau's divisions and other organizational units that manage IT projects performs integrated planning for its respective IT projects.
However, the plans do not provide a common, authoritative, and integrated view of how IT investments will help optimize mission performance, and they do not consistently contain the elements expected to be found in effective systems modernization plans. FBI officials attributed the state of modernization planning to, among other things, the bureau's lack of a policy requiring such activities, which is due in part to the fact that the responsibility for managing IT--including modernization planning--has historically been diffused and decentralized. The FBI's CIO recognizes these planning shortfalls and has initiated efforts to address them. Until they are addressed, the bureau risks acquiring systems that require expensive rework to be effectively integrated, thus hampering organizational transformation. The FBI has established policies and procedures governing IT human capital that are consistent with best practices used by leading private and public organizations. However, the bureau's policies and procedures governing systems acquisition, which are developed on a decentralized basis by the divisions and other units that manage IT projects, include some but not all best practices. In addition, the bureau's investment management policies and procedures, which started in 2001, have been evolving and progressing slowly toward alignment with best practices. According to FBI officials, the state of the bureau's acquisition and investment management policies and procedures is due to a number of factors, including diffused and decentralized IT management authority. The CIO recognizes these problems and has efforts planned and under way to strengthen policies and procedures. Until these efforts are completed, the bureau increases the risk that it will experience problems delivering promised IT investments on time and within budget, which, in turn, could adversely affect systems modernization and organizational transformation.
Beginning with tax year 2014, individuals are to report on their health care coverage, report exemptions to the coverage requirement, pay the shared responsibility tax penalty when they file their tax returns, or do some combination of the above. In January 2015, IRS began processing tax year 2014 tax returns, on which taxpayers were first required to report health care coverage information. IRS also started verifying taxpayers' premium tax credit (PTC) claims using data from the marketplaces. However, IRS had limited information with which to verify the information taxpayers reported because complete marketplace data for most states were not submitted by the due date and the requirement for health issuers and applicable large employers to submit information returns was delayed until tax year 2015. IRS has limited information with which to verify the 2014 health coverage, exemption, and shared responsibility payment (SRP) information taxpayers reported. Therefore, according to IRS officials, IRS is using its standard examination processes to check the information taxpayers report. According to IRS officials, based on available resources, they will balance their compliance efforts related to the health coverage requirements issue with compliance efforts on other issues. For tax year 2015 tax returns, IRS plans to begin implementing additional PPACA provisions that will expand its capability to verify compliance with the health care coverage requirements by using information reported by health issuers and employers. To help prepare for these compliance checks, IRS plans to analyze filing behaviors and patterns for 2014 tax returns, such as the characteristics of taxpayers who reported an SRP.
Health care coverage verification. IRS could verify the coverage information reported by marketplace customers on their 2014 tax returns using the Form 1095-A information provided by the marketplaces. However, IRS could not verify the coverage information reported by other taxpayers for 2014 because IRS did not require issuers of coverage (e.g., health insurers) and certain large employers to file Forms 1095-B and 1095-C until 2016 (for tax year 2015). In future filing seasons, IRS plans to use information reported on these forms during the post-filing compliance process to verify taxpayers' claims of health coverage from sources other than marketplaces, such as an employer, Medicare, or Medicaid.
Verification of exemption information. Verification of whether a taxpayer received approval for a marketplace-granted exemption is dependent on the marketplaces reporting this information to IRS. Marketplaces were required to include 2014 exemption data in their monthly EPDs due January 15, 2015. However, according to IRS officials, as of May 31, 2015, IRS had exemption data available for return processing for 13 of the 51 marketplace states and did not have exemption data for the remaining 38 states. Some marketplaces experienced delays in completing the review and approval process for applicants who applied for an exemption. The delay in the marketplaces' approval process for exemption applications left affected taxpayers without the information they needed to complete Form 8965, Health Coverage Exemptions, which taxpayers claiming a marketplace-granted exemption must attach to their tax return.
In light of the potential for pending marketplace exemption applications, in the instructions to Form 8965, IRS instructed affected taxpayers to write “pending” on the Form 8965; the instructions did not direct taxpayers to file amended tax returns once the marketplaces processed their exemption applications. In addition to delays in marketplace processing of exemption applications, there were delays in IRS receiving exemption data from the marketplaces. IRS experienced technical problems processing exemption data from the marketplaces. According to IRS officials, IRS systems wrongly returned an error code when the marketplaces transmitted exemption data to IRS. Initially, IRS advised the marketplaces that they could wait to submit the exemption data until IRS addressed this technical problem. In mid-March 2015 IRS advised the marketplaces to submit the data despite the problem. Verification of the shared responsibility payment. Verification of whether taxpayers owe an SRP, as well as how much they owe, is dependent on IRS having complete data on whether taxpayers had qualifying health care coverage for the full year, and whether they had an exemption for part or all of the year. Without information on health care coverage from issuers and large employers—on Forms 1095-B and 1095- C, respectively—and without information on exemptions granted by the marketplaces, IRS is unable to perform automated post-filing compliance procedures through comparing this third-party information to individual income tax return reporting. For tax year 2014 returns, IRS plans to perform limited post-filing compliance procedures for SRP in specific circumstances. For example, IRS may pursue collection procedures for taxpayers who do not pay the SRP amounts they report on their tax returns. In addition, IRS may review and recalculate self-reported SRP amounts for tax returns selected for examination for reasons other than the SRP. Silent returns. Some taxpayers filed 2014 tax returns that did not report health coverage information, claim an exemption, or report an SRP. IRS refers to these cases as silent returns. According to IRS officials, one reason for silent returns is that some taxpayers could be claimed as a dependent on another return. According to preliminary filing season data, as of May 28, 2015, more than 11 million taxpayers filed silent returns (see table 2). IRS officials estimated that about 58 percent of the silent returns posted as of the end of May 2015 were from filers who reported that they could be claimed as dependents on another return. One reason a taxpayer might file a silent return could be the first-year challenge of understanding the complex new shared responsibility reporting requirement. IRS goals include encouraging voluntary compliance with tax responsibilities and its mission includes enforcing the law with integrity and fairness to all. IRS has various tools that it can use to enhance taxpayer understanding and encourage compliance with new reporting requirements. According to IRS officials, IRS has solicited feedback from paid preparers and tax software companies and will consider their input to help determine changes to the Form 8965, Health Coverage Exemptions instructions that could facilitate more accurate reporting. Additional tools are also available. Most tax returns are filed using tax software and IRS provides guidance to tax software companies regarding changes to filing requirements. 
Officials from a tax software company told us that with clear policy instruction from IRS and sufficient time, they could design their tax software to prevent taxpayers from omitting required health coverage information on their returns. For tax year 2014 the company's tax software guided, but did not require, taxpayers to report health coverage information. IRS can also issue soft notices—IRS letters to taxpayers that are designed to serve as an educational tool and improve voluntary compliance, and generally do not require taxpayers to respond to IRS. In the past, IRS has used soft notices to correct errors and collect funds without initiating an examination. According to IRS officials, certain tools used to encourage compliance with other requirements are not available to address silent returns. Specifically, IRS has determined that it does not have the authority to reject tax returns at filing because they are silent, although IRS can reject returns that do not include other required information. Similarly, paid preparers do not need to meet due diligence requirements that are specific to the shared responsibility provision. IRS did, however, issue best practices for practitioners to gather necessary health coverage information to use in preparing 2014 tax returns for their clients, which stated that tax preparers are expected to exercise the same due diligence for reporting clients' health care coverage information as they would for nearly all other entries on a tax return. For example, for paper tax returns, IRS staff review each return to ensure that all forms and data needed to process the return are included, and IRS may correspond with taxpayers to obtain missing information. IRS estimated that more than 40 percent of the silent returns posted as of the end of May 2015 were from filers who did not report that they could be claimed as dependents on another return. This estimate raises questions about compliance with the shared responsibility reporting requirements. Assessing the costs and benefits of options for addressing taxpayer noncompliance with these requirements would help IRS determine what compliance actions would best support its goals to encourage voluntary compliance while enforcing the law with integrity and fairness to all.
Although IRS created a new system to verify PTC claims, incomplete and delayed marketplace data limited IRS's ability to use the system to verify claims at the time of return filing. To verify PTC claims on individual income tax returns, IRS relies on marketplace data to confirm a taxpayer was enrolled in a qualified plan. As illustrated in figure 3, at the time of filing (i.e., before any refund is issued), IRS's new Affordable Care Act Verification Service (AVS) system compares the information taxpayers reported on their tax return to information the marketplaces provided, potentially identifying math errors or discrepancies with the marketplace data. For example, AVS may identify cases where a marketplace reported that a taxpayer received advance PTC, but the taxpayer did not report it on his or her tax return. In another example, AVS may detect cases where taxpayers claim the PTC on their tax returns, but are ineligible because they were not enrolled in a marketplace plan. If IRS is unable to resolve such discrepancies through its initial process of corresponding with taxpayers, it may freeze all or part of the PTC-related refund, and refer the cases to examination for further review.
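To illustrate the kind of pre-refund matching just described, the sketch below compares simplified return records against marketplace records and flags the two discrepancy types mentioned above. The record layouts, identifiers, and dollar amounts are hypothetical and do not reflect IRS's or the marketplaces' actual data formats, thresholds, or business rules.

```python
# Hypothetical marketplace records, keyed by an illustrative taxpayer identifier.
marketplace_records = {
    "TIN-1": {"enrolled": True, "advance_ptc": 3600.00},
    "TIN-2": {"enrolled": True, "advance_ptc": 1200.00},
}

# Hypothetical, simplified return records.
tax_returns = [
    {"tin": "TIN-1", "advance_ptc_reported": 3600.00, "ptc_claimed": 3100.00},
    {"tin": "TIN-2", "advance_ptc_reported": 0.00,    "ptc_claimed": 0.00},     # advance PTC not reported
    {"tin": "TIN-3", "advance_ptc_reported": 0.00,    "ptc_claimed": 2500.00},  # no marketplace enrollment on file
]

def match_return(ret, marketplace):
    """Return a list of discrepancies between the return and the marketplace data."""
    issues = []
    record = marketplace.get(ret["tin"])
    if record is None:
        if ret["ptc_claimed"] > 0:
            issues.append("PTC claimed but no marketplace enrollment on file")
        return issues
    if abs(ret["advance_ptc_reported"] - record["advance_ptc"]) > 0.01:
        issues.append(f"Advance PTC mismatch: return {ret['advance_ptc_reported']:.2f} "
                      f"vs. marketplace {record['advance_ptc']:.2f}")
    return issues

for ret in tax_returns:
    print(ret["tin"], match_return(ret, marketplace_records) or "no discrepancy")
```

In IRS's process, returns with unresolved discrepancies of this kind may lead to correspondence with the taxpayer, a frozen refund, or referral to examination, as described above.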
The monthly EPD files serve as the principal source of information IRS uses to verify taxpayers’ PTC claims with the AVS system. To reconcile PTC information taxpayers report during the filing season, including advance PTC payments that were issued to insurers, IRS is dependent on receiving complete and accurate coverage data from the health insurance marketplaces in a timely manner. However, some health insurance marketplaces did not meet the deadlines for providing IRS with complete tax year 2014 health care coverage information and some of the information submitted was inaccurate. Timeliness of EPD submissions. Although the EPD submissions with full year 2014 data were due to IRS on January 15, 2015, the FFM and four SBMs did not transmit full year 2014 data until February 2015. By the end of the filing season on April 15, 2015, two SBMs had not transmitted full year 2014 data to IRS. Timeliness of 1095-A information. Several marketplace states also did not meet the February 2, 2015, deadline for submitting the annual 1095-A data to IRS. As of February 9, 2015, according to IRS data all but seven marketplace states had successfully transmitted 1095-A data to IRS. As of May 11, 2015, four SBMs had not successfully transmitted 1095-A data to IRS. One of these four states successfully transmitted the information during the week of May 12, 2015, almost a full month after the end of the filing season. As of June 8, 2015, the remaining three states had not successfully transmitted 1095-A information to IRS. Accuracy of marketplace data. Some marketplaces issued incorrect Form 1095-A information to taxpayers. CMS announced it had issued approximately 800,000 Forms 1095-A that included incorrect premium information used to calculate PTC. One SBM reported that it had issued approximately 100,000 Forms 1095-A with inaccuracies, including dates of coverage and number of family members enrolled, which are also used in calculating the amount of PTC. While IRS uses the more comprehensive EPD data transmission as the primary source of marketplace data for pre-refund compliance checks on PTC claims, in the absence of EPD information, according to IRS procedures, IRS may use the 1095-A data transmitted by the marketplaces to verify taxpayer PTC claims. According to IRS officials, IRS checks that the marketplace data meet formatting standards, but it does not check for accuracy of the data. State marketplace and CMS officials cited multiple factors that contributed to the delayed transmission of complete 2014 marketplace data to IRS. Some state officials acknowledged that as they approached the end of December 2014, they placed a higher priority on processing consumer applications for marketplace coverage for 2015, as opposed to ensuring they were ready to transmit the January EPD or 1095-A files to IRS. State and CMS officials also noted that issuing paper Forms 1095-A to taxpayers by the February 2, 2015, deadline was also a higher priority than the January data transmissions to IRS. In addition to these issues, some state marketplace officials noted that they were impacted by what they described as IRS’s limited windows of availability for testing the data exchange process in advance of the filing season. However, CMS officials responsible for the FFM told us that the availability of IRS systems for testing was not a problem for them. They stated that CMS did not have the resources to send both monthly and annual reporting in January. 
The CMS officials explained that CMS may experience similar challenges next year. They said that, although the resource constraints were in part related to the first-year development of the information technology systems, the volume of processing needed to transmit both monthly and annual reports in January presents a longer-term challenge. Delays in IRS processing of marketplace EPD submissions also contributed to IRS having incomplete marketplace information available for AVS matching with tax returns for most of the filing season. According to IRS officials, once IRS receives the marketplace submissions, IRS must process the data to make them available for matching. IRS officials told us that IRS processing of the data submissions it received in February was delayed because IRS needed to implement an internal technical system change during that time. Therefore, complete data from the majority of marketplaces was not available for matching until the last week in March 2015. Specifically, as of March 21, 2015, according to IRS documentation, IRS had processed and made available for verification of taxpayer PTC claims complete coverage data for the entire January 2014 to December 2014 tax year for 4 of the 51 marketplace states. Incomplete data, which did not include data for the month of December 2014, were processed and available for 38 marketplace states. For the remaining 9 marketplace states, no usable 2014 EPD information was processed and available for PTC verification. The marketplace data that were processed and available to IRS for matching as of March 21, 2015, provided PTC information on approximately 3.1 million tax households; IRS expected that about 4.4 million tax households would be represented once the marketplace information was complete. In late March 2015, according to IRS documentation, it had processed additional EPD files to make them available for matching so that, as of March 29, 2015, IRS had complete 2014 EPD data available for 46 of the 51 marketplace states, but it had incomplete data for two states and no EPD data for the other three states. As of July 7, 2015, IRS reported it had complete 2014 data available for 50 marketplace states and incomplete data for one state. Even though IRS did not have complete tax year 2014 EPD information available for verification of taxpayer PTC claims, it was able to begin enforcing PTC compliance at the start of the 2015 filing season in January 2015. IRS implemented contingency plans—such as using other available data and corresponding with taxpayers—to compensate for missing marketplace data. Although these contingency processes enabled IRS to process tax returns with PTC claims without complete marketplace data, the processes did not fully mitigate the impacts on taxpayers. In the absence of complete EPD from all marketplaces, early in the filing season IRS suspended processing more than 24,000 tax returns for a week or more. In prior work we found that using the correspondence process for resolving discrepancy cases imposes a burden on taxpayers as it requires them to provide documentation in response to IRS’s post-filing notices and then wait for IRS to communicate the results of its review. We also previously found that this often results in a lengthy process during which, in a substantial portion of cases, refunds are delayed until the audit issues are resolved. 
To help mitigate challenges faced by taxpayers affected by delayed or inaccurate Forms 1095-A, Treasury issued guidance providing various types of penalty relief. In addition, IRS issued guidance to taxpayers affected by delayed and inaccurate Forms 1095-A. For example, taxpayers who filed their returns based on the first Form 1095-A they received (i.e., prior to notification from the marketplace that the form was incorrect) were informed that they were not required to file an amended return once they received the corrected form, unless the updated marketplace information indicated that they were not actually enrolled in a qualified plan or were otherwise ineligible for the PTC. Taxpayers who had received the initial Form 1095-A but not the corrected one they were expecting were advised to file by April 15, 2015, using either the initial form or the corrected Form 1095-A, if available. Taxpayers who had not received any Form 1095-A at all were instructed to request an automatic filing extension by April 15, 2015, and to then file their returns as soon as they received the form from the marketplace.
IRS's goals include effectively enforcing compliance with tax laws, reducing taxpayer burden, and encouraging voluntary compliance. As discussed above, it is unclear whether the challenges IRS faced in getting complete and accurate data from the marketplaces in time to conduct pre-refund verification of taxpayer PTC claims were a 1-year problem or if they will be ongoing. Without understanding which challenges may be ongoing, the effects of the problem, and options for correcting it, IRS is missing information that could help it design contingency plans to efficiently and effectively process tax returns with PTC claims while mitigating the burden on taxpayers. Even with complete marketplace information, IRS is faced with contacting taxpayers for additional documentation or opening an examination to resolve discrepancies between PTC information on tax returns and the marketplace data. In cases of a discrepancy with the marketplace data, IRS does not have the authority to automatically correct the tax return and simply notify the taxpayer of the change, as it does in other circumstances where math error authority applies. The administration submitted legislative proposals for fiscal years 2015 and 2016 that, among other things, would establish a category of correctable errors. Under the proposals, Treasury would be granted regulatory authority to permit IRS to correct errors in cases where information provided by a taxpayer does not match corresponding information provided in government databases. Congress has not granted this broad authority. IRS's goals include effectively enforcing compliance with tax laws while reducing taxpayer burden and encouraging voluntary compliance. Correctable error authority could help IRS meet its goals for timely processing of tax returns, reduce the burden on taxpayers of responding to IRS correspondence, and reduce the need for IRS to resolve discrepancies in post-refund compliance, which, as we previously concluded, is less effective and more costly than at-filing compliance. IRS has not sought correctable error authority specific to correcting errors in cases where information provided by a taxpayer does not match corresponding marketplace information. Correcting tax returns at-filing based on marketplace data would not depend on IRS accelerating deadlines for information returns because—unlike some other information returns, such as the Form W-2—marketplace data are due to IRS early in the filing season. Furthermore, IRS is already using the marketplace information for pre-refund matching. However, given the problems with the completeness and accuracy of the marketplace data IRS had available to verify PTC information on 2014 tax returns, if IRS were granted correctable error authority, it would be important that the third-party data used for matching are complete and accurate.
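The sketch below illustrates the correctable error concept in the PTC context only; Congress has not granted this authority, and the records, amounts, and notice text are hypothetical. It shows how a processing system could, in principle, conform a return to reliable third-party data and notify the taxpayer rather than open correspondence or an examination.

```python
# Purely illustrative: IRS does not have this authority, and the records and amounts are hypothetical.
def apply_correctable_error(ret, marketplace_amount):
    """If reported advance PTC disagrees with the marketplace record, correct the return,
    recompute the net PTC, and generate a notice describing the change."""
    reported = ret["advance_ptc_reported"]
    if abs(reported - marketplace_amount) <= 0.01:
        return ret, None  # no change needed
    corrected = dict(ret, advance_ptc_reported=marketplace_amount)
    corrected["net_ptc"] = corrected["ptc_claimed"] - marketplace_amount
    notice = (f"Your reported advance premium tax credit of ${reported:,.2f} was adjusted to "
              f"${marketplace_amount:,.2f} to match marketplace records; your refund or balance "
              f"due was recomputed accordingly.")
    return corrected, notice

original = {"tin": "TIN-9", "advance_ptc_reported": 1200.00, "ptc_claimed": 1500.00}
corrected, notice = apply_correctable_error(original, marketplace_amount=1800.00)
print(corrected["net_ptc"])  # -300.0: excess advance PTC that would be repaid, subject to any applicable cap
print(notice)
```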
Correcting tax returns at-filing based on marketplace data would not depend on IRS accelerating deadlines for information returns because—unlike some other information returns, such as the Form W-2—marketplace data are due to IRS early in the filing season. Furthermore, IRS is already using the marketplace information for pre-refund matching. However, given the problems with the completeness and accuracy of the marketplace data IRS had available to verify PTC information on 2014 tax returns, if IRS were granted correctable error authority, it would be important that the third- party data used for matching are complete and accurate. IRS and CMS each have oversight responsibilities for the administration of the advance PTC. IRS is responsible for enforcing tax laws and determines filing requirements. IRS requires recipients of advance PTC payments to file tax returns and to accurately report those payments on their tax return. According to IRS officials, IRS and CMS are part of an interagency working group assessing the risk for improper payments from the PTC account and CMS is responsible for overseeing the marketplaces and for the risk assessment for the advance PTC payments. However, IRS does not have complete information that would help support its oversight responsibilities. Because marketplace data are incomplete, CMS has not provided IRS the total amount of advance PTC payments made for 2014 marketplace policies. Unless CMS shares complete and accurate information on advance PTC payments made to health insurers, IRS will not know the size of the gap between the payments made and payments reported. Without knowing the size of this gap, IRS does not know the extent of noncompliance with the requirement for recipients of advance PTC payments to accurately report those payments on their tax return, a measure that could help IRS assess the effectiveness of its education, outreach, and compliance efforts. Although IRS does not know the size of the gap, it has preliminary information on the total advance PTC payments made and total advance PTC payments reported by taxpayers, as follows. Total advance PTC payments made. According to CMS data, in 2014, advance PTC payments totaled almost $15.5 billion. Because some advance PTC payments made in 2014 could be for 2015 policies and some payments made in 2015 could be for adjustments to 2014 policies, the amount paid during 2014 does not indicate the precise amount of advance PTC that was paid for 2014 policies and should be reconciled on 2015 tax returns. CMS and the marketplaces are required to provide these data to IRS through EPD and 1095-A data. But, as discussed above, they have not yet provided complete and accurate data to IRS for advance PTC payments made for coverage year 2014. Until it receives complete and accurate marketplace data, IRS does not know the baseline for the total amount of advance PTC that taxpayers should have reported on 2014 tax returns. Total advance PTC payments reported by taxpayers. IRS expected 4.4 million taxpayers to file PTC claims for the 2014 tax year. As of May 28, 2015, 2.8 million filers had reported advance PTC payments totaling almost $10.1 billion on their tax returns (see table 3). Approximately 2.6 million filers had calculated and claimed PTC based on their actual income and family size; these PTC claims totaled $8.9 billion. The number of filers claiming PTC was smaller than the number of filers reporting advance PTC because some taxpayers who reported advance PTC payments did not claim PTC. 
These taxpayers may have been eligible for advance PTC based on their estimated income, but ineligible for PTC based on the actual income reported on their tax return. According to preliminary IRS filing season data, as of May 28, 2015, more than half of taxpayers who received advance PTC had advance PTC greater than the PTC they claimed when they calculated it on their tax returns. Working with CMS to obtain the total amount of advance PTC paid for tax year 2014 would better position IRS to determine the amount of the gap between advance PTC paid and advance PTC reported. Continuing to obtain this information and tracking such gaps in the future could also better position IRS to evaluate the effectiveness of its PTC education and compliance efforts. IRS issued most, but not all, of the forms and guidance related to the shared responsibility and PTC provisions in time for the filing season. Leading up to the 2015 filing season IRS also issued several other types of guidance and information for taxpayers and tax preparers on the PTC and shared responsibility provision filing and payment requirements. These included publications, brochures, e-mail bulletins, extensive website information, videos, and webcasts. During the filing season IRS also issued multiple updates to guidance for claiming coverage exemptions related to the shared responsibility provision. Although IRS finalized Form 8965, Health Coverage Exemptions, and the Form 8965 instructions by December 2014, IRS subsequently issued additional updates to the instructions, first in late January 2015 and then again in late February 2015. The January update clarified whether certain types of exemptions could be claimed on a tax return or must be granted by the marketplace. The February update clarified the requirements for claiming a coverage exemption for a gap in marketplace coverage during early 2014. With respect to PTC guidance, IRS did not finalize some key PTC-related instructions in time for tax software developers, taxpayers, and tax preparers to have complete information at the beginning of the 2015 filing season. Specifically, IRS did not release Publication 974, Premium Tax Credit, until late February 2015. The publication released in February included several worksheets some taxpayers needed to compute their PTC, but did not cover certain other situations taxpayers may face. To incorporate guidance for those additional scenarios, IRS released an updated version of Publication 974 on March 30, 2015. According to IRS, this publication was delayed due to the high-level coordination with Treasury, Chief Counsel, and business units and IRS’s Services and Enforcement Affordable Care Act Office. Stakeholders representing tax software developers and return preparation services told us that the delay was problematic because software developers were unable to finalize programming for the worksheets that taxpayers would need to calculate both the self-employed health insurance deduction and PTC. IRS officials told us that developing the guidance as part of the first year of implementing PTC requirements was very complicated and they do not expect subsequent annual updates to take as much time. IRS’s preliminary efforts to evaluate performance for the shared responsibility and PTC provisions have involved tracking statistics and developing a high-level strategic roadmap to align strategic goals, objectives, and measures. 
Consistent with performance management principles related to using actual performance information to establish a performance measurement system, IRS is tracking key statistical measures related to reporting on compliance with health care coverage requirements and processing tax returns claiming PTC. IRS plans to analyze these data and other performance information following the 2015 filing season. Consistent with performance management leading practices, IRS's strategic roadmap aims to ensure alignment of Patient Protection and Affordable Care Act (PPACA) strategic goals, objectives, and measures at the program and agency-wide levels. Collection and analysis of performance data. IRS considers the 2015 filing season—when taxpayers file their 2014 returns—a baseline in terms of identifying tax return processing issues and taxpayer behaviors associated with the new individual PPACA provisions. According to IRS officials, analyzing statistics for first-year implementation in these areas may help IRS establish the basis for defining specific performance goals and measures for assessing the effectiveness of SRP and PTC implementation, as appropriate, in the future. Some examples of statistics IRS is tracking include the following: the volume of returns claiming health care coverage, an exemption from coverage, and silent returns; the volume and dollar amounts associated with tax returns claiming PTC, including whether or not the taxpayer reported receiving an advance PTC; the extent to which repayments of excess advance PTC are limited by the statutory cap; and the volume of different types of math errors and third-party discrepancies identified through matching of PTC claims to marketplace data. IRS also identified 32 different research priorities, which may help it develop performance goals and measures for the shared responsibility and PTC provisions in the future, if IRS determines that provision-specific measures are appropriate. These research priorities include examining taxpayers' behavior related to the advance PTC reconciliation process, as well as characteristics of taxpayers who self-assessed the SRP or filed silent returns. IRS plans to use the results of its evaluations of taxpayer behavior to help inform IRS compliance initiatives such as IRS's taxpayer education efforts. According to IRS officials, IRS has not typically defined performance goals specific to new legislative requirements such as the shared responsibility and PTC provisions, but instead incorporated monitoring of new requirements into overall performance goals. Agency officials stated that they will evaluate whether this would be appropriate for SRP and PTC once they have analyzed operational baseline data from the first year of implementing these provisions. IRS's initial efforts to build toward defining specific goals and measures for implementing the SRP and PTC provisions are consistent with performance management principles related to using actual performance information to establish a performance measurement system. We have reported on the importance of agencies using performance information to help identify problems in existing programs, as well as the causes of problems and potential corrective actions. The principles described in our prior work and the Government Performance and Results Act Modernization Act of 2010 (GPRAMA) can serve as a framework of leading practices that may be applied at the PPACA program level, including implementation of the individual shared responsibility and PTC provisions.
Alignment of goals, objectives, and measures. IRS articulated the early vision for implementing PPACA requirements through its Strategic Roadmap, issued in January 2012. The Roadmap described a high-level vision for implementation of all PPACA provisions and included broad goals and objectives that were aligned with IRS's strategic goals and objectives. It outlined a phased approach for continued refinement of the core capabilities and specific initiatives that would help IRS achieve its goals, specifically: defining mission, vision, and guiding principles; defining goals, objectives, and capabilities; conducting gap analysis and identifying next steps; developing the roadmap; reviewing and refining capabilities; and prioritizing capabilities. Following the issuance of its Strategic Roadmap, IRS moved forward in preparation for implementing PPACA by focusing on five core capability areas that would enable it to meet its objectives: governance and planning, pre- and at-filing operations, compliance, customer service, and stakeholder relations. IRS expanded its planning for PPACA implementation in each core area through the development of high-level action plans and cross-cutting collaborative efforts that involved the IRS divisions and units that would be responsible for implementing specific PPACA-related processes. The high-level action plans generally addressed implementation of multiple PPACA provisions, although some portions focused on defining processes related to PTC, such as the annual redetermination of eligibility for the advance PTC. More recently, IRS demonstrated its continued recognition of the importance of aligning performance goals and measures for PPACA. While IRS has not yet defined specific performance goals and measures related to the shared responsibility and PTC provisions, in its preliminary 2015 ACA Filing Season Status Reports, IRS identified tentative linkages between agency-wide strategic goals and the preliminary shared responsibility and PTC-related high-level statistics that are being monitored. In planning for implementation of PPACA requirements, IRS followed performance management leading practices related to aligning agency-wide strategic goals, objectives, and measures with program-level goals and measures. For example, IRS's Strategic Roadmap described the linkages between the strategic goal of supporting voluntary compliance while protecting the tax system from fraud and other noncompliance and objectives including the use of data-driven strategies to continuously enhance prevention, detection, and treatment of fraud and abuse. In our prior work, we found that aligning agency-wide goals, objectives, and measures is another leading practice that can enhance or facilitate the use of performance information for management decision making. This alignment should reflect a cascading or hierarchical linkage moving from top management down to the operational level. Successful implementation of the PTC and individual shared responsibility tax provisions requires IRS collaboration with CMS and the marketplaces. It also requires communication with other stakeholders, such as tax software companies, employers, and health insurers. IRS worked to collaborate and communicate with external stakeholders to implement PPACA requirements for tax year 2014.
For example, IRS and CMS developed written guidance and agreements that describe common goals and lay out standards and timelines for IRS and the marketplaces to provide key information to one another, including the EPD transmissions and Form 1095-A information. Further, SBM officials we talked to reported that, overall, IRS provided excellent technical support. For example, marketplace officials told us that IRS coordinated with them through regular, frequent meetings, webinars, training sessions, and office hours that covered security, technical, and policy issues. IRS also provided the marketplaces with documentation of technical requirements for the submission of EPD and Form 1095-A information and project plans with timelines. Officials from two marketplaces said IRS was flexible and worked with them regarding timelines for testing data transmissions. Despite these efforts, external stakeholders we spoke with reported some challenges with IRS collaboration and coordination efforts related to implementation of the PPACA provisions for tax year 2014. As previously discussed, IRS experienced challenges in obtaining timely, complete, and accurate information from the marketplaces and in providing timely guidance to taxpayers and tax preparers. Three of the five SBMs we spoke with told us that limited availability of IRS systems for marketplaces to test monthly data transmissions was a challenge. Other stakeholders we spoke with, including associations representing tax preparers and tax software developers, also said that late forms, instructions, and guidance created challenges in helping their clients with the new PPACA requirements. IRS initially planned to assess the effectiveness of its efforts to collaborate and communicate with key external stakeholders for the implementation of the 2014 PPACA requirements. This assessment was to help inform IRS efforts for the implementation of the 2015 requirements. However, IRS has not made progress on that assessment. Initial IRS performance plans included plans to assess IRS coordination with different types of external stakeholders. Also, IRS officials told us that an important area for post-filing analysis involves reviewing the effectiveness of key external stakeholder relationships, such as coordination with the state health insurance marketplaces. In prior work we identified key practices that can help sustain collaboration among agencies, including developing mechanisms to evaluate the results of collaborative efforts. IRS received feedback on collaboration efforts on an ad hoc basis from at least one external stakeholder group. However, IRS did not solicit such feedback in a comprehensive way and IRS does not have a documented plan, with timelines, for assessing the effectiveness of its efforts to collaborate with key external stakeholders. In March 2015, officials from IRS's Services and Enforcement Affordable Care Act Office told us that they plan to discuss with SBM officials which aspects of the first year implementation of PPACA requirements went well and which processes could be improved. IRS identified this as part of its plan to catalog lessons learned during the first year of implementation of the shared responsibility and PTC provisions to help improve future coordination with the states. However, as of May 1, 2015, IRS had not initiated any analysis in this area and IRS officials did not indicate when they expect to initiate this analysis.
Without an assessment of its efforts to collaborate and communicate with key external stakeholders, challenges in implementing the 2014 PPACA requirements that relied on these groups could also affect new requirements taking effect in 2015. Although IRS is evaluating opportunities for improving return processing and taxpayer experience related to the shared responsibility and PTC provisions, it is not evaluating its collaboration efforts. The new requirements—listed below—affect a broader group of stakeholders, including health insurers and certain employers, and IRS is developing new systems and guidance to implement the requirements. Some external stakeholders we spoke with raised concerns about the readiness of IRS and stakeholders to implement the new requirements smoothly. New health coverage reporting requirements. As of April 3, 2015, IRS's ACA Enterprise Risk Register listed delays to the information return program as significant, ongoing risks based on the readiness of reporting entities and delays in the IRS guidance for software developers. According to IRS officials, although larger software developers will be able to accommodate changes on shorter time frames, smaller software developers may have more challenges. Several stakeholders told us that they need finalized forms and guidance at least 12 months in advance to be able to develop the software they will need to track and report information. However, IRS has not yet finalized guidance for Forms 1095-B and 1095-C that insurers and large employers, respectively, will be required to file starting with the 2016 filing season. IRS issued instructions for these new information returns in early February 2015. However, the instructions for these forms are incomplete. The instructions for insurer reporting do not list the due date for insurers to provide the form to covered individuals. Furthermore, the instructions for both the insurer and employer returns lack information on the process to be used for issuing corrected forms if needed, indicating that this information is "Reserved." IRS officials told us that they are updating the forms and instructions for tax year 2015 and that the updated instructions will list the due dates. As of June 2015, Publication 5165, Guide for Electronically Filing Affordable Care Act Information Returns for Software Developers and Transmitters, was also still under development. This guidance is to provide software developers and electronic transmitters of these returns specific protocols, formats, business rules, and validation procedures. IRS issued a draft version of Publication 5165 on April 25, 2015, and an updated draft on June 9, 2015. The draft version of the publication includes information on issuing corrected forms. As of the end of May, IRS expected to begin testing the electronic submission process for the information returns with external organizations in July 2015. Marketplace exemption reporting. If the marketplaces are unable to process exemption applications and report exemption information to IRS in a timely way, IRS will not have some of the information it needs to verify taxpayer exemption claims for tax year 2015. Stakeholders told us that the paper-based marketplace exemption process is complex and cumbersome. Furthermore, SBMs that had the FFM process their exemption applications and submit exemption information to IRS for the 2014 coverage year are required to process exemptions and submit the information on their own beginning in 2016.
Officials from one of the five SBMs we spoke with said that the marketplace would establish a system for processing exemptions once more information was available from CMS. But at the time we spoke with the officials in January 2015, CMS had not provided this guidance. As of late April 2015, IRS was also unable to provide this information. In July 2015, HHS officials told us that HHS plans to release guidance relieving the SBMs of exemption reporting for 2016. Information on advance PTC recipients who fail to file a tax return. Marketplace officials we interviewed said that IRS needs to clarify how it will provide information to the marketplaces about whether or not an individual who received advance PTC in the past filed a tax return, as required. The marketplaces will need this information because individuals who receive advance PTC and fail to file a tax return reporting their advance PTC are ineligible for future advance PTC payments. In May 2015, IRS officials told us that they began discussing this issue with CMS officials in early March 2015, and that they also plan to meet with the SBMs. IRS officials said they expect to be able to provide information to the marketplaces to implement this provision by August 31, 2015, in time for the 2016 coverage year marketplace open season. IRS has implemented 13 recommendations we previously made to IRS related to the implementation of PPACA provisions. Table 4 summarizes the recommendations implemented by IRS. Two of our other PPACA-related recommendations to IRS remain open, though IRS has made some progress in implementing them. As shown in table 5, IRS implementation of these recommendations would foster accountability and provide information to help budget decision makers. Implementation of the new PPACA tax provisions for tax year 2014 was a broad and complex undertaking. IRS developed new systems, processes, forms, instructions, and educational materials and worked closely with external stakeholders to facilitate implementation of the shared responsibility and PTC provisions. However, IRS experienced various challenges related to the availability of complete and accurate marketplace data and taxpayer compliance with the new requirements. Additional IRS actions could help improve IRS oversight of these tax provisions. Assessing the costs and benefits of options for addressing taxpayer noncompliance with the requirements of the shared responsibility provision in the 2016 filing season would help IRS determine what compliance actions would best support its goals to encourage voluntary compliance while enforcing the law with integrity and fairness to all. Determining whether the problems with the marketplaces providing—and IRS using—complete, timely, and accurate marketplace data were a first-year challenge that is unlikely to recur or are a longer-term challenge would help IRS plan for processing tax returns with PTC claims for future tax years. If the problems are expected to be an ongoing challenge, assessing their effects and correction options would help IRS better target contingency plans and assess the trade-offs among any correction options. Assessing whether or not the data received from the marketplaces are sufficiently complete and reliable would help IRS better understand the potential benefits and risks of using the data to correct tax returns at filing. The administration has requested that Congress give IRS broad authority to correct tax returns based on matching with government databases.
However, Congress has not granted that authority. If an IRS assessment of the marketplace data determined that corrections based on matching with those data would be effective, seeking more specific authority to make such corrections at the time of filing for tax returns with PTC claims could reduce the cost and burden of needing to use the correspondence process to resolve discrepancies. It also would be an effective first step in demonstrating the benefits of correctible error authority. Establishing as a baseline the aggregate amount of the gap between the advance PTC paid and advance PTC reported for the 2014 tax year, and tracking this statistic in future years, would help IRS evaluate the overall effectiveness of the measures it takes to educate taxpayers and tax preparers about the requirements of the PTC tax provision, as well as the effectiveness of its PTC compliance strategy. Evaluating its efforts to collaborate and communicate with key external stakeholders, such as CMS, the marketplaces, tax software companies, and employers would help IRS assess whether changes to its collaboration and communication practices could help it avoid the kinds of challenges it faced in tax year 2014 as it implements the new PPACA provisions in tax year 2015. Without such an evaluation, IRS increases the risk of problems implementing the 2015 requirements that rely on IRS coordination and communication with key stakeholders. To strengthen oversight of the individual shared responsibility and premium tax credit provisions, we recommend that the Commissioner of Internal Revenue take the following five actions: Assess the costs and benefits of compliance options, such as soft notices, that could be used beginning in the 2016 filing season to address the problem of tax returns that do not include at least one of the following: indication of full-year health care coverage, claim of an exemption from the requirement to have coverage, or report of a shared responsibility payment, as required. Assess whether the challenges in getting complete and accurate marketplace data in time to conduct pre-refund verification of taxpayer PTC claims are a single year or an ongoing problem and, if they are an ongoing problem, assess the effects of the problem and options for correcting it. Assess whether or not the data received from the health insurance marketplaces are sufficiently complete and accurate to enable effective correction of tax returns at-filing based on matching with the marketplace data and, if the assessment determines that such corrections would be effective, seek legislative authority to correct tax returns at-filing based on the marketplace data. Work with CMS to get the total amount of advance PTC paid for the 2014 tax year and establish, as a baseline, the aggregate amount of the gap between advance PTC paid and advance PTC reported for the 2014 tax year, and track this aggregate gap for future tax years to help in evaluating the effectiveness of IRS’s PTC education and compliance efforts. Evaluate IRS efforts to collaborate and communicate with key external stakeholders to inform efforts related to implementation of the new 2015 PPACA requirements. We provided a draft of this report to the Commissioner of Internal Revenue and the Secretary of Health and Human Services for comment. IRS provided written comments on a draft of the report, which are reprinted in appendix II. IRS and HHS also provided technical comments, which we incorporated, as appropriate. 
IRS generally agreed with our recommendations. IRS agreed with four of our five recommendations and agreed in part with our recommendation to work with CMS to get the total amount of advance PTC paid for the 2014 tax year and establish, as a baseline, the aggregate amount of the gap between advance PTC paid and advance PTC reported for the 2014 tax year, and track this aggregate gap for future tax years. IRS agreed to analyze reporting of advance payments of the PTC by the marketplaces. According to IRS, the results of this analysis and other efforts will help inform the IRS of potential areas for improvement in education, tax filing, and compliance activities. We continue to believe IRS should track the aggregate gap between advance PTC paid and advance PTC reported. We are sending copies of this report to the appropriate congressional committees, the Commissioner of Internal Revenue, the Secretary of the Treasury, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this report are to (1) assess what the Internal Revenue Service (IRS) has done and plans to do to implement the premium tax credit (PTC) and individual shared responsibility tax provisions; (2) determine the extent to which IRS goals for these tax provisions are linked to performance measures to evaluate program success; (3) assess IRS collaboration with government and private sector entities to implement and enforce these tax provisions; and (4) describe IRS's progress in implementing our past recommendations on Patient Protection and Affordable Care Act (PPACA) implementation. To address these objectives, we reviewed IRS plans and processes, including tax forms, instructions, and guidance; the Services and Enforcement Affordable Care Act Office Project Management Plan and Affordable Care Act Strategic Roadmap; and relevant laws and Department of the Treasury regulations. For information on key IRS information technology systems, we drew on information from our recent work on this issue. We also reviewed Department of Health and Human Services' Centers for Medicare & Medicaid Services (CMS) reports and planning documents on marketplace enrollment and disbursements of advance PTCs. We interviewed officials from IRS, CMS, the Information Reporting Program Advisory Committee—an IRS advisory group made up of tax professionals—and nongeneralizable samples of five State-based Marketplaces (SBM) and eight external stakeholder groups. We selected marketplaces and external stakeholder groups to get a variety of perspectives. The SBMs we selected varied by number of enrollees and by the percentage of enrollees with financial assistance. We also selected the only SBM that processed its own 2014 exemption applications, one SBM for which the Federally-facilitated Marketplace (FFM) reported 2014 marketplace enrollment information to IRS, and one SBM that reported its own enrollment information for 2014 but switched to using the FFM for 2015.
To get a variety of perspectives from external stakeholders, we selected groups representing tax software companies, tax preparers, employers, and taxpayers—including a group representing taxpayers broadly and a group focused on serving low-income taxpayers. We analyzed preliminary summary IRS 2015 filing season data related to the premium tax credit and individual shared responsibility provisions. We assessed the reliability of the data by reviewing related documentation, testing the data for errors, and interviewing IRS officials. We determined that the data were sufficiently reliable for the purposes of this report. We assessed IRS implementation activities and plans using IRS goals as criteria. We assessed IRS efforts to collaborate with partner agencies and key external stakeholder groups using criteria from IRS goals and plans as well as criteria on interagency collaboration reported in our prior work. These criteria include establishing (1) mutually reinforcing or joint strategies, (2) compatible policies and procedures to operate across agency boundaries, and (3) written guidance and agreements. We assessed IRS performance goals and measures for implementation of the PPACA individual shared responsibility mandate using criteria on leading practices for performance management described in our prior work and drawing from the broad agency performance requirements under the Government Performance and Results Act Modernization Act of 2010. We conducted this performance audit from July 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Jeff Arkin, Assistant Director; Susan E. Murphy, Ellen Rominger, Amy Bowser, Stephanie Chen, Nina Crocker, John E. Dicken, Raheem Hanifa, Maria Hasan, Sherice Nelson, Robert Robinson, and Cynthia Saunders made key contributions to this report. | Tax year 2014 marked the first time individual taxpayers were required by the Patient Protection and Affordable Care Act (PPACA) to report health care coverage information on their tax returns. Taxpayers reported on whether they had health care coverage, had an exemption from the coverage requirement, or owed a tax penalty (the SRP). Most taxpayers who received coverage through a health insurance marketplace were also eligible for an advance PTC to make their coverage more affordable. Marketplace customers can choose to have the PTC paid in advance to their insurance company or may claim all of the credit when they file their tax returns. GAO was asked to review IRS implementation of the individual shared responsibility and PTC tax provisions. Among other objectives, this report examines (1) IRS's implementation of these PPACA requirements; and (2) IRS efforts to collaborate with key external stakeholders. To address these objectives, GAO reviewed documents from IRS and CMS; analyzed preliminary 2014 tax year data; and interviewed officials from IRS, CMS, marketplaces and other key external stakeholders, such as tax preparers and tax software companies. 
In January 2015, the Internal Revenue Service (IRS) began verifying taxpayers' premium tax credit (PTC) claims using marketplace data on enrollments and advance payments of the PTC. IRS is using its standard examination processes to check the coverage, exemption, or shared responsibility payment (SRP) information taxpayers report. IRS's overall goals are to efficiently and effectively enforce compliance with tax laws, reduce taxpayer burden, and encourage voluntary compliance. Incomplete and delayed marketplace data limited IRS's ability to match taxpayer PTC claims to marketplace data at the time of return filing. Complete marketplace data for the 2014 coverage year were due to IRS in January, but due to marketplace delays in transmitting the data and IRS technical difficulties with processing the data for matching, as of March 21, 2015, IRS had complete data available for verification of taxpayer PTC claims for 4 of the 51 marketplace states (i.e., the 50 states and the District of Columbia). IRS does not know whether these challenges are a single year or an ongoing problem. According to IRS officials, IRS checks the formatting, but not the accuracy of the data. Although IRS implemented contingency plans to compensate for missing and inaccurate data, those processes were more burdensome for taxpayers. Assessing whether the problems with the timeliness and reliability of the marketplace data are expected to be an ongoing challenge, rather than just a first-year problem, would help IRS understand how it can use the data effectively and better target contingency plans. IRS does not know the total amount of advance PTC payments made to insurers for 2014 marketplace policies because marketplace data are incomplete. Without this information, IRS does not know the aggregate amount of advance PTC that taxpayers should have reported on 2014 tax returns. Thus, IRS does not know the size of the gap between advance PTC paid and reported or the extent of noncompliance with the requirement for recipients of advance PTC payments to accurately report those payments on their tax return, a measure that could help IRS assess the effectiveness of its education, outreach, and compliance efforts. Successful implementation of the PTC and individual shared responsibility tax provisions requires IRS collaboration with the Centers for Medicare & Medicaid Services (CMS)—which is responsible for overseeing the marketplaces—and the marketplaces, and communication with other stakeholders, such as tax software companies, employers, and health insurers. IRS worked to collaborate and communicate with external stakeholders to implement PPACA requirements for tax year 2014. However, several external stakeholders GAO spoke with reported challenges with IRS collaboration efforts, such as not receiving certain IRS guidance in time for stakeholders to have complete information at the beginning of the filing season. IRS is evaluating opportunities for improving return processing and the taxpayer experience, but is not evaluating its collaboration efforts. Without an assessment of its efforts to collaborate and communicate with key external stakeholders, challenges in implementing the 2014 PPACA requirements that relied on these groups could also affect new requirements taking effect in 2015, including new information reporting requirements for the State-based Marketplaces, issuers of coverage, and applicable large employers. 
GAO recommendations include that IRS (1) assess whether marketplace data delays are an ongoing problem, (2) assess the reliability of the data for IRS matching, (3) work with CMS to get complete data and track the aggregate gap between advance PTC paid and reported, and (4) evaluate its collaboration efforts. IRS generally agreed with GAO's recommendations. |
The Small Business Act created SBA to aid, counsel, assist, and protect the interests of small business concerns. The first version of Section 7(a) of the act empowered SBA to make loans to small businesses with the restriction that "no financial assistance shall be extended … unless the financial assistance applied for is not otherwise available on reasonable terms." While there have been numerous amendments to Section 7(a), the credit elsewhere restriction has remained, with slight modifications. For instance, the phrase "credit elsewhere" was introduced in 1981 and the provision was changed to read that "[n]o financial assistance shall be extended pursuant to this subsection if the applicant can obtain credit elsewhere." At the same time, a definition of credit elsewhere was added. The 7(a) program's legislative history emphasizes the program's role in meeting the credit needs of certain small businesses. The legislative basis for the program recognizes that the conventional lending market is the principal source of financing for small businesses and that the loan assistance that SBA provides is intended to supplement rather than compete with that market. As the legislative history suggests, conventional lending may not be a feasible financing option for some small businesses under certain circumstances. The design of the 7(a) program is consistent with the statute and its legislative history. First, the loan guarantee limits the lender's risk in extending credit to a small firm that may not have met the lender's own requirements for a conventional loan. Second, the credit elsewhere requirement is intended to provide some assurance that guaranteed loans are offered only to firms that are unable to access credit on reasonable terms and conditions in the conventional lending markets. Third, an active secondary market for the guaranteed portion of a 7(a) loan allows lenders to sell the guaranteed portion of the loan to investors, providing additional liquidity that lenders can use for additional loans. Under the 7(a) program, SBA guarantees loans made by commercial lenders to small businesses for working capital and other general business purposes. These lenders are mostly banks, but some are nondepository lenders, including small business lending companies (SBLC)—nondepository lenders previously chartered by SBA to provide 7(a) loans to qualified small businesses. The guarantee assures the lender that if a borrower defaults on a loan, the lender will receive an agreed-upon portion (generally between 50 percent and 85 percent) of the outstanding balance. For a majority of 7(a) loans, SBA relies on lenders with delegated authority to process and service 7(a) loans and to ensure that borrowers meet the program's eligibility requirements. To be eligible for the 7(a) program, a business must be an operating for-profit small firm (according to SBA's size standards) located in the United States and meet the credit elsewhere requirement, including the personal resources test. SBA is not authorized to extend credit to businesses if the financial strength of the individual owners or the firm itself is sufficient to provide or obtain all or part of the financing or if the business can access conventional credit.
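As context for the credit elsewhere discussion that follows, the risk-sharing effect of the guarantee described above can be sketched in a few lines. The function name and dollar figures are illustrative assumptions; the sketch ignores collateral recoveries, fees, and the loan-size rules that determine which guarantee percentage applies to a particular loan.

```python
# Minimal sketch (hypothetical names): how an SBA guarantee limits a lender's
# exposure on a defaulted 7(a) loan. Illustration only -- it ignores collateral
# recoveries, fees, and other program rules described in the text.

def lender_exposure_on_default(outstanding_balance, guarantee_pct):
    """Unguaranteed share of the outstanding balance absorbed by the lender."""
    if not 0.0 <= guarantee_pct <= 1.0:
        raise ValueError("guarantee_pct must be a fraction between 0 and 1")
    sba_payment = guarantee_pct * outstanding_balance
    return outstanding_balance - sba_payment

if __name__ == "__main__":
    balance = 100_000.0
    # Regular 7(a) guarantees generally run 75 to 85 percent, depending on loan
    # amount; SBAExpress loans carry a maximum 50 percent guarantee.
    for label, pct in [("Regular 7(a), 85%", 0.85),
                       ("Regular 7(a), 75%", 0.75),
                       ("SBAExpress, 50%", 0.50),
                       ("Conventional (no guarantee)", 0.00)]:
        exposure = lender_exposure_on_default(balance, pct)
        print(f"{label}: lender exposure ${exposure:,.0f}")
```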
To assess whether borrowers can obtain credit elsewhere, lenders must determine that the desired credit, for a similar purpose and period of time, is unavailable to the firm on reasonable terms and conditions from nonfederal sources without SBA assistance, taking into consideration prevailing rates and terms in the community or locale where the firm conducts business. Nonfederal sources may include any lending institutions. In addition, lenders must determine that the firm’s owners are unable to provide the desired funds from their personal resources. When applying this personal resources test, the lender must assess the liquid assets of each owner of 20 percent or more of the equity of the applicant company to determine the overall dollar value of the allowable exemption, which is defined as the amount of personal resources that do not have to be injected into the business. The allowable exemption is determined on the basis of the “total financing package.” The total financing package includes any SBA loans, together with any other loans, equity injection, or business funds used or arranged for at the same general time for the same project as the SBA loan. If the total financing package is $250,000 or less, the exemption is two times the total financing package or $100,000, whichever is greater; is between $250,001 and $500,000, the exemption is one and one-half times the total financing package or $500,000, whichever is greater; or exceeds $500,000, the exemption equals the total financing package or $750,000, whichever is greater. Once the exemption is determined, it is subtracted from the liquid assets. If the result is positive, that amount must be injected into the project. When the 7(a) program was first implemented, borrowers were generally required to show proof of credit denials (rejection documentation) from no fewer than two banks that documented, among other things, the reasons for not granting the desired credit. Similar requirements remained in effect until 1985, when SBA amended the rule to permit a lender’s certification made in its application for an SBA guarantee to be sufficient documentation. This certification requirement remained when the rule was rewritten in 1996. SBA stated that requiring proof of loan denials was demoralizing to small businesses and unenforceable by SBA. Within the 7(a) program, there are several delivery methods—including regular 7(a), the preferred lender program (PLP), and SBAExpress. Under the regular (nondelegated) 7(a) program, SBA makes the loan approval decision, including the credit determination. Under PLP and SBAExpress, SBA delegates to the lender the authority to make loan approval decisions, including credit determinations, without prior review by SBA. The maximum loan amount under the SBAExpress program is $350,000 (as opposed to $2 million for other 7(a) loans). The program allows lenders to utilize, to the maximum extent possible, their respective loan analyses, procedures, and documentation. In return for the expanded authority and autonomy provided by the program, SBAExpress lenders agree to accept a maximum SBA guarantee of 50 percent. (Other 7(a) loans have a maximum guarantee of 75 or 85 percent, depending on the loan amount.) According to SBA, as of December 31, 2007, there were 672 PLP and 1,889 SBAExpress lenders. Of these, 603 lenders were approved for both programs. 
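The tiered allowable exemption used in the personal resources test described earlier in this section can be expressed directly in code. The sketch below is a simplified illustration: function names are hypothetical, the exemption is applied to each 20-percent-or-more owner individually, and other program rules that can affect the calculation are omitted.

```python
# Minimal sketch (hypothetical names) of the tiered "allowable exemption" used in
# the personal resources test described above. Simplified for illustration only.

def allowable_exemption(total_financing_package):
    """Tiered exemption based on the total financing package, as described in
    the text (SBA loan plus other loans, equity injection, or business funds)."""
    p = total_financing_package
    if p <= 250_000:
        return max(2.0 * p, 100_000)
    if p <= 500_000:
        return max(1.5 * p, 500_000)
    return max(p, 750_000)

def required_injection(owner_liquid_assets, total_financing_package):
    """Amount an owner must inject: liquid assets in excess of the exemption."""
    excess = owner_liquid_assets - allowable_exemption(total_financing_package)
    return max(0.0, excess)

if __name__ == "__main__":
    package = 300_000.0  # total financing arranged for the project
    for assets in (150_000.0, 650_000.0):
        print(f"Owner with ${assets:,.0f} in liquid assets must inject "
              f"${required_injection(assets, package):,.0f} "
              f"(exemption ${allowable_exemption(package):,.0f})")
```

In this example the $300,000 package yields a $500,000 exemption, so an owner with $150,000 in liquid assets injects nothing, while an owner with $650,000 would be expected to inject the $150,000 above the exemption.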
In the federal budget, the 7(a) program is currently a "zero subsidy" program, meaning that the program does not require annual appropriations of budget authority for new loan guarantees. To offset some of the costs of the program, such as default costs, SBA assesses lenders two fees on each 7(a) loan. The guarantee fee must be paid by the lender at the time of loan application or within 90 days of the loan being approved, depending upon the loan term. This fee is based on the amount of the loan and the level of the guarantee, and lenders can pass the fee on to the borrower. The ongoing servicing fee must be paid annually by the lender and is based on the outstanding balance of the guaranteed portion of the loan. SBA's Office of Credit Risk Management (OCRM) is responsible for overseeing 7(a) lenders, including those with delegated authority. SBA created this office in fiscal year 1999 to ensure consistent and appropriate supervision of SBA's lending partners. The office is responsible for managing all activities regarding lender reviews, preparing written reports, evaluating new programs, and recommending changes to existing programs to assess risk potential. The Small Business Act and 7(a) program regulations give lenders discretion to determine which borrowers cannot obtain credit elsewhere and thus require an SBA guarantee. SBA's primary guidance for the 7(a) program outlines six reasons lenders can use to substantiate that a borrower cannot obtain credit elsewhere. Together, the statute, regulations, and guidance allow lenders to use their own conventional lending policies to determine which borrowers need an SBA guarantee. Our file reviews showed that lenders most often cited the borrower's need for a longer maturity, lack of collateral, and the age or type of business as reasons for requiring an SBA guarantee. The Small Business Act defines credit elsewhere as "the availability of credit from non-Federal sources on reasonable terms and conditions taking into consideration the prevailing rates and terms in the community in or near where the concern transacts business, or the homeowner resides, for similar purposes and periods of time." Consistent with the statute, the governing regulations note that SBA will guarantee loans only for applicants for whom the desired credit is not otherwise available on reasonable terms from a nonfederal source. According to SBA, the credit elsewhere requirement was specifically designed to be broad in order not to limit lenders' discretion and to allow for differences in geographic regions, economic conditions, and types of businesses. SBA's primary operational guidance for the 7(a) program—SOP 50-10—builds upon the statute and regulations by outlining six reasons lenders can use to substantiate that a borrower cannot obtain credit elsewhere on reasonable terms. These reasons can be divided into two groups: those that are specific to the borrower's creditworthiness or business and those that are specific to the lender's financial position and unrelated to a borrower's creditworthiness or the availability of loans from other sources. Reasons related to the borrower are that the business needs a longer maturity than the lender's policy permits; that the collateral does not meet the requirements of the lender's policies; that the lender's policies normally do not allow loans to new businesses or businesses in the applicant's industry; or that other factors relating to the credit, in the lender's opinion, cannot be overcome without the guarantee.
The lender may also use one of the following lender-related reasons: The requested loan amount exceeds the lender’s legal lending limit or policy limit on the amount it can lend to one customer, or the lender’s liquidity depends upon selling the guaranteed portion of the loan on the secondary market. On the basis of interviews with a sample of lenders and reviews of a sample of 7(a) loan files, we found that lenders evaluate a borrower’s ability to obtain credit elsewhere on reasonable terms against their own conventional lending policies. This finding was generally consistent with those of a recent Urban Institute report. Lenders we visited most often cited the borrower’s need for a longer maturity, lack of collateral, and the age or type of business as reasons for requiring an SBA guarantee. In practice, lenders evaluate a borrower’s ability to obtain credit elsewhere against their own conventional lending policies. That is, if a borrower does not meet the requirements of the lender’s conventional loan policy, the lender will require an SBA guarantee (or in some cases, deny the loan request). The criteria or thresholds established in the lender’s underwriting policies are representative of the level of risk the lender is willing to assume on a loan. Many factors influence lenders’ risk tolerance levels, including the size of the institution, its location, and its financial position. As a result, lenders may focus on different types of lending or see certain types of lending as being more central to their operations than others. Our findings from interviews with a small, nongeneralizable sample of 18 lenders suggest that differences in lending practices could affect how the credit elsewhere requirement was applied. Some of the lenders said that they relied on automated underwriting systems that primarily considered quantitative factors such as credit scores and financial ratios to determine whether a borrower qualified for a conventional loan or required a guarantee. But some other lenders said that they also considered qualitative factors such as the borrower’s relationship with the bank—for example, the amount on deposit or a prior lending history—when determining whether to extend conventional or guaranteed credit. An Urban Institute report on lenders’ implementation of the credit elsewhere requirement reached similar conclusions. On the basis of interviews with 23 banks that originated both SBA and conventional loans, the Urban Institute concluded that lenders that employed small business credit-scoring models often had relatively straightforward rules regarding the types of borrowers that were eligible for conventional and guaranteed loans. However, it also noted that lenders (in particular smaller lenders) that continued to use relationship underwriting were less likely to have objective thresholds borrowers had to meet in order to qualify for conventional financing. Using information collected from 238 recently approved 7(a) loan files from 18 lenders, we found that the most common reasons lenders cited to substantiate that borrowers could not obtain credit elsewhere were (1) that the business needed a longer maturity than the lender’s policy permitted, (2) that the borrower’s collateral did not meet the lender’s policies, and (3) that the lender’s policies did not normally allow loans to new businesses or businesses in the applicant’s industry (see table 1). 
The results of our file reviews generally were consistent with the findings of the Urban Institute's report on lenders' implementation of the credit elsewhere requirement. The Urban Institute concluded that the most common reasons lenders cited to substantiate that borrowers could not obtain credit elsewhere were that the business needed a longer maturity than the lender's policy permitted and that the borrower's collateral did not meet the lender's underwriting requirements. As table 1 shows, the most common reason that lenders we visited cited to substantiate that a borrower could not obtain credit elsewhere was that the borrower needed a longer term (maturity) than the lender could provide with a conventional loan. In general, SBA-guaranteed loans provide more generous terms to borrowers than conventional loans. In 2007, we found that almost 80 percent of 7(a) loans had maturities of more than 5 years, compared with 5 years or less for an estimated 83 percent of conventional loans (see fig. 1). In general, longer terms mean lower payments, which allow borrowers to service debt with a lower net operating income (see table 2). Many lenders with whom we spoke said that they generally required a business to have an actual or projected debt service coverage ratio (DSCR) of at least 1.10 to 1.25 to obtain a conventional or guaranteed loan. DSCR is the ratio of net operating income (or cash flow) to debt payments, with a lower ratio indicating less ability to meet debt service payments. Our analysis of DSCRs showed that both the average and the median ratios for all borrowers in our sample were higher than the 1.10 to 1.25 lender requirement (see fig. 2). We also found that lenders sometimes deviated quite substantially from their required minimums. In some instances, lenders provided loans to businesses with low or negative ratios, suggesting that those borrowers compensated for a lack of cash flow in the short term with collateral, for example. In other instances, lenders provided loans to businesses with ratios well above the minimum requirement, suggesting that factors other than cash flow were behind the reasons for requiring an SBA guarantee. Of the loans that we reviewed, 46 percent were cited as having insufficient collateral (because of its low value or uniqueness) for a conventional loan. SOP 50-10 stipulates that a 7(a) loan request cannot be denied on the basis of inadequate collateral, noting that one of the primary reasons lenders used the 7(a) program was to provide credit to small businesses that could repay a loan but lacked the collateral needed to cover it in case of default. One lender that we interviewed required all conventional loans to be fully secured. If a borrower was unable to provide 100 percent collateral against the value of the loan, the lender would require an SBA guarantee. Other lenders had more lenient policies relating to collateral, allowing borrowers to obtain conventional financing with more limited collateral. Finally, as shown in table 3, our review of lender files showed that all but 2 of the 18 lenders made at least one 7(a) loan to a new business, but that some made significantly more of these loans than others. Many lenders we interviewed said that their conventional lending policies prohibited them from making conventional loans to new businesses. A lack of guidance to lenders on how to document compliance with the credit elsewhere requirement impedes SBA's oversight of compliance with the requirement.
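Before turning to those documentation issues, the maturity and cash-flow relationship described above—longer terms mean lower payments and therefore a higher debt service coverage ratio for the same net operating income—can be illustrated with a short sketch. The standard level-payment amortization formula, the interest rate, and the dollar figures are assumptions for illustration only; the text does not specify how individual lenders compute payments.

```python
# Minimal sketch (hypothetical names and rates): why a longer maturity can move a
# borrower above a lender's minimum debt service coverage ratio (DSCR). The
# level-payment amortization formula is an assumption used for illustration.

def annual_debt_service(principal, annual_rate, years):
    """Total payments per year on a fully amortizing, level-payment loan."""
    r = annual_rate / 12.0
    n = years * 12
    if r == 0:
        return principal / years
    monthly = principal * r / (1.0 - (1.0 + r) ** -n)
    return 12.0 * monthly

def dscr(net_operating_income, debt_service):
    """Debt service coverage ratio: cash flow divided by debt payments."""
    return net_operating_income / debt_service

if __name__ == "__main__":
    principal, rate, noi = 500_000.0, 0.08, 55_000.0
    for years in (5, 10, 25):
        service = annual_debt_service(principal, rate, years)
        print(f"{years:>2}-year term: debt service ${service:,.0f}/yr, "
              f"DSCR {dscr(noi, service):.2f}")
```

With these illustrative figures, only the 25-year term brings the ratio above 1.10, which is consistent with lenders citing the need for a longer maturity as a reason a borrower could not obtain conventional credit.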
SBA requires lenders to explain in a borrower's loan file why the borrower could not obtain credit elsewhere on reasonable terms, but its guidance does not provide specific information on what lenders should include in their explanations. Our review of on-site review reports completed during a recent six-quarter period found that SBA determined that 31 of 97 lenders reviewed had not consistently documented that borrowers met either the credit elsewhere requirement or personal resources test. Although all but 1 of the 18 lenders with delegated authority that we interviewed documented their credit elsewhere decisions in some way, the explanations in the files we reviewed were generally not specific enough to reasonably support the lender's conclusion that borrowers could not actually obtain credit elsewhere. Internal control standards for federal agencies and programs state that good guidance (information and communication) is a key component of a strong internal control framework. Internal controls are an integral component of an organization's management that provides reasonable assurance that the organization is meeting its objective of ensuring compliance with applicable laws and regulations. For an entity to run and control its operations, it must have relevant, reliable, and timely communications relating to external as well as internal events. Therefore, management should ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders. Although SBA's guidance requires lenders to document the reasons that borrowers cannot obtain credit elsewhere, it does not specify what exactly lenders should include in their explanations. The only guidance in SOP 50-10 on documenting compliance with the requirement is a sentence stating that the lender is required to substantiate the factors that prevent the borrower from obtaining credit elsewhere and retain the explanation in the small business applicant's file. SBA recently revised SOP 50-10 but did not change the guidance on documenting compliance with the credit elsewhere requirement. According to SBA, SOP 50-10 provides thorough guidance on what is required of lenders for making the credit elsewhere determination and documenting it in the file. To supplement the revised guidance, SBA has issued on its Web site some frequently asked questions about the new SOP. To date, no questions on the credit elsewhere requirement have been posted. SBA's Office of Credit Risk Management is responsible for monitoring and evaluating SBA lenders and implementing corrective actions as necessary. Its primary means of ensuring compliance with the credit elsewhere requirement is on-site reviews of large 7(a) lenders—those lenders with outstanding balances on the SBA-guaranteed portions of their loan portfolio amounting to $10 million or more. SBA also conducts on-site reviews of large certified development companies (CDC) that make 504 loans. According to SBA officials, the 7(a) and 504 lenders that SBA plans to review on site over a 2-year period will account for about 85 percent of all guaranteed dollars. SBA relies on a contractor to perform these on-site lender reviews.
According to SOP 51 00—SBA’s guidance on on-site lender reviews—the purpose of the on-site review is threefold: (1) to enhance SBA’s ability to gauge the overall quality of the lender’s 7(a) or 504 portfolio; (2) to identify weaknesses in an SBA lender’s SBA operations before serious problems develop that expose SBA to losses that exceed those inherent in a reasonable and prudent SBA loan portfolio; and (3) to ensure that prompt and effective corrective actions are taken, as appropriate. In prioritizing lenders for review, SBA primarily considers the following factors: portfolio size, risk rating, date of last review, and findings from previous reviews. In addition to assessing performance, SBA also prepares a written report and follows up with the SBA lender to address weaknesses or deficiencies identified during the review. As part of these oversight reviews, the SOP requires SBA to determine whether lender policies and practices adhere to SBA’s credit elsewhere requirement. This includes checking to see whether lenders have applied a personal resources test to confirm that the desired funds were not available from any principal of the business. With respect to the credit elsewhere review, SBA’s contractor explained that it checks to see that the lender documented its credit elsewhere determination and cited one of the six acceptable factors listed in SOP 50-10. However, it does not routinely assess the lender’s support for its credit elsewhere determination. Contract staff performing an on-site review use a checklist that requires the examiner to answer yes or no that “written evidence that credit is not otherwise available on terms not considered unreasonable without guarantee provided by SBA” was in the file and that the “personal resources test was applied and enforced according to SBA policy.” Contractor officials stated that when the documentation standard is not met, the examiner will sometimes look at the factual support in the file to independently determine whether the credit elsewhere requirement or personal resources test was actually met. Our review of a sample of SBA’s on-site review reports showed that SBA determined that nearly a third of lenders had not properly documented that borrowers met either the credit elsewhere requirement or the personal resources test. We analyzed reports from all the on-site lender reviews SBA conducted during a six-quarter period from October 2006 to March 2008 and found that 31 of the 97 lenders reviewed did not consistently document that borrowers met the credit elsewhere requirement or personal resources test. We had to perform this analysis because, until very recently, SBA did not have a system in which it recorded the results of on-site reviews. One on-site review report we analyzed stated that “the credit elsewhere assessment was missing in 24 percent of the cases reviewed.” Another report stated that “the lender failed to properly document the personal resources test in 20 of the 22 cases reviewed.” As shown in figure 3, the percentage of files at each lender that were cited for not including credit elsewhere documentation ranged from a low of 3 percent to 89 percent. Similarly, the percentage of lender files that did not include documentation of the personal resources test ranged from a low of 3 percent to 100 percent. We also found that in each of the 31 cases where there was a finding, SBA required the lender to take corrective action. 
When conveying the results of an on-site review to a lender, SBA instructs the lender to take corrective actions in response to any findings. The lender then is required to respond to SBA with information on the specific actions it plans to take. In subsequent correspondence with the lender, SBA indicates whether the action taken in response to the finding was satisfactory. As shown in figure 4, corrective actions taken by the 31 lenders with credit elsewhere or personal resources test findings included changes in their procedures to address identified deficiencies, such as updating forms to reflect the credit elsewhere requirement and personal resources test. In all 31 cases, SBA determined that the actions taken by lenders were satisfactory. In contrast to SBA's on-site review results, our file reviews revealed that all but one of the lenders included some credit elsewhere documentation in their loan files. We found that besides certifying on the application that credit was not available on reasonable terms from other sources, lenders generally summarized their credit elsewhere determinations in the lender's assessment of the borrower, commonly referred to as a "credit memo." SBA's on-site reviews appear to have had some impact on lender documentation. The majority of lenders we visited told us that SBA's contractor had conducted an on-site review prior to our visit. In addition, representatives of two lenders told us that their on-site reviews had resulted in corrective actions related to the credit elsewhere requirement. An official from one lender stated that it had developed formalized policies and procedures for its 7(a) lending program and revised its forms and the format of the bank's credit memo to help ensure that its credit elsewhere determinations were documented. Officials representing the other lender stated that they require a checklist for their SBAExpress loans to remind staff to document credit elsewhere decisions. Although all but one of the lenders we visited documented their credit elsewhere decisions in some way, our review of documentation provided to support credit elsewhere decisions in 238 loan files showed that most lenders did not provide detailed information on why borrowers could not obtain credit elsewhere. For instance, a number of lenders we met with used a checklist to document their credit elsewhere decisions. These checklists allow lenders to select one or more of the six acceptable factors outlined in SBA's SOP 50-10 that "demonstrate an identifiable weakness in the credit of a borrower or exceed the policy limits of the lender." However, they do not prompt lenders to provide more specific information, such as how a longer maturity would improve a business's ability to repay a loan or the details on insufficient collateral. Some lenders also documented their credit elsewhere decisions in credit memos, but the information provided was generally not very specific. Some examples of credit elsewhere statements in lender files included the following: " examined the availability of credit and determined that the desired credit is unavailable to the Applicant on reasonable terms and conditions without SBA assistance taking into consideration prevailing rates and terms in the community in and near where the applicant will be conducting business.
Specifically, the Applicant would not be able to obtain the proposed financing for this specific purpose without federal assistance.” “The terms and conditions offered are not available in the marketplace without the assistance of the SBA guarantee.” “Repayment capability requires maturity that exceeds policy; value of available collateral is unacceptable; credit unavailable through conventional loan without a lower loan-to-value ratio.” As evidenced by the above examples, our review of loan files showed that most lenders generally did not provide specific information about the borrower in the statements they included in their explanations, nor did they elaborate upon the items they indicated on the checklist. The lack of details pertaining to the individual borrower or the lender’s financial condition raised questions about the usefulness of the credit elsewhere documentation provided by lenders. Given the broad authority granted to lenders, more information specific to the borrower’s or the lender’s financial condition would help support the lender’s assessment that the borrower could not obtain credit elsewhere. For example, a statement containing both the borrower’s available collateral and the amount of collateral the lender requires for a conventional loan would support the conclusion that the “value of available collateral is unacceptable.” However, one lender we visited provided more detailed information about borrowers’ credit/financial positions and the reasons that 7(a) loans were more suitable than conventional loans, which provided greater assurance that the borrower could not obtain credit elsewhere. In addition to substantiating that the borrower could not obtain credit elsewhere, the lender provided notes documenting why the borrower was denied a conventional loan. For example: Credit elsewhere documentation: “Business is new and does not have sufficient operating history.” Conventional loan denial: “Length of time in business and/or current management; inadequate cash flow; delinquent past or present credit obligations with others; revolving balances to revolving credit limits is high.” Credit elsewhere documentation: “Business and personal scores are below the conventional requirement.” Conventional loan denial: “Foreclosure, repossession, collection judgment, terms and conditions requested are not offered on this product. Inadequate cash flow.” In addition, two other lenders provided more detailed information in some instances. For example: “Credit is not available elsewhere due to the fact that the applicant business is not fully secured by the liquidation value of the collateral being pledged. In addition, there is only 1 year old repayment ability from past operations and the applicant requires a maturity greater than the 12 years that will go out on commercial loans.” “ would not be willing to provide conventional financing on this project at rates and terms acceptable to provide sufficient cash flow for repayment. The fact that the applicants will be injecting 10 percent into the project and have requested a 20 year term does not qualify for conventional financing under our present loan policy. We would not be able to provide funds without the use of the SBA guaranteed loan program.” The results of our file review show that most lenders tend to use generic language to meet credit elsewhere compliance requirements, making it difficult to determine with certainty whether borrowers could not obtain credit elsewhere. 
When conducting oversight, SBA needs to ensure that lenders are making loans only to borrowers that meet the eligibility requirements of the program. In the absence of detailed guidance on what exactly SBA wants lenders to document or a more prescriptive credit elsewhere requirement, lenders will likely continue to offer limited credit elsewhere statements in their files, making meaningful oversight of compliance with the requirement difficult. For more information on how SBA could create a more prescriptive requirement and the implications of doing so, see appendix III. The 7(a) program is intended to serve creditworthy small business borrowers who cannot obtain credit through a conventional lender at reasonable terms and do not have the personal resources to provide it themselves. In most cases, SBA relies on the lender to determine if a borrower is eligible for a 7(a) loan, including determining whether the borrower could obtain credit elsewhere. Relying on lenders with delegated authority underscores the importance of SBA guidance and oversight. However, SBA’s lack of guidance to lenders on how to document compliance with the credit elsewhere requirement impedes the agency’s ability to oversee compliance with the credit elsewhere requirement. SBA’s guidance to lenders on documenting compliance with the credit elsewhere requirement is limited. SOP 50-10 requires lenders to retain explanations of their credit elsewhere determinations in borrowers’ loan files but does not specify the amount of detail lenders should include in their explanations. Even with the lack of detail required, the results of SBA’s on-site reviews of 7(a) lenders—the agency’s primary means of ensuring compliance with the credit elsewhere requirement—indicated that documenting credit elsewhere determinations was a problem for some lenders. Our review of a recent six-quarter period found that SBA had determined that nearly a third of the lenders reviewed had not consistently documented that borrowers met the credit elsewhere requirement or personal resources test. Further, the results of file reviews we conducted at 18 lenders raise questions about the usefulness of credit elsewhere documentation provided by lenders in assessing compliance with the credit elsewhere requirement. We found that lenders tended to provide general statements about a borrower’s ability to obtain credit elsewhere, generally citing just one of the six acceptable factors listed in SOP 50-10 without customizing the statement to fit the borrower or lender in question. This practice made it difficult to reasonably conclude that borrowers met the credit elsewhere requirement. The statute, regulations, and guidance allow lenders to use their own conventional lending policies to make case-by-case decisions about which borrowers need an SBA guarantee. Given the broad authority granted to lenders, requiring documentation of the analysis supporting their credit elsewhere decisions could help SBA ensure that the eligibility requirement is being met. By collecting and analyzing this additional information on how lenders are applying the credit elsewhere standard, SBA could better ensure that lenders are complying with the standard. Further, identifying promising practices used by lenders to document their credit elsewhere determinations could help SBA develop more specific guidance for lenders. 
Absent detailed guidance on what exactly SBA wants lenders to document in their credit elsewhere determinations, lenders will likely continue to offer limited information in their files, making meaningful oversight of compliance with the credit elsewhere requirement difficult. To improve SBA's oversight of lenders' compliance with the credit elsewhere requirement, we recommend that the SBA Administrator issue more detailed guidance to lenders on how to document their compliance with the credit elsewhere requirement. As part of developing this guidance, SBA could consider (1) requiring lenders to include the analysis that supports their credit elsewhere determinations and (2) identifying promising practices currently being used by lenders. After revising its guidance, SBA also could consider collecting and analyzing any additional information lenders are required to provide on how they apply the credit elsewhere standard. We requested SBA's comments on a draft of this report, and the Associate Administrator of the Office of Capital Access provided written comments that are presented in appendix IV. SBA stated that it works to establish clear guidelines and standards that ensure documented lender compliance without creating overly burdensome paperwork. It also stated that it would work with its incoming administrative leadership to use our findings to create more specific guidance for lenders on documenting compliance with credit elsewhere standards in a way that achieves this balance. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chair and Ranking Member, Senate Committee on Small Business and Entrepreneurship; Chairwoman and Ranking Member, House Committee on Small Business; other interested congressional committees; and the Acting Administrator of the Small Business Administration. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Historically, the Small Business Administration's (SBA) 7(a) program has been thought of as a countercyclical program or stimulus, meaning that when conventional credit is tightened, lenders make broader use of the guarantee program to serve the needs of small businesses. However, during this most recent economic downturn, the number and value of approved loans have decreased significantly (see fig. 5). SBA and others have speculated as to the reasons for this decline. In October 2008, SBA issued a press release citing "a 'perfect storm' of tightened credit by commercial lenders, declining creditworthiness, and reduced demand for loans from small business borrowers uncertain about the future" as the primary reasons for the substantial decrease in 7(a) lending. Some lenders have said that the decline in premiums for selling guaranteed portions of loans on the secondary market has reduced incentives for those lenders who depend on premium income to make 7(a) loans. By some estimates, premium rates have declined as much as 60 percent, resulting in significant reductions in income for SBA lenders. 
Others cited SBA's new oversight fees as a disincentive to begin or continue 7(a) lending. According to SBA, lenders with 7(a) portfolios of at least $10 million likely will pay oversight fees of between $22,000 and $27,000. Finally, some lenders have said that the increased frequency with which SBA has denied or reduced guarantees on defaulted loans for failure to follow the rules for documenting loans has created uncertainty among lenders over whether SBA will honor guarantees, creating disincentives for lenders to participate in the program. Although the degree to which these reasons may account for the steep decline in 7(a) lending is unclear, SBA and Members of Congress have implemented or proposed measures to stimulate lender and borrower participation in the program. For example: SBA issued an interim rule allowing new SBA loans to be made with an alternative base interest rate, the 1-month London Interbank Offered Rate (LIBOR), in addition to the prime rate, which was previously allowed. According to SBA, the prime and LIBOR rates have fluctuated away from their historical relationship, typically 300 basis points, squeezing SBA lenders out of the lending market because their costs are based on the LIBOR rate. SBA has allowed a new structure for assembling SBA loans into pools for sale in the secondary market. According to SBA, the enhanced flexibility in loan pool structures can help improve profitability and liquidity in the secondary market for SBA-guaranteed loans. Because the average interest rate is used, these pools are easier for pool assemblers to create, thus providing incentives for more investors to bid on the loans. On November 24, 2008, the U.S. Department of the Treasury announced that it would allocate $20 billion to back the creation of a $200 billion Term Asset-Backed Securities Loan Facility (TALF) at the Federal Reserve Bank of New York. TALF will make loans to investors who purchase asset-backed securities made up of small business loans guaranteed by SBA, auto loans, student loans, or credit card loans. According to SBA, this will make it easier for lenders to sell the loans they make and use the proceeds of those sales to make new loans. Introduced on February 7, 2008, the Small Business Lending Stimulus Act of 2008 (S. 2612) proposed to reduce 7(a) loan fees and authorize appropriations to cover such fee reductions. Specifically, the bill proposed to reduce fees on loans of less than $150,000 from 2 percent to 1 percent, on loans between $150,000 and $700,000 from 3 percent to 2.5 percent, and on loans of over $700,000 from 3.5 percent to 3 percent. Introduced on November 19, 2008, the 10 Steps for a Main Street Economic Recovery Act (S. 3705) proposed several measures to protect and expand small business lending, including increasing the amount of financing available to businesses under the 7(a) program from $2 million to $3 million, temporarily reducing fees to defray the cost of borrowing for small businesses, and providing tax breaks to small businesses. In this report, we (1) describe SBA's criteria for determining that borrowers cannot obtain credit elsewhere and practices lenders employ to determine that borrowers cannot obtain credit elsewhere and (2) examine SBA's efforts to ensure that lenders are complying with the credit elsewhere provision. To determine SBA's criteria for determining that borrowers cannot obtain credit elsewhere, we reviewed applicable statutes, regulations, and program guidance. 
For background on the 7(a) program and the credit elsewhere provision, we reviewed the legislative history of the 7(a) program, our previous reports, and studies of the program conducted by the SBA Inspector General and external organizations. We also interviewed officials from SBA’s Office of Financial Assistance on guidance provided to 7(a) lenders. To determine the practices that lenders employ to meet the credit elsewhere requirement, we visited 7(a) lenders located in and around the following cities: Atlanta, Georgia; Chicago, Illinois; Houston, Texas; Los Angeles, California; New York City, New York; San Francisco, California; and Washington, D.C. During these site visits, we interviewed officials at 18 lenders and reviewed 238 of their approved 7(a) loan applications. We selected these lenders to obtain a variety in the types of 7(a) loans they made (preferred lender program and SBAExpress loans) and the size of their SBA loan portfolios. We also considered geographic diversity. In addition, we interviewed representatives of the National Association of Government Guaranteed Lenders, the trade association for 7(a) lenders. Using data obtained from SBA, we ranked 7(a) lenders in each of the selected cities from largest to smallest based on the number of active or outstanding 7(a) loans for each lender. With the exception of Chicago and Washington, D.C., we contacted lenders in each city in descending order until we achieved our quota (at least three in each city). (See table 4.) For purposes of this report, we considered small lenders to be those with 50 or fewer active or outstanding 7(a) loans. We made three attempts to contact a lender before moving on to the next lender on the list. The applications we reviewed generally covered loans that were approved within calendar years 2007 and 2008. The number of files we reviewed at each lender was based on the size of the lender’s portfolio and GAO staff resources. On the basis of a pilot test at one lender, we determined that we could review approximately 20 files during one site visit. As a result, we reviewed between 5 and 25 of each lender’s most recently approved loans, depending on the size of the lender. To strengthen the accuracy of the collection of information from lender files, we validated our data entry for 20 percent of all files reviewed at each lender. The sample of lenders we visited was not designed to be generalizable to the population of 7(a) lenders, nor was the sample of loans at each lender designed to be generalizable to the population of 7(a) borrowers at each lender. To assess SBA’s efforts to ensure lender compliance with the credit elsewhere requirement, we reviewed excerpts from all 97 on-site lender review reports completed from October 1, 2006, through March 31, 2008. These excerpts documented SBA’s assessment of the lenders’ compliance with the credit elsewhere requirement and personal resources test. We also reviewed correspondence between SBA and the 31 lenders that had credit elsewhere or personal resources test findings. This correspondence detailed the corrective actions that the lenders had implemented or agreed to implement to address the identified deficiencies. We interviewed staff from SBA’s Office of Credit Risk Management on their oversight of 7(a) lenders and the SBA contractor that performs the on-site reviews on steps taken during on-site reviews to assess compliance with the credit elsewhere requirement and personal resources test. 
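For readers who want the selection procedure described above in more schematic form, here is a minimal Python sketch. It is illustrative only: the lender names, loan counts, and "reachable" outcomes are made up, and this is not the tool we actually used.

```python
# Illustrative sketch of the lender selection approach described above: rank
# lenders in a city by number of active 7(a) loans (descending) and contact
# them in order until the quota is met, allowing up to three contact attempts
# per lender. All data and attempt outcomes below are hypothetical.

def select_lenders(lenders_by_city, quota=3, max_attempts=3, reachable=None):
    """lenders_by_city: {city: [(lender_name, active_loan_count), ...]}
    reachable: optional set of lender names that respond to contact attempts."""
    selected = {}
    for city, lenders in lenders_by_city.items():
        ranked = sorted(lenders, key=lambda x: x[1], reverse=True)  # largest portfolios first
        chosen = []
        for name, _count in ranked:
            if len(chosen) >= quota:
                break
            for _attempt in range(max_attempts):
                if reachable is None or name in reachable:
                    chosen.append(name)
                    break  # stop attempting once the lender agrees to participate
        selected[city] = chosen
    return selected

city_lenders = {"Atlanta": [("Lender1", 120), ("Lender2", 45), ("Lender3", 300), ("Lender4", 10)]}
print(select_lenders(city_lenders, reachable={"Lender1", "Lender3", "Lender4"}))
# -> {'Atlanta': ['Lender3', 'Lender1', 'Lender4']}
```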
We conducted our work in Atlanta, Georgia; Chicago, Illinois; Houston, Texas; Los Angeles, California; New York City, New York; San Francisco, California; and Washington, D.C., between February 2008 and February 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. SBA could revise the implementing regulations and guidance for the 7(a) program to create a more prescriptive credit elsewhere requirement. For example, SBA could establish some general guidelines describing quantitative thresholds or ranges for cash flow and credit score that lenders could consider when making 7(a) loans. The agency could require lenders who made loans to borrowers who fell outside of the recommended ranges to document the reasons for their decisions. Further, additional guidelines could help lenders determine when it was acceptable to cite reasons associated with the lending institution itself to substantiate that a borrower could not obtain credit elsewhere. In such a scenario, some of these reasons might apply only to certain institutions. For example, only nondepository institutions, such as small business lending companies (SBLC), might be allowed to cite as a reason for extending 7(a) credit the need to sell the guaranteed portion of the loan on the secondary market. A more prescriptive credit elsewhere requirement would address some of the challenges with the requirement as it is currently written. For example, as previously discussed, a broad requirement allows lenders to make case-by-case decisions about the types of borrowers that cannot obtain credit elsewhere. Because different lenders have different lending policies, the types of borrowers they serve under the program also differ. The absence of well-defined eligibility criteria makes it difficult for SBA to determine whether borrowers receiving 7(a) loans actually could not obtain conventional credit. In addition, these variations in lending policies and the types of borrowers served make it challenging for SBA to collect relevant information for evaluations of how well the 7(a) program is serving its intended mission. In a July 2007 report on the 7(a) program, we highlighted the need for SBA to improve upon its current efforts to collect and evaluate performance data for the 7(a) program. According to SBA, it is in the process of developing additional performance measures. A more prescriptive standard could be beneficial in light of proposals that, if enacted, would reduce fees for borrowers and temporarily change the program from a zero subsidy program to a positive subsidy program. Such proposals have been made for reasons such as expanding credit for small businesses during the current economic downturn. According to SBA and lenders interviewed as part of an Urban Institute report, the higher fees currently associated with the 7(a) program act as a credit-rationing mechanism. They explained that the higher fees deter borrowers who could obtain credit elsewhere from participating in the program, as the fees associated with conventional loans are significantly lower. However, if the fees associated with the program were lowered or eliminated, the impact of this rationing mechanism would be reduced. 
With the reduction of fees, a more prescriptive credit elsewhere requirement that clearly outlines which businesses are eligible for the 7(a) program could help to minimize incentives for lenders to make loans to borrowers that have access to credit elsewhere. However, the increased burden on lenders resulting from a more prescriptive credit elsewhere requirement could limit their participation in the program and, as a result, decrease small businesses’ access to credit. In addition, a more prescriptive standard could arbitrarily exclude some borrowers from obtaining guaranteed credit but compel lenders to make loans to borrowers that they and the market did not deem creditworthy. Finally, restricting lenders’ ability to cite their own reasons to substantiate that a borrower could not obtain conventional credit might limit access to credit in some communities, especially those with few lending institutions to supply credit or those that rely on SBLCs—nondepository institutions with delegated authority to make 7(a) loans. In addition to the individual named above, Paige Smith (Assistant Director), Benjamin Bolitzer, Tania Calhoun, Marcia Carlsen, Emily Chalmers, Janet Fong, Marc Molino, Carl Ramirez, Cory Roman, and Jennifer Schwartz made contributions to this report. | The Small Business Administration's (SBA) 7(a) program is intended to provide loan guarantees to small business borrowers who cannot obtain conventional credit at reasonable terms and do not have the personal resources to provide financing themselves. In fiscal year 2008, SBA guaranteed over 69,000 loans valued at about $13 billion. To assist in oversight of the 7(a) program, GAO was asked to (1) describe SBA's criteria and lenders' practices for determining that borrowers cannot obtain credit elsewhere and (2) examine SBA's efforts to ensure that lenders are complying with the credit elsewhere provision. To meet these objectives, GAO reviewed applicable statutes and guidance, visited 18 lenders and reviewed 238 of their loan files, reviewed 97 on-site lender review reports, and interviewed SBA officials. GAO's samples of lenders and loan files were not generalizable. The Small Business Act and 7(a) program regulations and guidance allow lenders to use their conventional lending practices to determine whether borrowers can obtain credit elsewhere at reasonable terms. On the basis of a review of 238 loan files at 18 lenders, GAO observed that the most common reasons these lenders cited to substantiate that borrowers could not obtain credit elsewhere were that the borrower needed a longer maturity than the lender's policy permitted and the borrower's collateral did not meet the lender's requirements. These factors are two of the six listed in SBA's guidance as acceptable to substantiate that a borrower could not obtain conventional credit. SBA has issued little guidance on how lenders should document in their files that borrowers could not obtain credit elsewhere. Internal control standards for federal agencies specify that good guidance (information and communication) is necessary to help ensure the proper implementation of program rules. While SBA's guidance requires lenders to explain why the borrower could not obtain credit elsewhere in the loan file, it does not specify what exactly lenders should include in their explanations. 
Between October 2006 and March 2008, SBA reviewed 97 lenders and determined that 31 of them had failed to consistently document that borrowers met the credit elsewhere requirement or personal resources test. All but one of the lenders with whom GAO met documented their credit elsewhere decisions in some way; however, given the broad authority granted to lenders, the explanations were generally not specific enough to reasonably support the lender's conclusion that borrowers could not obtain credit elsewhere. A number of these lenders used a checklist that simply listed the six acceptable reasons cited in SBA's guidance for substantiating that a borrower could not obtain credit elsewhere and did not prompt them to provide more information specific to the borrower--for example, details on insufficient collateral. Absent detailed guidance on what exactly SBA wants lenders to document in their credit elsewhere determinations, lenders likely will continue to offer limited information in their files, making meaningful oversight of compliance with the credit elsewhere requirement difficult. |
FAA’s Aircraft Certification Service (Aircraft Certification) and Flight Standards Service (Flight Standards) issue certificates and approvals for the operators and aviation products used in the national airspace system based on standards set forth in federal aviation regulations. FAA inspectors and engineers working in Aircraft Certification and Flight Standards interpret and implement the regulations governing certificates and approvals via FAA policies and guidance, such as orders, notices, and advisory circulars. (See fig. 1.) Aircraft Certification’s approximately 950 engineers and inspectors in 42 field offices issue approvals to the designers and manufacturers of aircraft and aircraft engines, propellers, parts, and equipment. Since 2005, Aircraft Certification has used project sequencing to prioritize certification submissions on the basis of available resources. Projects are evaluated against several criteria, including safety attributes and their impact on the air transportation system. Figure 2 outlines the key phases in Aircraft Certification’s approval process. In Flight Standards, approximately 4,000 inspectors issue certificates allowing individuals and entities to operate in the National Airspace System (NAS). These include certificates to commercial air carriers, operators of smaller commercial aircraft, repair stations, and pilot schools and training centers. Flight Standards also issues approvals for programs, such as training. Flight Standards field office managers in over 100 field offices use the Certification Services Oversight Process to initiate certification projects within their offices. Delays occur when FAA wait-lists certification submissions because it does not have the resources to begin work on them. Once FAA determines that it has the resources to oversee an additional new certificate holder, accepted projects are processed on a first-in, first-out basis within each office. Figure 3 illustrates the key steps in the Flight Standards certification process. Responsibility for the continued operational safety of the NAS is shared by Aircraft Certification and Flight Standards, which oversee certificate holders, monitor operators’ and air agencies’ operation and maintenance of aircraft, and oversee designees and delegated organizations (known as organization designation authorizations or ODA). In 2010, we reported that many of FAA’s certification and approval processes contribute positively to the safety of the NAS, according to industry stakeholders and experts.and approval processes work well most of the time because of FAA’s long-standing collaboration with industry, flexibility within the processes, and committed, competent FAA staff. Industry stakeholders and experts noted that negative certification and approval experiences, such as duplication of approvals, although infrequent, can result in costly delays for them, which can disproportionately affect smaller operators. We made two recommendations to improve the efficiency of the certification and approval processes. FAA addressed one recommendation and partially addressed the other. We found that while FAA had taken actions to improve the efficiency of its certification and approval processes, it lacked outcome-based performance measures and a continuous evaluative process to determine if these actions were having the intended effects. 
To address these issues, we recommended that FAA develop a continuous evaluative process and use it to create measurable performance goals for the actions, track performance toward those goals, and determine appropriate process changes. To the extent that this evaluation of agency actions identifies effective practices, we further recommended that FAA consider instituting those practices agency wide, i.e., in Aircraft Certification and Flight Standards. In response to our recommendation, FAA implemented new metrics that provide the ability to track process performance and product conformity to standards. These metrics would allow FAA to set measurable performance goals necessary to determine the effectiveness of the certification and approval processes and assist FAA in deciding on necessary and appropriate actions to address systemic issues that could negatively impact agency processes and their outcomes. These actions addressed the intent of our recommendation. We also recommended that FAA develop and implement a process in Flight Standards to track how long certification and approval submissions are wait-listed, the reasons for wait-listing them, and the factors that eventually allowed initiation of the certification process. As of October 2013, FAA had partially addressed this recommendation by altering the software in its Flight Standards’ Certification Services Oversight Process database to designate when certification submissions are wait-listed. The database now tracks how long certification submissions are wait-listed. As a result, FAA now has the capability to track how long certification submissions are wait-listed and reallocate resources, if appropriate, to better meet demand. In April 2012, as required by Section 312 of the Act, FAA established the Aircraft Certification Process Review and Reform Aviation Rulemaking Committee (certification process committee). Its role is to make recommendations to the director of FAA’s Aircraft Certification Service to streamline and reengineer the certification process. The committee considered guidance and current certification issues—including methods for enhancing the use of delegation and the training of FAA staff in safety management systems—and assessed the certification process. It developed six recommendations, which called for FAA to develop comprehensive implementation plans for certification process improvement initiatives, including measuring the effectiveness of the implementation and benefits of improvements as well as developing a means to track and monitor initiatives and programs; continue to improve the effectiveness of delegation programs; develop an integrated, overarching vision of the future state for certification procedures; update Part 21 certification procedures to reflect a systems approach; develop and implement a comprehensive change management plan to prepare the workforce for its new responsibilities in a systems safety approach to certification and oversight; and review continued operational safety and rulemaking processes and implement reforms to improve efficiency. We found these recommendations to be relevant, clear, and actionable. In response to the committee’s recommendations, FAA developed a plan that includes 14 initiatives to implement the committee’s recommendations and publicly reported the plan in July 2013. 
We believe that the committee took a reasonable approach in assessing FAA’s aircraft certification process and developing recommendations by assessing the status of previous recommendations from 19 reports related to the certification process, reviewing certification guidance and processes as well as major initiatives, and reviewing other areas that it believed required consideration when making recommendations for improving efficiencies in the certification process. FAA has many initiatives and programs underway that it believes will respond to the committee’s recommendations to improve efficiency and reduce costs related to certifications. For example, FAA and two industry groups had already developed an ODA action plan to address the effectiveness of the ODA process. We found these initiatives were generally relevant to the recommendations and clear and measurable. However, FAA’s initiatives and programs to implement the recommendations do not contain some of the elements essential to a performance measurement process. For example, the certification process committee recommended that FAA develop an integrated roadmap and vision for certification process reforms, including an integrated overarching vision of the future state for certification procedures. While FAA has outlined a vision in AIR: 2018, it has not yet developed a roadmap. FAA is planning to roll out its roadmap, which is to include information on major change initiatives and a scaled change management process, concurrently with or following implementation of many of its certification process improvement initiatives. This calls into question FAA’s ability to use the roadmap to guide the initiatives. FAA has developed milestones for each initiative and deployed a tracking system to track and monitor the implementation of all certification-related initiatives. However, FAA has not yet developed performance measures to track the success of most of the initiatives and programs. The agency plans to develop these measures of effectiveness after it has implemented its initiatives. Without early performance measures, FAA will not be able to gather the appropriate data to evaluate the success of current and future initiatives and programs. In addition, in response to the certification process committee’s recommendation to review rulemaking processes and implement reforms to improve efficiency, FAA plans to expedite the rulemaking process by implementing a new rulemaking prioritization model. However, this model will have no effect on the duration of the rulemaking process since it only prioritizes potential rulemaking projects for submission to the rulemaking process and makes no changes to the rulemaking process per se. In 2010, we reported that variation in FAA’s interpretation of standards for certification and approval decisions is a long-standing issue that can result in delays and higher costs for industry. For example, a 1996 study found that, for air carriers and other operators, FAA’s regulations are often ambiguous; subject to variation in interpretation by FAA inspectors, supervisors, or policy managers; and in need of simplification and consistent implementation. Experts on our panel and most industry officials we interviewed for our 2010 report indicated that although variation in decisions is a long-standing, widespread problem, it has rarely led to serious certification and approval process problems, and experts on our panel generally noted that serious problems occur less than 10 percent of the time. 
Nonetheless, when such occasions occur, experts on our panel ranked inconsistent interpretation of regulations, which can lead to variation in decisions, as the most significant problem for Flight Standards and as the second most significant problem for Aircraft Certification. Panelists’ concerns about variation in decisions included instances in which approvals are reevaluated and sometimes revised or revoked in FAA jurisdictions other than those in which they were originally granted. Such situations can result in delays and higher costs for industry but also may catch legitimate safety concerns. According to industry stakeholders we spoke with, variation in FAA’s interpretation of standards for certification and approval decisions is a result of factors related to performance-based regulations, which allow for multiple avenues of compliance, and the use of professional judgment by FAA staff. FAA’s Deputy Associate Administrator for Aviation Safety and union officials representing FAA inspectors and engineers acknowledged that variation in certification and approval decisions occurs and that FAA has taken actions to address the issue, including the establishment of a quality management system to standardize processes across offices. A second FAA-industry committee—the Consistency of Regulatory Interpretation Aviation Rulemaking Committee (regulatory consistency committee)—established to respond to Section 313 of the Act, identified three root causes of inconsistent interpretation of regulations—(1) unclear regulatory requirements; (2) inadequate and nonstandard FAA and industry training in developing regulations, applying standards, and resolving disputes; and (3) a culture that includes a general reluctance by both industry and FAA to work issues of inconsistent regulatory application through to a final resolution and a “fear of retribution.” The root causes are consistent with issues raised in our 2010 review and those raised by industry during that review. To address the root causes, the committee made six recommendations to promote clearer regulations and guidance, more standardized application of rules, a consolidation and cross-reference of guidance and rules, and improved communication between FAA and industry. In priority order, those recommendations called for developing a single master source for guidance organized by Title 14 of the Code of Federal Regulations (which covers commercial aviation); developing instructions for FAA staff with policy development responsibilities; reviewing FAA and industry training priorities and curriculums; setting up a board to provide clarification to industry and FAA; improving the clarity in final rules issued by FAA; and creating a communications center to act as a central clearinghouse to assist FAA staff with queries about interpretation of regulations. We found that the committee took a reasonable approach in identifying these root causes and developing its recommendations. It compiled and reviewed case studies involving issues of regulatory application, obtained additional information by surveying industry stakeholders, and reviewed FAA regulatory guidance material. The recommendations are relevant to the root causes, actionable, and clear. The committee also considered the feasibility of the recommendations by identifying modifications to existing efforts and programs and prioritizing the recommendations. FAA reported on July 19, 2013, that it is determining the feasibility of implementing these recommendations. 
The agency told us that it expected to develop an action plan to address the recommendations and metrics to measure implementation by December 2013. We note that measuring implementation may provide useful information; however, FAA is not intending to measure outcomes. Measuring outcomes can help in understanding if an action is having the intended effect. FAA’s certification and approval processes generally work well. However, when the certification and approval processes do not work well, the result can be costly for industry and FAA. Inconsistent interpretation of regulations can lead to rework by FAA and industry. Likewise, inefficient processes can require extra time and resources. FAA faces challenges in implementing the committees’ recommendations and further improving its certification and approval processes. FAA’s certification and approval workload is expected to grow over the next 10 years because of activities such as the introduction of new technologies and materials, such as composite materials used in airplanes, according to one industry committee report. Additional work will be needed to establish new means of compliance and establish new standards. In addition, FAA’s certification and approval workload is likely to increase substantially as the Next Generation Air Transportation System (NextGen) progresses and operators need to install additional equipment on their aircraft to take full advantage of NextGen capabilities. Having certification and approval processes that work well will allow FAA to better meet these increasing workload demands and better ensure aviation safety in an era of limited resources. To its credit, FAA has taken steps toward improving the efficiency of its certification and approval processes. It will be critical for FAA to follow through with its plans for implementing the key recommendations to achieve the intended efficiencies and streamlining. However, making fundamental changes to the certification and approval processes can require a cultural change by FAA’s workforce, and resistance to change can cause delays. Some improvements to the processes, such as those requiring new rulemakings, will likely take years to implement and, therefore, will require a sustained commitment as well as congressional oversight. Chairman LoBiondo, Ranking Member Larsen, and members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Teresa Spisak (Assistant Director), Pamela Vines, Melissa Bodeau, David Hooper, Sara Ann Moessbauer, Josh Ormond, and Jessica Wintfeld. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Among the agency's responsibilities for aviation safety, FAA issues certificates for new aircraft and parts and grants approvals for changes to air operations and aircraft. 
In 2010, GAO made recommendations to improve FAA's certification and approval processes. Subsequently, the Act required FAA to work with industry to assess the certification process and address some of the findings in GAO's report. In July 2013, FAA issued reports on its efforts, including those in response to committee recommendations and FAA's implementation plans. This testimony addresses FAA's responses to the recommendations made by GAO in 2010 and the two joint FAA-industry committees concerning (1) the certification and approval processes and (2) the consistency of regulatory interpretation. It also discusses future challenges facing FAA's certification and approval processes. This statement is based in part on GAO's 2010 report. More detailed information on the objectives, scope, and methodology for that work can be found in that report. In addition, for this statement, GAO interviewed industry representatives, reviewed the methodologies used to develop the committees' recommendations, and assessed the recommendations and FAA's planned responses to those recommendations in terms of whether they were relevant, clear, actionable, and feasible. GAO is not making any new recommendations in this testimony. In 2010, GAO reported that industry stakeholders and experts believed that the Federal Aviation Administration's (FAA) certification and approval processes contribute positively to the safety of the national airspace system. However, stakeholders and experts also noted that negative certification and approval experiences--such as duplication of approvals--although infrequent, can result in delays that industry says are costly. GAO made two recommendations requiring, among other things, that FAA develop a continuous evaluative process and a method to track submission approvals. FAA addressed one recommendation and partially addressed the other. An FAA-industry committee established in response to the FAA Modernization and Reform Act of 2012 (the Act) made six recommendations to improve the certification and approval processes, including establishing a performance measurement process. In response to recommendations from the certification process committee, FAA developed an implementation plan with 14 initiatives, but the initiatives do not contain some elements essential to a performance measurement process, such as performance measures. Without performance measures, FAA will be unable to evaluate current and future programs. GAO also reported in 2010 that variation in FAA's interpretation of standards for certification and approval decisions is a long-standing problem. A second FAA-industry committee, established in response to the Act, made recommendations concerning the consistency of regulatory interpretation. FAA reported that it is determining the feasibility of implementing the recommendations and expected to develop an action plan by December 2013. Further, FAA reported it would measure implementation, but not outcomes; measuring outcomes helps to understand if the action is having the intended effect. Among the challenges facing FAA, its certification and approval workload is expected to grow due to the introduction of new technologies and materials and expected progress in the deployment of the Next Generation Air Transportation System. Having efficient and consistent certification and approval processes would allow FAA to better use its resources to meet these increasing workload demands and better ensure aviation safety in an era of limited resources. |
VAMCs conduct an initial review of cases that are identified as possible adverse events to determine how best to respond and which process to use to determine the facts of the case, such as protected peer review, FPPE, or AIB. Because VAMCs generally have discretion in which of these processes they choose to use to respond to an adverse event, different VAMCs may choose different processes in response to experiencing similar adverse events. Based on the nature of the adverse event and the information gleaned through a particular review process, a VAMC may decide to conduct multiple types of reviews, both protected and nonprotected processes, as appropriate. Information collected through protected review processes, including protected peer review, cannot be used to inform adverse actions against a provider; information collected through nonprotected processes, including FPPEs and AIBs, can be used to support a VAMC’s decision to take adverse action against a provider. According to VHA policy, VAMCs can use both protected and nonprotected processes concurrently or consecutively as long as protected and nonprotected processes and data collection are kept separate. According to VHA officials, if a VAMC is using a protected process to review an event and realizes that a nonprotected review may be necessary, the protected process should be stopped and the VAMC should start a nonprotected review. See figure 1 for an illustration of the decision process a VAMC official might use when deciding how to respond to an adverse event. According to VHA’s protected peer review policy, peer review is required under certain circumstances, such as a death that appears related to a hospital-incurred incident or a complication from treatment and a suicide within 30 days of a clinical encounter with a VA health care professional. Peer review may be considered in other circumstances, such as when there is an unexpected or negative outcome. Once VAMC officials decide to conduct a protected peer review, a peer reviewer is assigned to evaluate the care delivered and the actions taken by the provider. The peer reviewer makes an initial determination of whether the provider should have taken different action when providing patient care and preliminarily assigns one of the following three levels of care: Level of care 1 – the most experienced, competent providers would have managed the case in a similar manner; Level of care 2 – the most experienced, competent providers might have managed the case differently; or Level of care 3 – the most experienced, competent providers would have managed the case differently. According to VHA’s peer review policy, the initial peer review should be completed within 45 calendar days from determination of the need for peer review. If the peer-reviewed case is assigned a level of care 2 or 3, it must be referred to the VAMC’s peer review committee for further review. After conducting a further review of the facts of the case, and receiving further input from the provider under review, the peer review committee either validates the initial level of care or assigns a higher or lower level of care. The peer review committee’s level of care rating is final and must be completed within 120 calendar days from determination of the need for peer review. The peer review committee can also make recommendations for nonpunitive, nondisciplinary actions, as appropriate, such as reviewing and revising local policy, to improve the quality of care delivered. 
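To restate the referral and timeliness rules just described in compact form, here is a minimal Python sketch; the case record is hypothetical and this is not VHA software, only an illustration of the 45- and 120-day deadlines and the level of care 2/3 referral rule.

```python
from datetime import date, timedelta

# Minimal sketch of the rules described above: a level of care 2 or 3 at initial
# review must be referred to the peer review committee; the initial review is due
# within 45 days and the final rating within 120 days of the determination that
# peer review is needed. Case data below are hypothetical.

INITIAL_DUE_DAYS = 45
FINAL_DUE_DAYS = 120

def needs_committee_referral(initial_level_of_care: int) -> bool:
    return initial_level_of_care in (2, 3)

def due_dates(determination_date: date):
    return (determination_date + timedelta(days=INITIAL_DUE_DAYS),
            determination_date + timedelta(days=FINAL_DUE_DAYS))

case = {"determined": date(2011, 3, 1), "initial_level": 3}
initial_due, final_due = due_dates(case["determined"])
print(f"Refer to committee: {needs_committee_referral(case['initial_level'])}")
print(f"Initial review due by {initial_due}; final rating due by {final_due}")
```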
The final level of care rating and any recommendations for improvement are reported to the provider’s supervisor, who gives the provider feedback that is based on the peer review committee’s findings. According to VHA officials, VAMCs conduct approximately 23,000 protected peer reviews systemwide annually. VHA also has a contract with an external organization that is used to audit protected peer review. VAMCs may request external protected peer review expertise if there are no qualified peers available at the VAMC. According to a VISN official, external review may also be requested if the VAMC needs to ensure an independent peer review, for example, if all the providers in the same clinical specialty were involved in the event. VHA also requires each VAMC to submit quarterly a sample of cases that were recently peer-reviewed, for a secondary peer review by this external organization. According to VHA’s protected peer review policy, each VAMC is required to develop peer review or professional activity triggers to signal the need for further assessment of a provider’s clinical care. The triggers are specific to an individual VAMC. If, after a detailed assessment, concerns arise about a provider’s ability to deliver safe, quality patient care, then an FPPE would be conducted. For example, if a provider meets a VAMC’s peer review triggers by receiving three peer review level of care ratings of 3—meaning that the most experienced, competent providers would have managed the cases differently—within a 12-month period, then VAMC officials would be prompted to conduct a detailed assessment of the provider’s care, and address concerns about the provider’s ability to deliver safe, quality patient care by conducting an FPPE. According to VHA policy, an FPPE may be used when a question arises regarding a provider’s ability to provide safe, quality patient care, such as whether documentation of patient encounters is late or insufficient, or whether diagnoses are accurate. An FPPE may also be used when a provider meets or exceeds a VAMC’s peer review triggers and a detailed assessment indicates the need for further review. According to VHA and VAMC officials, an FPPE begins by providing an opportunity for a provider to improve his or her performance in area(s) of identified concern. The FPPE is a time-limited period during which medical staff leadership assesses the provider’s professional performance and ability to improve. According to VAMC officials, if the provider’s performance improves, VAMC officials may decide that the FPPE process is completed and the provider may be returned to routine monitoring. If a provider’s performance does not improve, then medical staff leaders may decide to continue the FPPE for an additional period of time or determine if a privileging action should be taken, such as reducing or revoking a provider’s privileges. The FPPE may include a review of the provider’s care, either through medical record review, direct observation, or discussions with other individuals involved in the care of patients. FPPEs conducted when a question arises regarding a provider’s ability to provide safe, quality patient care are not conducted often, according to VHA and VAMC officials. According to a VHA official, about 100 FPPEs of this kind are conducted VHA-wide each year. VA typically uses AIBs to examine nonclinical issues for which the facts are in dispute, such as allegations of employee misconduct, according to VHA and VAMC officials. 
These officials also said VAMCs may use AIBs to investigate issues of an individual provider’s clinical competence, but this is not typical. According to VA’s AIB policy, the convening authority—typically the VAMC director—appoints members to an AIB and defines the scope and authority of the investigation. The AIB collects and analyzes evidence and develops a report, including findings and conclusions. The VAMC director may use an AIB’s findings to inform decisions of whether to take adverse action against a provider and, if so, what type of action to pursue. In our 2012 report on AIBs, we found that VAMCs and VISNs conducted a total of 1,136 investigations nationwide from fiscal year 2009 through 2011, but because VA does not track the types of matters investigated, it is unclear how many of these were related to clinical competence. There are several VA and VHA organizational components that are involved in monitoring VAMCs’ adverse events and related processes. VHA’s Office of the ADUSH for Quality, Safety and Value is responsible for establishing VHA policies for protected peer review and FPPEs and for providing guidance to VAMCs on using those processes. VHA’s Office of the DUSHOM oversees the VISNs and provides the VISNs broad and general operational direction and guidance; VISN directors are tasked with oversight of the protected peer review process. The Office of the Medical Inspector addresses health care problems to monitor and improve the quality of care provided by VHA; veterans may report problems with medical care received at VAMCs directly to the Office of the Medical Inspector. The VA OIG Office of Healthcare Inspections inspects individual health care issues and performs quality program assistance reviews of VAMC operations. See figure 2 for a simplified organizational chart of the relationship among these entities. According to VHA’s protected peer review policy, and supported by federal internal control standards for risk assessment and information and communications, VHA’s protected peer review policy requirements should ensure that identified patient safety risks are mitigated and lead to organizational improvements and optimal patient outcomes. Additionally, federal internal control standards state that agencies should have reliable information relating to internal events to effectively run and control their operations; this information should be identified, captured, and communicated in sufficient detail and at the appropriate time to the right people. VAMC officials from all four sites we visited demonstrated a general understanding of the process as described in the protected peer review policy. For example, officials from each of the four VAMCs knew that peer review was a protected nonpunitive process and officials from three of the four VAMCs were able to describe the steps of the process. However, our analysis of peer review data for fiscal years 2009 through 2011 provided by the four VAMCs showed that none of the VAMCs adhered to all four VHA protected peer review policy elements selected for review: (1) completing the initial peer review within 45 calendar days, (2) completing the final peer review within 120 calendar days, (3) sending all initial level of care 2 and 3 peer reviews to the peer review committee, and (4) developing peer review triggers. Additionally, these peer review data included varying amounts of missing and inaccurate data, which affected our ability to fully analyze these data. 
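As a concrete picture of the adherence analysis summarized in the findings that follow, the sketch below computes the kinds of percentages we report; the record layout, field names, and sample values are assumptions for illustration, not the VAMCs' actual peer review data.

```python
# Sketch of computing adherence to the timeliness and referral policy elements
# from per-case peer review records. Field names and sample records are hypothetical.

cases = [
    {"days_to_initial": 30, "days_to_final": 90,  "initial_level": 1, "sent_to_committee": False},
    {"days_to_initial": 60, "days_to_final": 130, "initial_level": 3, "sent_to_committee": True},
    {"days_to_initial": 40, "days_to_final": 100, "initial_level": 2, "sent_to_committee": False},
]

def pct(numerator, denominator):
    return round(100 * numerator / denominator) if denominator else None

initial_on_time = pct(sum(c["days_to_initial"] <= 45 for c in cases), len(cases))
final_on_time = pct(sum(c["days_to_final"] <= 120 for c in cases), len(cases))

referable = [c for c in cases if c["initial_level"] in (2, 3)]
referred = pct(sum(c["sent_to_committee"] for c in referable), len(referable))

print(f"Initial reviews completed within 45 days: {initial_on_time}%")
print(f"Final reviews completed within 120 days: {final_on_time}%")
print(f"Level of care 2/3 reviews sent to committee: {referred}%")
```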
See table 2 for a summary of the four VAMCs’ adherence to selected protected peer review policy elements. Completing initial peer reviews within 45 calendar days. In our analysis of the timeliness policy elements for fiscal years 2009 through 2011, we found that VAMCs A and D completed 77 to 79 percent of initial peer reviews within 45 calendar days. VAMC B completed 100 percent of initial peer reviews within 45 calendar days for the 2 fiscal years for which data were available—2010 and 2011. We could not determine the completion percentage for VAMC C because the data provided by the facility did not contain all of the information needed. According to a VAMC C official, the peer review data provided to us were compiled just before our site visit and were based on a review of past records; the missing data elements could not be located during that review. Officials from VAMCs A and D told us that it was difficult to get peer reviewers to abide by the 45-day rule, and, according to officials from VAMC D, the heavy workload of those in charge of peer review tracking during our study period affected their ability to ensure peer reviewers’ timely completion. According to the official in charge of peer review tracking at VAMC D, a routine process is now in place at this facility to remind peer reviewers to complete their reviews after 30 of the 45 calendar days allotted have elapsed. Completing the final peer reviews within 120 calendar days. VAMCs A and D each completed 89 percent of final peer reviews within 120 calendar days for fiscal years 2009 through 2011; VAMC B completed 97 percent for fiscal years 2010 through 2011. As previously noted, we were unable to determine the completion percentage for VAMC C. An official at VAMC A told us that, in addition to the delays in completing the initial peer reviews within 45 calendar days, the peer review committee might not have always been able to review all peer-reviewed cases at its monthly meeting if there had been a particularly large number of cases sent to the committee that month. Sending level of care 2 and 3 peer reviews to the peer review committee. We found that VAMCs A and D sent 96 to 100 percent of their initial level of care 2 and 3 peer reviews to the peer review committee for fiscal years 2009 through 2011; VAMC B sent 99 percent for fiscal years 2010 through 2011. VAMC C sent 79 percent of its initial level of care 2 and 3 peer reviews to the peer review committee. Officials from VAMC C told us that a possible reason for the low adherence rate could be that the peer review committee and one of the VAMC’s service line committees tasked with sending level of care 2 and 3 cases to the peer review committee for further review did not communicate well during this time period. According to these officials, communication and coordination between these two committees have recently improved. Developing peer review triggers. We found that some VAMCs did not develop peer review triggers in a timely way. VHA’s protected peer review policy issued in January 2008 required that VAMCs develop criteria that may define the need for further review or action. VHA’s 2010 update of this policy specified that VAMCs were required to establish peer review or professional activity triggers. The 2010 policy update provides one example of what these triggers could include—three peer review level of care 3 ratings for a provider within 12 consecutive months. 
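To make that example trigger concrete, the sketch below checks a provider's ratings against the "three level of care 3 ratings within 12 consecutive months" example from the 2010 policy update; the provider dates are made up, and an individual VAMC may define different triggers.

```python
from datetime import date, timedelta

# Sketch of the example trigger from VHA's 2010 policy update: three level of
# care 3 ratings for a provider within 12 consecutive months. Sample dates are
# hypothetical; this is an illustration, not a VHA tool.

def meets_trigger(level3_dates, window_days=365, threshold=3):
    """Return True if `threshold` level of care 3 ratings fall within any
    rolling window of `window_days` days."""
    dates = sorted(level3_dates)
    for i in range(len(dates) - threshold + 1):
        if dates[i + threshold - 1] - dates[i] <= timedelta(days=window_days):
            return True
    return False

provider_level3_ratings = [date(2010, 2, 1), date(2010, 7, 15), date(2011, 1, 20)]
if meets_trigger(provider_level3_ratings):
    print("Trigger met: conduct a detailed assessment and consider an FPPE")
```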
Further, federal internal control standards for risk assessment state that management should identify internal risks and undertake a thorough and complete analysis of the possible effects. Additionally, the standards state that risk assessment should include establishment of criteria for determining low, medium, and high risk levels. We found that while all four VAMCs we visited developed peer review triggers, two of the four did so approximately two years after the updated policy was issued. (See table 3.) According to some VAMC officials we spoke with, the delay in developing peer review triggers was the result, at least in part, of a lack of guidance from VHA. When asked about peer review triggers, officials from VAMCs B and C told us they did not receive any assistance from VHA in developing triggers and that the guidance provided by VHA was vague. Officials from VAMCs A and C, the two VAMCs that did not initially develop peer review triggers, told us that while they had informal triggers that could prompt the need for a nonprotected review, such as an FPPE, they had not formally documented the triggers. When VAMCs fail to complete peer reviews in a timely manner and to send all level of care 2 and 3 initial peer reviews to the peer review committee, they put patients’ safety at risk through potential exposure to substandard care. VAMCs may fail to identify problematic providers in a timely manner and take the appropriate actions. Additionally, by not submitting initial level of care 2 and 3 peer reviews to the peer review committee for further evaluation, VAMCs are not ensuring that the initial ratings assigned were appropriate. Moreover, the delayed establishment of the peer review triggers by some VAMCs may have resulted in missed opportunities to identify providers who posed a risk to patient safety and to conduct an FPPE, which would have allowed any warranted action to be taken against the provider. Within VHA, the VISNs and the Office of Risk Management monitor VAMCs’ protected peer review processes. Officials from the VISNs that oversee the four VAMCs we visited told us they monitor VAMCs’ protected peer review processes through quarterly data monitoring and annual site visits, as required by VHA’s protected peer review policy. Officials from these VISNs said they monitor peer review data that VAMCs are required to submit quarterly. Data elements that are to be submitted by VAMCs and reviewed by the VISN include, but are not limited to, the number of peer reviews completed, the assigned level of care ratings by the initial peer reviewer and by the peer review committee, the number of assigned level of care ratings changed to a higher or lower level by the peer review committee, and the timeliness of the reviews. After completing their review of VAMCs’ quarterly data, VISN officials are to send aggregated peer review data to VHA’s Office of the ADUSH for Quality, Safety and Value through a shared electronic database. In addition to quarterly review of the data, officials from all four VISNs said they conduct annual site visits to the VAMCs within the VISN; these site visits include a multifaceted review of VAMCs’ quality management operations. However, we did not review documentation from the VISNs’ site visits; one VISN official told us that the VISN does not keep formal records and is required only to attest to the visits’ completion.
The scope of VISNs’ annual site visits covers a broad variety of topics, including VAMCs’ protected peer review processes, according to VISN officials. All VISN officials we spoke with told us they typically chose which elements of the peer review process to review based on the focus of recent inspections by other entities, such as The Joint Commission and the VA OIG. For example, officials we spoke with from one of the four VISNs said they reviewed in a 2012 site visit whether the VAMCs implemented the protected peer review policy elements for timeliness—specifically, completing final peer reviews within 120 calendar days; VA OIG reported in January 2011 that a VAMC within the VISN was not compliant with this timeliness requirement. VHA’s Office of Risk Management also analyzes peer review data; VHA’s protected peer review policy requires VHA to conduct analysis of peer review data findings submitted by each VISN and to disseminate those findings to the Under Secretary for Health, VISNs, and other leadership (see VHA Directive 2010-025). Among the data elements examined in this analysis is the rating level of care changed by the peer review committee. The Director of Risk Management said that if an outlier is identified in the aggregated data, it is brought to the attention of the relevant VISN officials. An official from the Office of Risk Management reported that the office produces quarterly reports from analysis of the VHA systemwide aggregated protected peer review data and that these quarterly reports are shared with each VISN. According to VHA’s fourth quarter report for fiscal year 2012, ratings for 22 percent of peer-reviewed cases were changed by the VAMCs’ peer review committees. VHA’s analysis found that the peer review committees were more likely to improve the peer review rating by decreasing the assigned rating level, such as decreasing a level of care 3 to a level of care 1 or 2, or a level of care 2 to a level of care 1. In addition to monitoring VISNs’ aggregated protected peer review data, VHA’s Director of Risk Management said the office communicates regularly with the Office of the DUSHOM, as required by VHA’s protected peer review policy and supported by federal internal control standards for information and communications. According to VHA’s protected peer review policy, the DUSHOM’s responsibilities include establishing and maintaining the peer review program in coordination with the ADUSH for Quality, Safety and Value, and providing direction and guidance on data elements that VAMCs must report through VISNs to VHA. Federal internal control standards state that mechanisms should exist to allow the easy flow of information down, across, and up through the organization, and easy communications should exist between functional activities. These standards also state that responsibility for decision-making should be clearly linked to the assignment of authority, and individuals should be held accountable accordingly. The Director of Risk Management said that the office periodically gives brief overviews of the office’s work, including protected peer review findings, to the Under Secretary for Health and to the DUSHOM. Beyond VHA monitoring of VAMCs’ protected peer review process, the VA OIG also routinely reviews certain policy elements of the process through its Combined Assessment Program, which reviews each VAMC every 2 to 3 years.
In the last 5 years, the VA OIG has reviewed VAMCs’ compliance with several requirements, including that (1) the peer review committee submit quarterly reports to the medical executive committee, (2) the peer review committee analyze protected peer reviews for trends in follow-up items and recommendations, and (3) protected peer reviews by the initial reviewer and the peer review committee be completed in a timely manner. The VA OIG has conducted at least one Combined Assessment Program review since 2011 at each of the four VAMCs we visited. See table 4 for a summary of protected peer review monitoring activities conducted by VISNs, VHA’s Office of Risk Management, and VA OIG. According to officials from the Office of Risk Management, routine monitoring of certain policy elements of VAMCs’ protected peer review processes has enabled VHA to make changes in policy and improve protected peer review monitoring activities. VHA’s Director of Risk Management said that her office is responsible for making changes to policy in response to VA OIG findings related to the protected peer review process. For example, in a 2008 report on VISNs’ oversight of the protected peer review process, the VA OIG found that VISNs failed to substantially comply with the requirement to conduct periodic inspections; the VA OIG recommended that VHA clarify its peer review policy to define the periodic site visits required of VISNs. VHA’s 2008 revision of the protected peer review policy redefined site visits as annual, resulting in more frequent monitoring by VISNs. VHA also improved peer review data monitoring activities. According to the Director of Risk Management, a national survey of VAMC risk managers was conducted in 2011 to better understand how protected peer review data are used. Based on survey data and a literature review, VHA officials said they determined that VAMCs and VISNs needed additional data on the specific areas of care that were being peer-reviewed as well as information about significant problems identified and frequently cited issues. VHA officials said that in fiscal year 2012, they expanded the required quarterly protected peer review data set that VAMCs report to the VISN to include the commonly peer-reviewed aspects of care, such as the choice of a diagnostic test, the performance of a procedure or treatment, and the handling of abnormal results of diagnostic tests. According to VHA’s protected peer review policy, VISNs and the Office of Risk Management play a role in monitoring VAMCs’ implementation of protected peer review processes, which include peer review triggers. In addition, federal internal control standards state that management needs to comprehensively identify risks and analyze the possible effects; these peer review triggers serve as part of VHA’s risk assessment tool to help identify issues of risk to patient safety and improve the organization. Officials of the VISNs for the four VAMCs we visited and the VA OIG told us they have reviewed the establishment of peer review triggers by VAMCs. However, VISNs, VHA, and VA OIG have not monitored whether the triggers have actually been implemented. Officials we interviewed from two of the four VISNs said their VISNs reviewed VAMCs’ peer review triggers during the required annual site visits; an official from a third VISN said that the VISN has confirmed that most VAMCs have established peer review triggers.
Officials from the fourth VISN told us they have reviewed types of triggers in place at VAMCs, but they have not confirmed that VAMCs have established peer review triggers. Officials from all four VISNs told us they typically do not monitor whether VAMCs have implemented the established peer review triggers, including monitoring how many FPPEs for cause have been triggered. One VISN official explained that the VISN cannot monitor every aspect of every policy and regulation that governs VAMCs’ operations; therefore, the VISN chooses to focus its monitoring efforts on the elements of particular importance where other entities—such as the VA OIG and The Joint Commission—have found evidence of noncompliance. VHA’s Director of Risk Management told us that the office does not monitor whether VAMCs have established peer review triggers. Further, the official told us the office has not monitored how VAMCs have implemented peer review triggers or tracked how many providers may be exceeding the triggers and are subject to FPPEs for cause. The official also noted that VHA has not asked VAMCs to document their peer review triggers and has not asked the VA OIG to look specifically, during Combined Assessment Program reviews, at whether VAMCs have established triggers. Officials from the VA OIG told us they did monitor VAMCs’ establishment of peer review triggers as part of Combined Assessment Program reviews conducted in fiscal year 2009. The OIG review found 97 percent compliance with establishing peer review triggers across the 44 VAMCs OIG officials visited; the four VAMCs we visited were not among the 44 VAMCs included in this review. According to VA OIG officials, they decided not to review the establishment of peer review triggers in subsequent Combined Assessment Program reviews because of the high compliance rate in 2009; instead, subsequent Combined Assessment Program reviews focused on requirements with which VAMCs had not been in compliance, such as requiring the peer review committee to submit quarterly reports on protected peer review to the medical executive committee. VA OIG officials told us they have not reviewed whether VAMCs have implemented the triggers. Because neither VHA’s Office of Risk Management nor the VA OIG reviews whether peer review triggers have been implemented, VHA cannot provide reasonable assurance that VAMCs are using the triggers as a risk assessment tool as intended. Failure to do so weakens VAMCs’ ability to ensure patient safety, and officials cannot be assured that the use of these triggers meets the intended goal of identifying providers that are not delivering safe, quality patient care. FPPEs. According to federal internal control standards for control activities, written documentation should exist for all significant events that occur within an agency; this documentation should be readily available for examination, and it should be complete and accurate in order to facilitate tracing the event from initiation through processing to completion. In documenting FPPEs, building strong and complete evidence on each case is important to support the outcome of the evaluation, as well as to track the identified area of concern over time. VHA’s FPPE policy provides a general definition of an FPPE and states that it can be used for cause (when a question arises regarding a provider’s ability to provide safe, quality patient care), that the criteria for the FPPE should be defined by the VAMC in advance, and that the results of the FPPE must be documented in the provider’s profile.
However, there are gaps in VHA’s policy regarding how these evaluations should be documented and what information should be included, which limited our ability to assess VAMCs’ adherence to the FPPE policy. Officials from two of the VAMCs we visited told us there are no standardized guidelines on how the FPPE process should be structured. According to the Director of Credentialing and Privileging, VHA’s policy on FPPEs was intended to allow VAMCs flexibility in the design of the evaluation to accommodate the variety of ways that issues are identified and the types of issues that may be addressed (see box for an example of an FPPE). Example of an FPPE at one VAMC we visited. There are multiple ways that an FPPE may be prompted, including multiple patient complaints or exceeding a VAMC’s peer review trigger. At one VAMC we visited, a provider exceeded a peer review trigger—receiving two or more level of care 3 peer review ratings within 6 months—which prompted a detailed assessment, including a retrospective evaluation of 25 percent of the provider’s medical records over the previous 12 months. The medical records were evaluated specifically for patient evaluations, outcomes, and documentation. Upon completion of the assessment, VAMC officials determined that the provider’s documentation, including patient discharge summaries, patient transfer, medication review, and disclosure notes, was inadequate. An FPPE was initiated, during which the provider was instructed on how to properly document these types of summaries and notes. The evaluator also recommended that the provider write the notes for each patient seen before moving on to see other patients, instead of writing the notes for all patients at the end of the day. VAMC officials also initiated a performance improvement plan for the provider, which included monitoring his medical record documentation over a 6-month period. When officials determined that the provider had not improved after this period, the FPPE was extended to continue monitoring aspects of his clinical care. The provider retired 6 months later. Officials from the VAMCs we visited were generally aware that FPPEs can be used to address concerns about the quality of a provider’s care; are time limited; and are not disciplinary, but could ultimately be used to take adverse action against a provider, if necessary. While VAMC officials were generally aware of the FPPE process, we found that the four VAMCs we visited varied widely in their documentation of FPPEs, attributable at least in part to the lack of specificity in VHA’s FPPE policy regarding documentation requirements. In reviewing FPPEs conducted between fiscal years 2009 and 2011, we found the following: One of the four VAMCs provided a completed template for each of its FPPEs, including the purpose of the FPPE (specifying what triggered the review), the time period for review, comments by the evaluator, an action plan based on the review, and evidence of concurrence with the review by the applicable service chief. (See app. I for an example of an FPPE template used at one of the VAMCs we visited.) Two other VAMCs provided various combinations of documents as evidence of their FPPEs, including professional standards board minutes and emails and letters from evaluators. These documents contained varying amounts of information detailing the circumstances prompting the FPPE, comments from evaluators, and follow-up actions, if any.
The fourth VAMC initially provided us with documentation of one FPPE, including a completed template identifying the clinical service involved, the method of evaluation, the evaluator’s findings, and the service chief’s conclusions, as well as several documents—each labeled "focused professional practice evaluation" at the top—specifying the medical records evaluated for the FPPE and the evaluator’s comments on each case. The VAMC’s service chief told us that an FPPE had been conducted for cause for this provider, but the VAMC’s quality manager said a formal FPPE had not been conducted and that the documentation we received was part of a protected peer review process. This disagreement illustrates that even within the same facility the interpretation of VHA’s policy on FPPEs differs, which can lead to potentially inappropriate use. Gaps in VHA’s policy on FPPE documentation requirements create a lack of clarity and therefore may affect VAMCs’ ability to appropriately document the evaluation of providers’ skills, support any actions initiated, and track provider-specific FPPE-related incidents over time. For example, if FPPEs are not well documented, VAMC officials may have limited knowledge of the findings needed to proceed with any actions and limited ability to track that such evaluations were conducted. As a result, if another adverse event subsequently occurred involving the same provider, the VAMC might not be aware of any prior findings. One VAMC official stressed to us the importance of thorough documentation of an FPPE, even if the determination is made that the provider delivered safe, quality patient care and no adverse action is needed. Without adequate documentation, a VAMC may conduct an FPPE that complies with VHA’s policy, and determine that adverse action is needed on the basis of the evaluation’s findings, but ultimately may be unable to take the action because the documented evidence is insufficient. Officials at one VAMC said they did not believe that the evidence gathered from an FPPE was strong enough to hold up against a provider’s appeal of an adverse action. AIBs. Another type of nonprotected review that VAMC officials may use to address an adverse event is an AIB. In our review of VA’s AIB policy, we found that the policy generally provides clear guidance on the requirements, including documentation. For example, the policy specifies steps for how VAMC officials determine the need for an AIB, select the board members, and write the charge letter that convenes the AIB, and it also provides a number of templates and checklists, including templates for the charge letter and investigative report, and checklists for collecting evidence, conducting interviews, and writing the report. When asked about AIBs, VAMC officials told us an AIB could be used when (1) a case is egregious; (2) there is a previous incident involving the same provider; (3) the VAMC needs or wants to take privileging or disciplinary action; and (4) the facts of a case are in dispute or there is a high level of complexity and ambiguity in the case. For example, at one VAMC we visited, an AIB was convened to investigate the facts and circumstances regarding the care and treatment immediately preceding a veteran’s death.
The AIB was charged with investigating allegations that a provider’s care deviated from standard practice procedures, including prescribing inappropriate doses of medications, not ensuring appropriate monitoring or appropriate intervals between physical assessments, and providing substandard documentation and communication. FPPEs. According to federal internal control standards for monitoring, agencies should assess the quality of performance over time and provide reasonable assurance that deficiencies are detected and promptly resolved. Further, VHA’s credentialing and privileging policy states that the DUSHOM is responsible for ensuring that VISN directors maintain an appropriate credentialing and privileging process, which includes FPPEs, consistent with VHA policy. The VISN Chief Medical Officer is responsible for oversight of the credentialing and privileging process of the VAMCs within the VISN. Officials we interviewed from three of the four VISNs told us they do not monitor FPPEs conducted in response to questions about a provider’s ability to deliver safe, quality patient care. Officials from two of those three VISNs said that they have examined a sample of FPPEs during annual site visit reviews of VAMC operations; however, the sample of FPPEs would include mostly FPPEs for newly hired providers, since there are very few FPPEs conducted in response to questions that have arisen about a provider’s ability to deliver safe, quality patient care. An official from the fourth VISN said he monitors FPPEs during annual site visits to VAMCs, including FPPEs conducted in response to questions that have arisen about a provider’s ability to deliver safe, quality patient care. Officials from the office of the DUSHOM and the Office of Credentialing and Privileging said they do not monitor FPPEs conducted in response to questions about a provider’s ability to deliver safe, quality patient care. The Office of Credentialing and Privileging does not monitor these FPPEs because there are so few of them that the cost of reviewing a process that occurs so infrequently would outweigh the benefit. According to VHA officials, if an FPPE leads to a proposed reduction or revocation of clinical privileges, the Director of Credentialing and Privileging is frequently consulted to ensure that appropriate due process is afforded the provider. Similar to the VISNs, VA OIG officials told us they do not monitor FPPEs conducted when a question arises regarding a provider’s ability to deliver safe, quality patient care. However, VA OIG officials said that, during site visits for Combined Assessment Program reviews, they do review policy elements of the process for FPPEs for providers newly appointed to the VAMCs’ medical staff or for providers requesting new privileges. According to a VA OIG official, these policy elements include whether there is evidence that FPPEs for new providers or new privileges are initiated, completed, and reported to the medical executive committee. Because none of these entities monitor FPPEs conducted when a question arises regarding a provider’s ability to deliver safe, quality patient care, VHA cannot be assured that the process is working as intended or whether VAMCs need additional guidance or training about the process. AIBs. There are no requirements for VHA to monitor AIBs; however, federal internal control standards for information and communications state that relevant, reliable, and timely information is needed by an agency to achieve its objectives and to control its operations.
In our 2012 report on AIBs, we found that VA does not collect and analyze aggregated data on AIB investigations, and at that time, VHA officials told us that there were no plans to do so. In our report, we recommended that VA establish a process to collect and analyze aggregated data from AIB investigations conducted within VHA. Having these types of data may provide VA with valuable information to systematically gauge the extent to which matters investigated by AIBs are occurring throughout VHA and to take corrective action, if needed, to reduce the likelihood of future occurrences. According to VHA officials, after our 2012 report was issued, VHA created a workgroup to examine our recommendation and concluded that monitoring AIBs was not necessary or warranted. VAMCs’ adherence to policies on protected and nonprotected processes for responding to providers’ actions that contribute to adverse events helps ensure that quality care is provided to veterans and that safety risks are minimized. Having clear and detailed guidance in policy for these processes is critical to helping VAMC officials identify and address adverse events, including providers’ contributing actions, in a timely and appropriate manner. Although officials at the four VAMCs we reviewed generally understood the protected peer review process, we found that none of these four VAMCs adhered to all four VHA protected peer review policy elements selected for review, such as completing peer reviews within required time frames and sending the required peer-reviewed cases to the peer review committee for further assessment. As such, VHA may be missing opportunities for improvements both in the practice of individual providers and organizationally. VHA also may be missing opportunities to identify and intervene early with providers whose care may pose a risk to patient safety if VAMC officials have not established or implemented peer review triggers that would initiate a detailed assessment of a provider’s care. Assisting with and monitoring VAMCs’ development and use of these peer review triggers will help VHA ensure that the protected peer review process contributes to organizational improvements and favorable patient outcomes, as intended by VHA policy. FPPEs and AIBs are nonprotected processes that VAMCs use to address adverse events involving individual providers. However, gaps in VHA’s FPPE policy on documentation requirements have created a lack of clarity for VAMCs on how to appropriately document the process. Inadequate documentation of the FPPE process may result in VAMC officials being unable to take adverse action against a provider when necessary. Providing more specific policy guidance for FPPEs would better support VAMCs’ use of this process, including when officials determine that they may need to take adverse action against a provider. Although VHA officials reiterated that they do not have plans to collect and analyze aggregated AIB data as we recommended in our 2012 report, we continue to believe that this is a potentially important quality improvement tool for use by VHA. To improve VHA’s use of the protected peer review and nonprotected processes to respond to individual providers involved in adverse events or when questions arise regarding providers’ ability to deliver safe, quality patient care, we are making five recommendations. 
To address protected peer review process requirements, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to ensure that VAMCs send all required initial peer reviews (level of care 2 and 3) to the peer review committee; ensure VAMCs’ peer review committees complete final peer reviews within 120 calendar days; provide clear guidance and assistance on the purpose, development, and implementation of peer review triggers; and require VAMCs to periodically provide data on peer review triggers, including the number of providers that have exceeded the triggers as part of the protected peer review data VAMCs report to VISNs on a quarterly basis. To address the nonprotected FPPE process, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to develop more specific policy on the FPPE process, including documentation requirements such as the FPPE’s purpose, time period covered, evaluator’s assessment, and the summary of actions to be taken. VA provided written comments on a draft of this report, which we have reprinted in appendix II. In its comments, VA generally concurred with our conclusions and our five recommendations, and described the agency’s plans to implement each of our recommendations. VA also provided technical comments, which we have incorporated as appropriate. In response to our first and second recommendations that VA ensure that all initial peer reviews (levels of care 2 and 3) be sent to the peer review committee for review and that the committee’s reviews be completed within 120 days, VA stated that it will provide refresher education to key staff, such as chiefs of staff, risk managers, and VISN officials. VA anticipates that its planned actions will be completed by December 31, 2013. In response to our third recommendation that VA provide clear guidance and assistance on the purpose, development, and implementation of peer review triggers, VA stated that refresher education on this policy requirement, which also encompasses professional activity triggers, was communicated to key staff, such as chiefs of staff, risk managers, and VISN officials, through a conference call. Additionally, VA stated that staff in the VHA Risk Management Program, Office of Quality, Safety and Value will be available to provide consultative assistance to facilities that are unclear on how to implement this requirement. VA anticipates completion of these activities by December 31, 2013. In response to our fourth recommendation that VA require VAMCs to periodically provide data on peer review triggers, VA concurred. VA stated that VAMCs will be required to submit a deidentified, summary report discussing trends and analysis of aggregate data on peer review activity with their quarterly submission to the VISN. VA stated that this new requirement will be included in the fiscal year 2014 revision of VHA Directive 2010-025, Peer Review for Quality Management, which establishes that the VAMC’s medical executive committee is responsible for determining peer review or professional activity trigger levels. VA anticipates completion of these activities by September 30, 2014. VA disagreed with the latter part of our recommendation that the data submitted should include the number of providers that have exceeded the triggers. 
VA stated that reviewing aggregate data for the number of providers who exceeded trigger thresholds would produce skewed data, which VA officials believe would not reflect the quality of care provided by those providers submitted for triggered reviews. VA stated that larger facilities may appear to have artificially higher numbers of providers referred for detailed assessments than smaller facilities. We agree with VA that VAMC clinical leadership, with VISN oversight, input, and support, is the preferred means to handle trigger thresholds and data analysis; however, VA did not specify what data on protected peer review triggers VAMCs would be required to report. We maintain that it is important for VAMCs and the VISNs to review whether the peer review triggers are implemented as intended. Part of this review should include monitoring how many providers have exceeded the trigger thresholds. Additionally, we believe that collecting and reporting such data will help the VISNs and VHA ensure that the protected peer review process contributes to organizational improvements and favorable patient outcomes. In response to our fifth recommendation that VA develop more specific policy on the FPPE process, including documentation requirements such as the FPPE's purpose, time period covered, evaluator's assessment, and the summary of actions to be taken, VA stated that it will develop guidance on the FPPE process that will begin with a description of the process for a detailed assessment and define the FPPE for cause process if an opportunity to improve is indicated. VA further noted that the guidance will end with an overview of the adverse action process to be initiated when the provider does not demonstrate adequate improvement and a reduction or revocation of clinical privileges appears to be indicated. VA anticipates completion of these activities by September 30, 2014. While we understand VA's intention of expediting dissemination of information about the FPPE process through guidance, we believe that it is important for the guidance to be included in the next formal iteration of VHA policy. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, the Under Secretary for Health, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This FPPE form is a full recreation of a template used by one of the VAMCs we visited. In addition to the contact named above, Marcia Mann, Assistant Director; Ashley Dixon; Mariel Lifshitz; Katie McConnell; Elizabeth Morrison; Lisa Motley; Ann Tynan; and Michael Zose made key contributions to this report.

Adverse events--clinical incidents that may pose the risk of injury to a patient as the result of a medical intervention, rather than the patient's underlying health condition--can occur in all health care delivery settings. VAMCs can use one or more of the protected (confidential and nonpunitive) and nonprotected processes to evaluate the role of individual providers in adverse events. GAO was asked to review the extent to which processes used to respond to adverse events are carried out across VAMCs.
In this report, GAO examined (1) VAMCs' adherence to VHA's protected peer review process and the extent to which VHA monitors this process, and (2) VAMCs' adherence to VHA's nonprotected processes and the extent to which VHA monitors these processes. To conduct this work, GAO visited four VAMCs selected for variation in size, complexity of surgeries typically performed, and location. GAO reviewed VHA policies and federal internal control standards and analyzed data from the four selected VAMCs. GAO also interviewed VHA and VA OIG officials, as well as officials from VISNs of the four selected VAMCs. The Department of Veterans Affairs (VA) medical centers GAO visited did not adhere to certain policy elements of the protected peer review process, and monitoring by VA's Veterans Health Administration (VHA) is limited. According to policy issued by VHA, protected peer review may be used by VA medical centers (VAMC) when there is a need to determine whether a provider's actions associated with an adverse event were clinically appropriate--that is, whether another provider with similar expertise would have taken similar action. Despite VAMC officials' general understanding of the protected peer review process, none of the VAMCs GAO visited adhered to all four protected peer review policy elements selected for review, including the timely completion of reviews and the timely development of peer review triggers that signal the need for further review of a provider's care. Failure of VAMCs to adhere to the protected peer review policy elements may result in missed opportunities to identify providers who pose a risk to patient safety. Veterans Integrated Service Networks (VISN), responsible for oversight of VAMCs, monitor VAMCs' protected peer review processes through quarterly data submissions and annual site visits. A VHA official said that VHA monitors the process by reviewing and analyzing the aggregated quarterly data submitted by VAMCs through the VISNs. The VA Office of the Inspector General (OIG) also conducts oversight of the protected peer review process as part of a larger review of VAMCs' operations. While the VISNs and VA OIG have reviewed VAMCs' establishment of peer review triggers to prompt further review of a provider's care, neither they nor VHA has monitored their implementation. As such, VHA cannot provide reasonable assurance that VAMCs are using the peer review triggers as intended, as a risk assessment tool. This weakens VAMCs' ability to ensure they are identifying providers that are unable to deliver safe, quality patient care. VAMCs' adherence to the nonprotected focused professional practice evaluation (FPPE) process is unclear due to gaps in VHA's policy on documentation requirements, and VHA does not routinely monitor nonprotected processes. An FPPE for cause is a time-limited evaluation during which the VAMC assesses the provider's professional competence when a question arises regarding the provider's ability to provide safe, quality patient care. Information collected through the FPPE can be used to inform adverse actions, such as limiting the provider's scope of care. Although VAMC officials were generally aware of the FPPE process, there are gaps in VHA's policy regarding how these evaluations should be documented and what information should be included, which limited GAO's ability to assess VAMCs' adherence to the process.
For example, one VAMC provided GAO with documentation labeled as an FPPE and identified by the service chief as an FPPE; however, the quality manager said a formal FPPE was not conducted and that the documentation was actually part of a protected peer review. These differing views illustrate that, even within the same facility, gaps in VHA's policy on documenting FPPEs create a lack of clarity and opportunities for misinterpretation and inappropriate use. Moreover, the gaps in VHA's policy may hinder VAMCs' ability to appropriately document the evaluation of a provider's skills, support any actions initiated, and track provider-specific incidents over time. There is no routine monitoring of FPPEs for cause by VHA, VISNs, or VA OIG. GAO recommends that VA take action to ensure VAMCs adhere to certain elements of the peer review policy, require VAMCs to report data on implementation of peer review triggers, and develop more specific policy to help guide the FPPE process, including documentation requirements. In its written comments, VA generally concurred with GAO's conclusions and recommendations. |
Our past work has identified several major management challenges at EPA, including ensuring consistent environmental enforcement and compliance, addressing human capital issues, and improving the development and use of environmental information. Ensuring consistent environmental enforcement and compliance. EPA has authorized states to carry out many of the day-to-day responsibilities for timely, appropriate enforcement of environmental laws; however, we found that EPA does not effectively oversee how well the states are carrying out these responsibilities. Specifically, we found that EPA has not (1) identified the causes of poorly performing state enforcement programs, (2) informed the public about how well the states are implementing their enforcement responsibilities, and (3) assessed the performance of EPA’s regional offices in carrying out their state oversight responsibilities—performance that has been inconsistent over the years. EPA has also been slow to address long-standing problems in its enforcement data, which, among other things, hampers its ability to accurately determine the universe and characteristics of entities needing regulation to ensure that (1) the public is afforded equal protection under environmental laws and (2) regulated parties, taxpayers, and ratepayers are not subjected to widely varying costs of environmental compliance among regions. Further, we have reported that how EPA calculates and reports penalties, the value of injunctive relief, and the amount of resulting pollution reduction may undermine the transparency and accuracy of its reported outcomes and cause EPA to both over- and underreport its enforcement achievements. We have recommended that EPA enhance its oversight of regional and state enforcement activities to implement environmental programs consistent with the requirements of federal statutes and regulations. We also recommended that EPA develop an action plan for addressing enforcement problems identified in state programs; ensure that states have sufficient resources to implement and enforce programs, as authorized by EPA; and help the states improve their capacity for enforcement. EPA should also routinely conduct performance assessments of regional and state enforcement programs and communicate the results of the assessments to the public and the regulated industry. We also recommended that the EPA Administrator take a number of actions to disclose more information when reporting penalties and estimates of the value of injunctive relief and pollution reduction. EPA has generally agreed with our recommendations and is in the process of implementing them. In particular, the agency has developed an initiative known as the State Review Framework that it believes will (1) address many of the long-term problems related to providing fair, consistent, and transparent enforcement throughout the country and (2) obtain accurate data that can be used to determine the extent of state compliance with enforcement standards and the need for corrective actions. However, such efforts are still in the early stages, and their success is uncertain and will depend on continued commitment of senior management along with sufficient priority and resources. EPA also stated that it would take actions to disclose more information when reporting estimates of injunctive relief and pollution reductions and consider our recommendation to report collected penalties. Addressing human capital issues.
EPA has struggled for several years to identify its needs for human resources and to deploy its staff throughout the country in a manner that would do the most good. We found that EPA’s process for budgeting and allocating resources does not fully consider the agency’s current workload, and that in preparing requests for funding and staffing, EPA makes incremental adjustments, largely based on an antiquated workforce planning system that does not reflect a bottom-up review of the nature or distribution of the current workload. Moreover, EPA’s human capital management systems have not kept pace with changes that have occurred over the years as a result of changing legislative requirements and priorities, changes in environmental conditions in different regions of the country, and the much more active role that states now play in carrying out day-to-day activities of federal environmental programs. To remedy its antiquated and unscientific methods for determining workload and allocating staff resources, we recommended that EPA substantially improve its resource planning by identifying the factors that drive the national and regional workload and developing more realistic allocation systems for deploying staff with the requisite skills and capabilities to areas of the country where they are most needed to address the highest-priority needs. EPA has not paid sufficient attention to human capital issues over the years. During the past several years, EPA has taken a number of actions to improve its workforce management. For example, the agency has developed a strategic approach to ensure that it has, and will continue to have, the requisite competencies to carry out its programs effectively. Nonetheless, the number of regional staff at individual offices and their skills and competencies continue to be driven primarily by historical staffing patterns rather than a fresh assessment of regional needs, given the regional workload and the role that states play in the enforcement process, which varies greatly from region to region. Improving development and use of environmental information. Critical, reliable environmental information is needed to provide better scientific understanding of environmental trends and conditions and to better inform the public about environmental progress in their locales. We found substantial gaps between what is known and the goal of full, reliable, and insightful representation of environmental conditions and trends to provide direction for future research and monitoring efforts. EPA has struggled with providing a focus and the necessary resources for environmental information since its inception in 1970. While many data have been collected over the years, most water, air, and land programs lack the detailed environmental trend information needed to address the well-being of Americans. EPA program areas have also been hampered by deficiencies in their environmental data systems. For example, the quality of environmental data constrains EPA’s ability to assess the effectiveness of its enforcement policies and programs throughout the country and to inform the public about the health and environmental hazards of dangerous chemicals. We recommended that EPA better emphasize the development and use of environmental indicators and information, not only in its strategic plan but also as a mechanism for prioritizing its allocation of limited resources and measuring the success of environmental policies and programs.
GAO and policymakers in the executive and legislative branches have proposed the establishment of a Bureau of Environmental Statistics to provide the focus and resources needed to address the nation’s current and long-term environmental conditions and trends. Such a bureau would ensure top-level commitment, interagency coordination, and clear responsibility for ensuring the comprehensiveness and credibility of environmental information. In addition, we recommended that EPA develop a consistent approach to ensure the transparency and accuracy of measures to determine its program effectiveness. Finally, we also recommended that EPA ensure that information on environmental health risks and on companies that manufacture and use toxic chemicals is effectively collected and communicated to the public. EPA has generally agreed with our recommendations, and has made some progress in trying to obtain and use improved environmental information over the past several years. However, the agency’s efforts have been sporadic and spread among the various EPA offices. As such, the environmental information initiatives at EPA have been incomplete and lack a high-priority, coordinated, strategic approach that is necessary to link limited resources with the most critical data needs. EPA’s ability to effectively implement its mission of protecting public health and the environment depends on credible and timely assessment of the risks posed by toxic chemicals. Such assessments are the cornerstone of scientifically sound environmental decisions, policies, and regulations under a variety of statutes, such as TSCA. However, EPA has failed to develop sufficient chemical assessment information to determine whether it should establish controls to limit public exposure to many chemicals that may pose substantial health risks. As discussed below, in a number of reports, we have identified actions that are needed to (1) enhance EPA’s ability under TSCA, among other things, to obtain health and safety information from the chemical industry and (2) streamline and increase the transparency of EPA’s Integrated Risk Information System (IRIS) that provides EPA’s scientific position on the potential human health effects of more than 540 chemicals. TSCA generally places the burden of obtaining data on chemicals used in commerce on EPA, rather than on the companies that produce the chemicals. For example, TSCA requires EPA to demonstrate certain health or environmental risks before it can require companies to further test their chemicals. As a result, EPA does not routinely assess the risks of the roughly 80,000 industrial chemicals in use. Moreover, TSCA does not require chemical companies to test the approximately 700 new chemicals introduced into commerce annually for their toxicity, and companies generally do not voluntarily perform such testing. Further, the procedures EPA must follow in obtaining test data from companies can take years to complete. In contrast, the European Union’s chemical control legislation generally places the burden on companies to provide health effects data on the chemicals they produce. In previous reports on TSCA, we have suggested that Congress consider statutory changes to strengthen EPA’s authority to obtain information from the chemical industry. We continue to believe that giving EPA more authority to obtain data from the companies producing chemicals would improve the effectiveness of TSCA and thereby enhance EPA’s ability to protect public health and the environment.
In addition, while TSCA authorizes EPA to issue regulations that may, among other things, ban existing toxic chemicals or place limits on their production or use, the statutory requirements EPA must meet present a legal threshold that has proven difficult for EPA and discourages the agency from using these authorities. For example, EPA must demonstrate “unreasonable risk,” which EPA believes requires it to conduct extensive cost-benefit analyses, to ban or limit chemical production. Since 1976, EPA has issued regulations to control only five existing chemicals determined to present an unreasonable risk. Further, its 1989 regulation phasing out most uses of asbestos was vacated by a federal appeals court in 1991 because it was not based on “substantial evidence.” In contrast, the European Union and a number of other countries have largely banned asbestos, a known human carcinogen that can cause lung cancer and other diseases. We have previously suggested that Congress consider amending TSCA to reduce the evidentiary burden EPA must meet to control toxic substances and continue to believe such change warrants serious consideration. Also, under TSCA, EPA has a limited ability to provide the public with information on chemical production and risk because of the act’s prohibitions on the disclosure of confidential business information. About 95 percent of the notices companies have provided to EPA on new chemicals contain some information claimed as confidential. While EPA believes that some claims of confidential business information may be unwarranted, challenging the claims is time- and resource-intensive, and EPA does not challenge most claims. Importantly, state environmental agencies and others have said that information claimed as confidential would help them in such activities as developing contingency plans to alert emergency response personnel to the presence of highly toxic substances at manufacturing facilities. The European Union’s chemical control legislation generally provides greater public access to the chemical information it receives. We previously suggested that Congress (1) consider authorizing EPA to share with the states and foreign governments the confidential business information that chemical companies provide to EPA, subject to regulations to be established by EPA that would set forth the procedures to be followed by all recipients of the information in order to protect the information from unauthorized disclosures, and (2) consider limiting the length of time for which information may be claimed as confidential without resubstantiation of the need for confidentiality. We have also identified significant problems with EPA’s process for developing chemical assessments under EPA’s IRIS program. Created in 1985 to provide EPA with consensus opinions within the agency on the health effects of chronic exposure to chemicals, the IRIS database provides the basic information EPA needs to determine whether it should establish controls, for example, to protect the public from exposure to toxic chemicals in the air and water and at hazardous waste sites. In 2008, we reported that the IRIS database, which contains assessments of more than 540 toxic chemicals, is at serious risk of becoming obsolete because EPA has not been able to keep its existing assessments current or to complete assessments of the most important chemicals of concern. 
Factors contributing to EPA’s inability to complete assessments in a timely manner—including reviews required by the Office of Management and Budget (OMB) of IRIS assessments; certain management decisions, such as delaying some assessments to await new research; and the compounding effect of delays—can force EPA to essentially restart assessments to incorporate changing science and methods. In fact, a number of key chemicals have been caught in a seemingly endless review cycle, limiting EPA’s ability to protect the public health from ubiquitous chemicals that are likely to cause cancer or other serious health effects. For example, EPA’s formaldehyde and dioxin assessments have been in progress for about 12 and 18 years, respectively. Overall, EPA has finalized a total of only 9 assessments in the past 3 fiscal years; as of December 2007, most of the 70 ongoing assessments had been in progress for more than 5 years; and more than half of all current assessments may be outdated. Moreover, the OMB-required reviews, which are not publicly available, limit the credibility of the assessments because they involve federal agencies that may be affected by the assessments should they lead to regulatory actions. We recommended that EPA adopt a streamlined, more transparent assessment process. In its response, EPA estimated that under its proposed changes to the assessment process, most assessments would take from 3 to 4-1/2 years and mission-critical assessments would take up to 6 years. However, we believe that an IRIS assessment process built around such time frames is problematic. As we stated in our reports, when assessments take longer than 2 years, they can become subject to substantial delays stemming from the need to redo key analyses to take into account changing science and assessment methodologies. Some of our prior recommendations on IRIS and TSCA, aimed at providing EPA with information needed to support its assessment of toxic chemicals, have not been implemented. For example, when EPA implemented a new IRIS assessment process in 2008, it did not incorporate our recommendations to streamline and increase the transparency of the process. In fact, the new IRIS assessment process exacerbates the productivity and credibility concerns GAO identified. Further, our recommendations aimed at providing EPA with the information needed to support its assessments of industrial chemicals under TSCA have not been implemented. Without greater attention to EPA’s efforts to assess toxic chemicals, the nation lacks assurance that human health and the environment are adequately protected. Because of the importance of this issue, and the lack of progress in implementing much-needed change to TSCA, in January 2009 we added transforming EPA’s processes for assessing and controlling toxic chemicals to our list of high-risk areas needing added attention by Congress and the executive branch. The Clean Air Act, a comprehensive federal law that regulates air pollution from stationary and mobile sources, was passed in 1963 to improve and protect the quality of the nation’s air. The act was substantially overhauled in 1970 when Congress required EPA to establish national ambient air quality standards for pollutants at levels that are necessary to protect public health with an adequate margin of safety and to protect public welfare from adverse effects. EPA has set such standards for ozone, carbon monoxide, particulate matter, sulfur oxides, nitrogen dioxide, and lead. 
In addition, the act directed the states to specify how they would achieve and maintain compliance with the national standard for each pollutant. Congress amended the act again in 1977 and 1990. The 1977 amendments were passed primarily to set new goals and dates for attaining the standards because many areas of the country had failed to meet the deadlines set previously. The act was amended again in 1990 when several new themes were incorporated into it, including encouraging the use of market-based approaches to reduce emissions, such as cap-and-trade programs. In recent years, our work has identified several key challenges in implementing the Clean Air Act and made recommendations to EPA intended to enhance the effectiveness of its clean air programs. First, we have identified areas where EPA could improve its coordination with the Department of Transportation in making planning decisions. Second, we have found that while EPA had taken steps to strengthen its estimates of health benefits from rules reducing particulate matter air pollution, the agency needed to continue devoting resources to improving its analysis of the uncertainty underlying its estimates. Third, we have identified delays and shortcomings with EPA’s development of rules intended to limit emissions of toxic air pollutants and recommended that the agency develop a plan to improve its management of the air toxics program. In fact, when addressing EPA’s air quality standards in a recent hearing on children’s health, we noted that EPA largely disregarded recommendations from its advisory committee, and recommended that the agency examine ways to use its advisors to reinvigorate its focus on the health of children, who are often disproportionately affected by air pollution. Fourth, we identified major shortcomings with EPA’s economic justification for a proposed rule to limit mercury emissions from power plants and recommended, among other things, that the agency conduct its analysis consistent with OMB guidance for economic analysis and better document its findings. EPA stated that it would address the recommendations by, for example, conducting additional analysis on the rule. EPA also faces a number of challenges related to clean air regulatory decisions that have been vacated or remanded to the agency by the courts. These include regulatory proposals or agency decisions related to (1) mercury emissions from coal-fired power plants; (2) long-range transport of sulfur dioxide and nitrogen oxides—pollutants that contribute to acid rain and other air quality problems—emitted by power plants; (3) the New Source Review program, a permitting program that among other goals seeks to prevent air quality degradation from the addition of new and modified factories, industrial boilers, and power plants; and (4) whether EPA and the states can use existing authority under the Clean Air Act to regulate greenhouse gases. Each of these issues, along with those identified in our prior work, will require substantial management attention in the near term. The Clean Water Act establishes the basic structure for regulating discharges of pollutants into the waters of the United States and regulating the quality of surface waters. However, the law’s effectiveness has been challenged by the fact that many pollution sources are decentralized and diffuse in nature, and therefore difficult to monitor and regulate. One such source is urban storm water runoff.
Pollutants and sediment carried by storm water, as well as the volume and temperature of runoff, can alter aquatic habitats and make it hard for fish and other organisms to survive. Some pollutants can also make fish and shellfish unsafe to eat. Moreover, polluted storm water runoff can negatively affect those who use fresh- and saltwater areas for swimming and boating. For example, swimmers in water with high levels of bacteria have a greater risk of contracting gastrointestinal or respiratory illnesses. However, EPA still has not developed rapid water-testing methods or up-to-date water quality standards.

The safety of our nation's water is also threatened by other factors, such as pollutants discharged from large-scale animal-feeding operations that enter water bodies. More than a dozen government-sponsored or peer-reviewed studies since 2002 on water pollutants emitted by concentrated animal-feeding operations found increased levels of phosphorus, nitrogen, or hormones in surface water and groundwater near animal-feeding operations. According to EPA, excessive amounts of these nutrients can deplete oxygen in water, which could result in fish deaths, reduced aquatic diversity, and illness in infants. Despite its long-term regulation of concentrated animal-feeding operations, EPA still lacks comprehensive and reliable data on the number, location, and size of the operations that have been issued permits and the amounts of discharges they release. As a result, EPA has neither the information it needs to assess the extent to which concentrated animal-feeding operations may be contributing to water pollution, nor the information it needs to ensure compliance with the Clean Water Act.

EPA partners with federal, state, and local agencies, as well as nongovernmental organizations, to develop and implement approaches that can reduce pollution in our nation's significant water bodies. However, after decades of EPA and its partners spearheading restoration efforts in areas such as the Great Lakes and the Chesapeake Bay, improvements in these water bodies remain elusive. Lack of targeted strategies; coordination among federal, state, and local stakeholders; and realistic goals to ensure that limited restoration resources are being used for the most effective restoration activities appear to be long-standing issues impeding such efforts.

In recent years, we have made many recommendations to help EPA address these problems. For example, to more effectively regulate the discharges from large-scale animal-feeding operations, we recommended that EPA complete its efforts to develop an inventory of permitted operations. In addition, we recommended that EPA evaluate the implementation of the storm water program, issue additional program guidance, and consider regulatory changes to improve the quality and consistency of activity reporting by communities. To better protect the safety of our nation's beaches, EPA needs to publish new or revised water quality criteria for pathogens and pathogen indicators and develop specific guidance on monitoring frequency and methods of public notification. In addition, we recommended that EPA ensure that the Chesapeake Bay Program––a partnership between EPA, several states, and the Chesapeake Bay Commission––develops a coordinated implementation strategy that unifies its various planning documents and establishes a means to better target its limited resources to the most cost-effective restoration activities.
We also recommended that, for its Great Lakes Initiative, EPA develop a more consistent permitting strategy for controlling mercury and gather more information to help it develop water quality standards and assess the effect of programs intended to minimize pollutants that are exceeding standards. EPA agreed with our recommendations in these areas. For example, although EPA expects full implementation to take several years, the agency and the states are currently working to develop, through EPA's Integrated Compliance Information System, a national data system to collect and record facility-specific information on concentrated animal-feeding operations and other facilities, and EPA has initiated an effort to develop a rule to establish required data elements and reporting frequencies. Likewise, for the storm water program, EPA has taken steps to improve the quality and consistency of program data reported by communities and is currently developing guidance, including a reporting form that it believes will help the agency obtain better data for evaluating the program. Finally, EPA agreed with our recommendations regarding the Chesapeake Bay Program, and it plans to work with the Great Lakes states in assessing approaches for reducing mercury in lieu of developing a mercury permitting strategy.

In addition, among the most daunting water pollution control problems in coming years will be those faced by the nation's water utilities as they grapple with the multibillion-dollar costs of upgrading aging and deteriorating infrastructure and building new facilities to serve a growing population. Frequent and highly publicized incidents of combined sewer overflows into rivers and streams, as well as water main breaks in the nation's largest cities, are the most visible manifestation of this mounting problem. Overall, water infrastructure needs across the country have been estimated to cost from $485 billion to nearly $1.2 trillion over the next 20 years. Even before the current financial crisis, many water utilities had difficulty raising funds to repair, replace, or upgrade aging capital assets; comply with regulatory requirements; and expand capacity to meet increased demand. For example, based on a nationwide survey of several thousand drinking water and wastewater utilities, we reported several years ago that about one-third of the utilities (1) deferred maintenance because of insufficient funds, (2) had 20 percent or more of their pipelines nearing the end of their useful life, and (3) lacked basic plans for managing their capital assets.

We noted in the past that better management techniques can, at least to some extent, help utilities make the best use of available dollars in their struggle to meet their infrastructure needs. For example, we recommended comprehensive asset management—a technique whereby water systems systematically identify their needs, set priorities, and better target their investments—as a tool for helping utilities make better use of available funds. Additional funds, however, will ultimately be needed to narrow the enormous gap between water infrastructure needs and available resources. Of note, EPA will receive $6 billion in additional water infrastructure funding from the recently passed stimulus bill.
EPA agreed with our recommendations, and while it has undertaken a number of approaches to encourage asset management, such as implementing a sustainable infrastructure initiative and offering training sessions on best practices, further work is needed to encourage comprehensive asset management across the nation.

In 1980, Congress passed the Comprehensive Environmental Response, Compensation, and Liability Act, establishing the Superfund program and giving the federal government the authority to respond to chemical emergencies and to clean up hazardous waste sites on private and public lands. The Superfund program addresses both short- and long-term risks from toxic chemicals. The act established a trust fund financed primarily by taxes on crude oil and certain chemicals to pay for EPA's cleanup activities. The authority for these taxes expired in 1995; EPA must now primarily rely on annual appropriations from the general fund to fund cleanups. These appropriations, when adjusted for inflation, have been declining, and the pace of cleanups has slowed. Furthermore, citing competing priorities and lack of funds, EPA has not implemented a 1980 statutory mandate under Superfund to require businesses handling hazardous substances to demonstrate their ability to pay for potential environmental cleanups—that is, to provide financial assurances. Because of this inaction, EPA has exposed the Superfund program and U.S. taxpayers to potentially enormous cleanup costs at gold, lead, and other mining sites and other industrial operations. In addition, we found that EPA faces challenges in ensuring that institutional controls—legal or administrative restrictions on land or resource use to protect against exposure to residual contamination at hazardous waste sites—are adequately implemented, monitored, and enforced.

In 1984, Congress required EPA to devise regulations for the design and operation of underground storage tanks. In response, in 1985, EPA began developing the Underground Storage Tank program to prevent releases of petroleum and hazardous substances into the environment, detect releases when they occur, and clean up any contamination resulting from a release. To support the program and provide public funding to states to ensure that releases from tanks are cleaned up, in 1986 Congress established the Leaking Underground Storage Tank Trust Fund, funded primarily through an excise tax on gasoline and other motor fuels. The fund has since grown to an estimated $3.2 billion as of the end of fiscal year 2008, yet the pace of cleanup remains slow. Under the program, tank owners and operators are primarily responsible for paying to clean up releases from their tanks. They can demonstrate their financial responsibility by using, among other options, state financial assurance funds. However, we found that tank owners sometimes fail to maintain adequate financial responsibility coverage and that several states' assurance funds may lack sufficient resources to ensure timely cleanups.

Finally, in 2005 we found that federal and state agencies had identified perchlorate—a component of rocket fuel known to affect human health—in groundwater, soil, or public drinking water systems at almost 400 sites across the country. Nevertheless, there is no federal drinking water standard or specific requirement to clean up perchlorate, and the National Academy of Sciences has called for additional research on the effects of perchlorate exposure.
We have made several recommendations to help EPA more quickly clean up hazardous waste sites. Specifically, we recommended that EPA (1) ensure that financial assurances are in place for sites that manufacture or use toxic chemicals; (2) improve the institutional controls at contaminated sites to ensure better protection of the public from inappropriate use of such sites; (3) ensure that the owners of underground storage tanks maintain access to adequate financial resources for cleaning up leaks and that state insurance funds provide reliable coverage for cleaning up leaking tanks; and (4) establish a formal structure to centrally track and monitor perchlorate detections and the status of cleanup efforts. EPA has generally agreed with our recommendations in these areas but has not yet implemented any of them. The exception was our recommendation to establish a perchlorate tracking structure, with which EPA disagreed because the agency believes that it already has sufficient capability to track and monitor perchlorate detection and cleanup efforts. Nevertheless, we continue to believe that such a system would better inform the public and others about perchlorate's presence in their communities.

In addition to the challenges with which EPA has struggled for years, new challenges are emerging, chief among them climate change. Changes in the earth's climate attributable to increased concentrations of greenhouse gases may have significant environmental and economic impacts in the United States and internationally. Among other potential impacts, climate change could threaten coastal areas with rising sea levels, alter agricultural productivity, and increase the intensity and frequency of floods and tropical storms. Furthermore, climate change has implications for the fiscal health of the federal government, affecting federal crop and flood insurance programs and placing new stresses on infrastructure and natural resources. Accordingly, there are numerous legislative proposals for reducing greenhouse gas emissions and reducing the nation's use of and dependence on fossil fuels. EPA will be at the center of the federal government's strategy for addressing this monumental challenge.

We have previously reported that the federal government's approach to climate change has been ad hoc, not comprehensive, and not well coordinated across government agencies. Specifically, the federal government lacks a comprehensive approach for targeting federal research dollars at the development and deployment of low-carbon technologies. Federal land management agencies are behind in their efforts to develop strategies and guidance for adapting to climate change, and the federal crop insurance and flood insurance programs have not yet embraced the implications of climate change for their portfolios. Moreover, the technical challenges of carbon capture and storage; biofuels development, production, and distribution; and alternative sources of energy have not been fully researched. Finally, energy conservation efforts have remained stagnant over the past decade.

To inform Congress as it considers various legislative proposals for addressing climate change, we reported on the economic implications of different policy options, lessons learned from the European Union's efforts to implement mandatory carbon reductions, and the Clean Development Mechanism under the Kyoto Protocol.
We also reported on the challenges in carbon capture and storage—another key component of most climate change legislative proposals—and identified problems that must be resolved. We have also issued information on the carbon offset market and identified challenges that must be resolved before offsets can be a part of climate change legislation.

We have made several recommendations to help various federal agencies better address climate change, including recommending that EPA and the Department of Energy put more rigor into their voluntary emission reduction programs and track and report results. We also recommended that federal agencies develop clear written communications to resource managers that explain how managers are expected to address the effects of climate change. In addition, we recommended that federal agencies better coordinate and more comprehensively identify and address research gaps in alternative fuels, clean coal, and other emission reduction technologies. Finally, we recommended that federal agencies step up energy conservation efforts. Agencies responsible for voluntary climate change programs, including EPA, as well as agencies responsible for climate change research, generally agreed with our recommendations but have been slow to implement them.

While EPA has made some progress in improving its operations, many of the same issues still remain. EPA's mission is, without question, a difficult one: its policies and programs affect virtually all segments of the economy, society, and government, and it is in the unenviable position of enforcing myriad inherently controversial environmental laws and maintaining a delicate balance between the benefits to public health and the environment and the costs to industry and others. Nevertheless, the repetitive and persistent nature of the shortcomings we have observed over the years points to serious challenges for EPA to effectively implement its programs. Until it addresses these long-standing challenges, EPA is unlikely to be able to respond effectively to much larger emerging challenges, such as climate change. Facing these challenges head-on will require a sustained commitment by agency leadership. As a new administration takes office and begins to chart the agency's course, it will be important for Congress and EPA to continue to focus on the issues we have identified.

We are sending copies of this report to interested congressional committees and the Administrator of EPA. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix I.

In addition to the contact named above, other key contributors to this report include Kevin Bray, Antoinette Capaccio, Kate Cardamone, Steve Elstein, Liz Erdmann, Christine Fishkin, Brian Friedman, John Gates, Melissa Hermes, Michael Hix, Anne Johnson, Rich Johnson, Karen Keegan, Ed Kratzer, Justin Mausel, Sherry McDonald, Mehrzad Nadji, Emily Norman, Alison O'Neill, Vincent Price, Diane Raynes, John Smith, Joe Thompson, and Lisa Vojta.
The Environmental Protection Agency's (EPA) overarching mission is to protect human health and the environment by implementing and enforcing environmental laws intended to improve the quality of the nation's air and water and to protect its land. EPA's policies and programs affect virtually all segments of the economy, society, and government. As such, it operates in a highly complex and controversial regulatory arena. In recent years, GAO has identified several key challenges EPA faces and corrective actions that would enable the agency to more effectively accomplish its mission. GAO was asked to identify challenges at EPA that hinder its ability to implement its programs effectively, based on prior GAO work. These challenges include (1) improving agencywide management, (2) transforming EPA's processes for assessing and controlling toxic chemicals, (3) improving implementation of the Clean Air Act, (4) reducing pollution in the nation's waters, (5) speeding the pace of cleanup at Superfund and other hazardous waste sites, and (6) addressing emerging climate change issues.
EPA faces the following challenges that hinder its ability to implement its programs effectively: (1) improving agencywide management, (2) transforming EPA's processes for assessing and controlling toxic chemicals, (3) improving implementation of the Clean Air Act, (4) reducing pollution in the nation's waters, (5) speeding the pace of cleanup at Superfund and other hazardous waste sites, and (6) addressing emerging climate change issues.

EPA has launched various initiatives to address crosscutting general management issues, including environmental enforcement and compliance, human capital management, and the development and use of environmental information. However, these initiatives have generally fallen considerably short of their intended results.

EPA has failed to develop sufficient chemical assessment information to limit public exposure to many chemicals that may pose substantial health risks. In January 2009, GAO added a new issue, the need to transform EPA's process for assessing and controlling toxic chemicals, to its list of high-risk areas warranting increased attention by Congress and the executive branch.

EPA faces many important challenges related to implementation of the Clean Air Act, including those highlighted by GAO regarding its coordination with other federal agencies, analyses of health impacts from air pollution, and delays in regulating mercury and other air toxics. EPA also faces challenges relating to numerous regulatory proposals that have been overturned or remanded by the courts.

EPA partners with federal, state, and local agencies and others to reduce pollution in the nation's waters. Among the most daunting water pollution control problems, the nation's water utilities face billions of dollars in upgrades to aging and deteriorating infrastructures that, left unaddressed, can affect the quality of our water. EPA will receive $6 billion in additional water infrastructure funding from the recently passed stimulus bill.

Congress passed the Comprehensive Environmental Response, Compensation, and Liability Act, better known as Superfund, in 1980, giving the federal government the authority to ensure the cleanup of hazardous waste sites both on private and public lands. Nonetheless, several key management problems have not been resolved since that time. For example, citing competing priorities and lack of funds, EPA has not implemented a 1980 statutory mandate under Superfund to require businesses handling hazardous substances to provide financial assurances to pay for potential environmental cleanups.

In GAO's view, the federal government's approach to climate change has been ad hoc and is not well coordinated across government agencies. For example, the federal government lacks a comprehensive approach for targeting federal research dollars toward the development and deployment of low-carbon technologies.
The TANF block grant was created by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) and was designed to give states the flexibility to provide both traditional welfare cash assistance benefits as well as a variety of other benefits and services to meet the needs of low-income families and children. States have responsibility for designing, implementing, and administering their welfare programs in compliance with federal guidelines, as defined by federal law and by the Department of Health and Human Services (HHS), which oversees state TANF programs at the federal level. Importantly, with the fixed federal funding stream, states assume greater fiscal risks in the event of a recession or increased program costs. However, in acknowledgment of these risks, PRWORA also created a TANF Contingency Fund that states could access in times of economic distress. Similarly, during the recent economic recession, Congress created a $5 billion Emergency Contingency Fund for state TANF programs through the American Recovery and Reinvestment Act of 2009, available in fiscal years 2009 and 2010.

The story of TANF's early years is well known. With a strong economy, increased federal support for work supports such as child care, and the new TANF program's emphasis on work, welfare rolls were cut by more than half. Many former welfare recipients increased their income through employment, and employment rates among single parents increased. At the same time that some families worked more and had higher incomes, others had income that left them still eligible for TANF cash assistance. However, many of these eligible families were not participating in the program. According to our estimates in a previous report, the vast majority—87 percent—of the caseload decline can be explained by the decline in eligible families participating in the program, in part because of changes to state welfare programs. These changes include mandatory work requirements, changes to application procedures, lower benefits, and policies such as lifetime limits on assistance, diversion policies, and sanctions for noncompliance, according to a review of the research. Among eligible families who did not participate, 11 percent did not work, did not receive means-tested disability benefits, and had very low incomes. While we have not updated this analysis, some research shows that this potentially vulnerable group may be growing.

Despite the decrease in the cash assistance caseload overall, the number of cases in which aid was provided only for the children in the household increased slightly, amounting to about half the cash assistance caseload. For these households, the adult is not included in the benefit calculation, generally because: (1) the parent is receiving cash support through the Supplemental Security Income program; (2) the parent is an immigrant who is ineligible; (3) the child is living with a nonparent caregiver; or (4) the parent has been sanctioned and removed from cash assistance for failing to comply with program requirements. Nationally, about one-third of these "child only" households are children living with nonparent caregivers.

We also know that during and after this recent significant recession, while caseloads increased in most states, the overall national increase totaled about 13 percent from fiscal years 2008 to 2011. This has been the first test of TANF—with its capped block grant structure—during severe economic times.
This relatively modest increase—and decreases in some states—has raised questions about the responsiveness of TANF to changing economic conditions. We recently completed work on what was happening to people who had exhausted their unemployment insurance (UI) benefits after losing a job in the recession. While almost 40 percent of near-poor households with children that had exhausted UI received aid through the Supplemental Nutrition Assistance Program (formerly known as food stamps), we estimated that less than 10 percent received TANF cash assistance.

A key TANF goal is helping parents prepare for and find jobs. The primary means to measure state efforts in this area has been TANF's work participation requirements. Generally, states are held accountable for ensuring that at least 50 percent of all families receiving TANF cash assistance and considered work-eligible participate in one or more of the federally defined allowable activities for the required number of hours each week. However, over the years, states have typically not engaged that proportion of recipients in work activities on an annual basis; instead, states have engaged about one-third of families in allowable work activities nationwide. Most states have relied on a combination of factors, including various policy and funding options in federal law and regulations, to meet the work participation requirements without reaching the specified 50 percent. Factors that influenced states' work participation rates included not only the number of families receiving TANF cash assistance who participated in work activities, but also: decreases in the number of families receiving TANF cash assistance (not due to program eligibility changes), which provide a state credit toward meeting its rate; state spending on TANF-related services beyond what is required, which also provides a credit toward meeting the rate; state policies that allow working families to continue receiving TANF cash assistance, which help a state increase its rate; and state policies that provide nonworking families cash assistance outside of the TANF program. For example, some states serve families with work barriers outside of their state TANF programs because of concerns that these families will not be able to meet work requirements. Many states have cited challenges in meeting TANF work participation rates, such as requirements to verify participants' actual activity hours and certain limitations on the types and timing of activities that count toward meeting the requirements. Because of the various factors that affect the calculation of states' work participation rates, the rate's usefulness as an indicator of a state's effort to help participants achieve self-sufficiency is limited. Further, the TANF work participation rates, as enacted, in combination with the flexibility provided, may not serve as an incentive for states to engage more families or to work with families with complex needs.

While the focus is often on TANF's role in cash assistance, it plays a significant role in states' budgets for other programs and services for low-income families, as allowed under TANF. The substantial decline in traditional cash assistance caseloads combined with state spending flexibilities under the TANF block grant allowed states to broaden their use of TANF funds. As a result, TANF and state TANF-related dollars played an increasing role in state budgets outside of traditional cash assistance payments.
In our 2006 report that reviewed state budgets in nine states, we found that in the decade after Congress created TANF, the states used their federal and state TANF-related funds to support a wide range of state priorities, such as child welfare services, mental health services, substance abuse services, prekindergarten, and refundable state earned income credits for the working poor, among others. While some of this spending, such as that for child care assistance, relates directly to helping cash assistance recipients leave and stay off the welfare rolls, other spending is directed to a broader population that did not necessarily ever receive welfare payments. This is in keeping with the broad purposes of TANF specified in the law: providing assistance to needy families so that children could be cared for in their own homes or in the homes of relatives; ending needy families' dependence on government benefits by promoting job preparation, work, and marriage; preventing and reducing the incidence of out-of-wedlock pregnancies; and encouraging the formation and maintenance of two-parent families.

This trend away from cash assistance has continued. In fact, in fiscal year 2011, federal TANF and state expenditures for purposes other than cash assistance totaled 71 percent of all expenditures. This stands in sharp contrast with 27 percent spent for purposes other than cash assistance in fiscal year 1997, when states first implemented TANF. Beyond the cash assistance rolls, the total number of families assisted is not known, as we have noted in our previous work.

TANF funds can play an important role in some states' child welfare budgets. In our previous work, Texas state officials told us that 30 percent of the child welfare agency's budget was funded with TANF dollars in state fiscal year 2010. Many states have used TANF to fund child welfare services because, although TANF funding is a capped block grant, it is a relatively flexible funding source. However, some states may not be able to continue relying on TANF to fund child welfare services because they need to use TANF funds to address other program goals, such as promoting work. For example, Tennessee officials told us that they previously used some of their TANF grant to fund enhanced payments for children's relative caregivers and their Relative Caregiver Program, but that the state recently discontinued this practice due to budget constraints.

While states have devoted significant amounts of the block grant as well as state funds to these and other activities, little is known about the use of these funds. Existing TANF oversight mechanisms focus more on the cash assistance and welfare-to-work components of the block grant. For example, when states use TANF funds for some purposes, they are not required to report on funding levels for specific services and how those services fit into a strategy or approach for meeting TANF goals. In effect, there is little information on the numbers of people served by TANF-funded programs other than cash assistance, and there is no real measure of workload or of how services supported by TANF and state TANF-related funds meet the goals of welfare reform. This information gap hinders decision makers in considering the success of TANF and what trade-offs might be involved in any changes to TANF when it is reauthorized.

The federal-state TANF partnership makes significant resources available to address poverty in the lives of families with children.
With these resources, TANF has provided a basic safety net to many families, triggered a focus on work in the nation's welfare offices while helping many parents step into jobs, and provided states flexibility to help families in ways they believe will help prevent dependence on public assistance and improve the lives of children. At the same time, there are questions about the strength and breadth of the TANF safety net. Are some eligible families falling through? The emphasis on work participation rates as a measure of program performance has helped change the culture of state welfare programs to focus on moving families into employment, but weaknesses in the measure undercut its effectiveness. Are the work participation rates providing the right incentive to states to engage parents, including those difficult to serve, and help them achieve self-sufficiency? The flexibility of the TANF block grant has allowed states to shift their spending away from cash assistance and toward other programs and services for low-income families, potentially expanding the ability of states to combat poverty in new ways. However, we do not have enough information about the use of these funds to determine whether this flexibility is resulting in the most efficient and effective strategies at this time of scarce government resources and great need among the nation's low-income families.

Chairman Baucus, Ranking Member Hatch, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. For questions about this statement, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Alexander G. Galuten, Gale C. Harris, Sara S. Kelly, Kathryn A. Larin, and Theresa Lo.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This hearing is on combating poverty and understanding new challenges for families. The testimony focuses on the role of the Temporary Assistance for Needy Families (TANF) block grant in helping low-income families with children. As you know, the federal government significantly changed federal welfare policy in 1996 when it created TANF, a $16.5 billion annual block grant provided to states to operate their own welfare programs within federal guidelines. States are also required to maintain a specified level of their own spending to receive TANF funds. Over the past 15 years, the federal government and states have spent a total of $406 billion for TANF, about 60 percent of which were federal funds. This federal-state partnership has undergone multiple program and fiscal changes, including a dramatic drop in the number of families receiving monthly cash assistance benefits, as well as two economic recessions. According to the Bureau of the Census, poverty among children fell from about 21 percent in 1995 to about 16 percent in 2000, rising again to 22 percent in 2010.
Examining TANF's past performance can help shed light on the challenges facing low-income families and the role of the federal government in combating poverty. This testimony, based primarily on reports issued by GAO from 2010 to 2012 on TANF and related issues, will focus on TANF's performance in three areas: (1) as a cash safety net for families in need, (2) as a welfare-to-work program that promotes employment, and (3) as a funding source for various services that address families' needs.

The federal-state TANF partnership makes significant resources available to address poverty in the lives of families with children. With these resources, TANF has provided a basic safety net to many families and helped many parents step into jobs. At the same time, there are questions about the strength and breadth of the TANF safety net. Many eligible families, some of whom have very low incomes, are not receiving TANF cash assistance.

Regarding TANF as a welfare-to-work program, the emphasis on work participation rates as a measure of state program performance has helped change the culture of state welfare programs to focus on moving families into employment. However, features of the work participation rates as currently implemented undercut their effectiveness as a way to encourage states to engage parents, including those difficult to serve, and help them achieve self-sufficiency.

Finally, states have used TANF funds to support a variety of programs other than cash assistance as allowed by law. Yet, we do not know enough about this spending or whether this flexibility is resulting in the most efficient and effective use of funds at this time.
With certain exceptions, Title IX of the Education Amendments of 1972 requires all entities receiving any form of federal financial assistance to prohibit sex discrimination in their education programs or activities, which are defined broadly under Title IX to include all the operations of the entity. Because most postsecondary schools have students who receive federal financial assistance, such as federally supported student aid, Title IX applies to most 2-year and 4-year schools, both public and private. In addition to postsecondary schools, many other recipients of federal educational grants, such as K-12 school districts, private laboratories, and museums, are also subject to Title IX.

Title IX's provisions apply to all operations and ancillary services of covered programs. For example, the law applies to recruitment, student admissions, scholarship awards, tuition assistance, other financial assistance, housing, access to courses and other academic offerings, counseling, employment assistance to students, health and insurance benefits and services, and athletics. It also applies to all aspects of employment, including recruitment, hiring, promotion, tenure, demotion, transfer, layoff, termination, compensation, benefits, job assignments and classifications, leave, and training.

Under Title IX, federal agencies that administer grants are required to conduct several compliance activities. For example, Title IX regulations require agencies to conduct periodic compliance reviews of their grant recipients. A compliance review is an agency-initiated assessment of grantees to determine if they are complying with the law. Agencies must also make a prompt investigation in response to timely written complaints from individuals who allege that a grantee has engaged in sex discrimination, or whenever a compliance review or any other information indicates a possible failure to comply with Title IX. If the investigating agency does not find evidence that the grantee has failed to comply with Title IX, it must inform both the grant recipient and the complainant in writing. If the investigating agency does find evidence of noncompliance with Title IX, then the agency must first attempt to resolve the matter informally. For example, the agency could attempt to mediate the issue to encourage the grantee to voluntarily modify its activities in order to comply with the law. If the matter cannot be resolved informally, then the agency must take additional steps to secure compliance, including suspending or terminating federal financial assistance.

Individuals or groups are allowed to file their complaints with grantees or with funding agencies such as Education or NSF. If complainants are not satisfied with the result of investigations, they can file their complaints with another entity. For example, if a complaint is filed at the grantee level and the complainant is unhappy with the result, he or she can file a complaint at the agency level. In addition to filing complaints, individuals or groups have the option of filing suit in federal court. (See app. III for information on selected legal cases and events involving Title IX since 1972.)

While federal agencies have primary responsibility for ensuring compliance with Title IX, recipients of federal grants also have some compliance responsibilities. For example, grantees are required to provide assurances that their education programs or activities are operated in compliance with Title IX.
Grantees are also required to designate at least one employee to coordinate their compliance efforts and to establish complaint procedures to resolve student and employee Title IX complaints. Finally, grantees must provide notification to students and employees that sex discrimination is prohibited in their programs or activities.

All federal agencies have enforcement responsibilities under Title IX. Agencies, including Education, Energy, NASA, and NSF, are responsible for handling Title IX enforcement of their own grantees and may refer complaints against educational institutions to Education's Office for Civil Rights (OCR) and employment-related sex discrimination complaints to the Equal Employment Opportunity Commission (EEOC). Education's OCR plays a key role in ensuring compliance with Title IX because it has primary responsibility to investigate most types of complaints at educational institutions, including complaints referred from other federal agencies.

Although EEOC does not have any authority under Title IX, it does have authority under Title VII of the Civil Rights Act of 1964 to investigate sex-based complaints of employment discrimination, including sex discrimination against faculty and scientists. Even though Title IX regulations specifically include employment as a protected activity, agencies generally send all employment-related discrimination complaints to EEOC for investigation. EEOC officials told us that they process these complaints as Title VII complaints. As such, EEOC will review referrals from other federal agencies made under Title IX to see if they warrant investigation under Title VII.

The Department of Justice (Justice) was given authority under Executive Order 12250 for the "consistent and effective implementation" of several civil rights laws, including Title IX. Specifically, Justice is responsible for the coordination of agencies' enforcement of Title IX, including (1) reviewing and approving agencies' regulations, (2) developing standards and procedures for conducting investigations and compliance reviews, (3) arranging for referral of cases between agencies, and (4) representing federal agencies in court proceedings. Justice consequently published a Final Common Rule in August 2000, which promulgated Title IX regulations, patterned after the Department of Education's Title IX regulations, that were adopted by 21 agencies. Figure 1 broadly outlines the various complaint processes under Title IX.

The four science agencies we reviewed for this request—Education, Energy, NASA, and NSF—award billions of dollars in grants each year for mathematics, engineering, and science programs and projects. Combined, these four agencies awarded almost $5 billion in grants for the sciences in fiscal year 2003. NASA, Energy, and NSF have been promoting scientific and technological research and programs in K-12 schools, higher education, and private industry for decades. Although Education's mission encompasses more than scientific research, it has several programs dedicated to the sciences. One such program, Graduate Assistance in Areas of National Need, provides fellowships through academic departments of institutions of higher education; these fellowships assist graduate students with excellent records who demonstrate financial need and who plan to pursue the highest degree available in a field designated as an area of national need. This program has designated biology, chemistry, computer and information science, engineering, geological science, mathematics, and physics as areas of national need.
This program was funded at over $30 million in fiscal year 2003. (See app. IV for a list of grants these agencies award for the sciences.)

The four federal science agencies have made efforts to ensure that federal grant recipients comply with Title IX in the sciences by performing several compliance activities, such as investigating complaints and providing technical assistance, but most have not monitored grantees as required by the law. Agency officials reported that Energy, NASA, and NSF refer complaints involving educational institutions to Education and those involving employment to EEOC, where they are investigated. Because grantees are not required to report on complaints filed with them, the agencies could not determine whether grantees have investigated Title IX sex discrimination complaints they have received. To encourage compliance with Title IX, federal agencies have provided grantees technical assistance and also require an assurance statement from every grantee that it will not discriminate. However, only Education has monitored its grantees by conducting periodic compliance reviews—an agency-initiated assessment of grantees to determine if they are complying with the law. The lack of grantee monitoring stemmed, in part, from agencies' failure to effectively coordinate the implementation of compliance reviews and, according to agency officials, from a shortage of resources to conduct the reviews.

Each of the four agencies we reviewed has established a process to ensure that complaints made under Title IX are reviewed and addressed. Specifically, Education officials told us that the agency has investigated or resolved all Title IX complaints it has received involving educational institutions, including those referred to it by other agencies through formal and informal agreements. Excluding athletic complaints, Education reported that it has received over 3,300 Title IX complaints against institutions of higher education since 1993. Some of these complaints were referred to Education by other agencies, including Energy, NASA, and NSF. Education officials told us that they are unable to determine which of the complaints concerned higher education programs in the sciences because their data management system does not track that information. Officials at Energy, NASA, and NSF told us that complaints meriting further investigation were referred to Education if they involved educational institutions or to EEOC if they were related to employment issues. However, officials at Energy, NASA, and NSF told us that they have received very few Title IX complaints each year. (See table 1 for information on compliance procedures required by Title IX.)

EEOC also has established procedures to review complaints made under Title IX, but the full number of complaints it has reviewed cannot be determined. Officials at EEOC told us that the commission has received some Title IX referrals, but since EEOC does not have statutory authority under Title IX, it reviews complaints to determine if Title VII of the Civil Rights Act of 1964 applies. Consequently, EEOC does not track which complaints originated as Title IX complaints, and it cannot determine how many Title IX complaints it has investigated under Title VII. Although EEOC investigates tens of thousands of complaints each year, officials told us that they cannot determine if their investigations involved scientists, or one of the four agencies, because EEOC's database does not track the employee's occupation or the department in which the complaint originated.
While grantees are required to establish procedures to resolve Title IX sex discrimination complaints, agencies could not determine whether they had done so because grantees are not required to report this information. Despite this requirement, there is some evidence that some grantees have not established these procedures. For example, Education recently reviewed the Title IX compliance status of selected grantees and found several instances in which grantees had not adopted or published complaint procedures. Recognizing this issue, Education issued a “dear colleague” letter in April 2004 to its grantees reminding them of their Title IX requirements to establish and publicize complaint procedures. Even if grantees have established procedures to address Title IX complaints, they may not be tracking the complaints they handle. Officials from every university we visited told us that they had an internal process to handle Title IX sex discrimination complaints, but a few were unable to provide us with actual numbers because they do not keep these data. Also, some officials told us that most complainants choose to file at the grantee level rather than with the federal government. Students and university and laboratory officials we spoke with offered a number of reasons why there have been so few Title IX sex discrimination complaints involving the sciences filed with Education, Energy, NASA, and NSF. Specifically, many students and staff suggested that their peers would be unlikely to file a complaint because of a lack of awareness that Title IX covers academics. For example, scientists and students at most schools we visited told us that they thought Title IX covered only sports and did not know the law also encompassed academic issues. Also, others suggested they would be unlikely to file a complaint for fear of retribution from supervisors or colleagues. For example, some women faculty members we spoke with said that although they perceive that discrimination exists in their department, filing a complaint could hinder their ability to attain tenure. In addition, filing a sex discrimination complaint would take time away from their research. Officials at federal agencies told us that they required statements of assurance from grantees and provided technical assistance to grantees upon request. Each agency required grantees to submit a statement of assurance that their education programs or activities are operated in compliance with Title IX, as well as with other civil rights laws, as part of their grant application. (See table 1.) In addition to obtaining these statements, agencies provide outreach materials or technical assistance to grantees. We found that each agency provided materials to grantees to help them better understand Title IX and its requirements. At Education, officials sometimes issue “dear colleague” letters to better inform grantees about how to fulfill Title IX requirements. We found that compliance reviews, which are required by Title IX regulations, have been largely neglected by agencies. Officials at three of the four agencies told us that they have not conducted any Title IX compliance reviews of their grantees. Education has conducted 17 compliance reviews of academic programs’ adherence to Title IX at institutions of higher education since 1993, 3 of which have dealt with the sciences. 
Education officials told us that each year they plan to conduct a number of reviews of compliance based on available funding remaining after they conduct complaint investigations and provide technical assistance to grantees. Officials reported that their goal is to use 20 percent of their budget for both outreach and reviews of compliance with federal laws, but in reality only about 15 percent of their budget goes toward these activities. When choosing which reviews would be conducted, officials reported that they identify compliance issues based on Education’s priorities and issues raised by Congress or interest groups. Specifically, Education officials told us that the three compliance reviews of science grantees—conducted in 1994 and 1995—were initiated because of congressional interest. This year, Education plans to conduct over 50 compliance reviews on issues related to special education and accommodations for the disabled. Officials told us that they are not conducting any compliance reviews involving Title IX this year. In addition to the requirement that Education conduct its own compliance reviews, Education has agreements with 17 other agencies to conduct compliance reviews of educational institutions under Title IX as well as other civil rights laws. However, Education officials stated that performing compliance reviews for other agencies was never feasible and that Education has informed those agencies that it could not conduct these reviews for them. Energy, NASA, and NSF officials reported that they have not conducted any Title IX compliance reviews of their grantees. Energy officials reported that they have provided their field office staff with guidance on conducting compliance reviews and that many field office staff attended training on compliance reviews offered by Justice. Energy officials also told us that they have conducted site visits to several field offices to determine if compliance reviews were being done, but found that no compliance reviews have been conducted, primarily due to resource constraints. While NASA has an agreement with Education for Education to conduct compliance reviews, neither Education nor NASA has conducted reviews of NASA’s grantees. Recognizing this, NASA has begun to take steps toward ensuring that compliance reviews are conducted on their grantees. NASA officials reported that they are developing a compliance review program and have requested compliance information from all of their grantees. Officials reported that they are in the process of reviewing grantee responses to systematically ascertain if grantees are in compliance, identify problem areas, and assist in targeting grantees for possible on-site compliance reviews. Officials at NSF reported that a lack of funding and staff precludes development of a compliance review program. (See table 1.) Justice officials told us that it carries out three main activities to coordinate agency compliance with Title IX. Specifically, it provides technical assistance to agencies when questions arise about compliance activities or requirements, brokers agreements between agencies and Education to carry out complaint investigations and compliance reviews of educational institutions, and requires agencies to submit an annual report on their compliance activities. Some technical assistance has taken the form of published guidance for agencies to assist them with Title IX compliance, while other assistance is provided to agency officials directly to address specific issues. 
For example, Energy officials reported that they consult with Justice from time to time on how to handle complex complaints they receive. Justice officials reported that they helped to arrange the agreements between Education and other agencies whereby Education has agreed to conduct complaint investigations and compliance reviews on behalf of the other agencies. Justice officials reported that they were not aware that Education has not been adhering to the compliance review portion of the agreements. However, Justice officials were aware that other agencies, including Energy, NASA and NSF, were not conducting compliance reviews as required, due to limited resources. Justice officials reported that every agency submitted annual reports on their compliance activities. Agencies are required to report the numbers of complaints they received under Title VI and Title IX and what action was taken on those complaints. Agencies must also report on the total number of grants the agency awarded and whether those grantees completed a statement of assurance not to discriminate. In addition, agencies have to report and characterize any agreements they may have with other agencies, such as Education. Justice officials reported that they review these reports to determine gaps in compliance and subsequently provide agencies with guidance on how to alleviate those gaps. Although Executive Order 12250 requires Justice to coordinate the implementation and enforcement by executive agencies of various nondiscrimination provisions of several civil rights laws, including Title IX, it has no legal authority to make agencies conduct required compliance activities. Justice officials reported that aside from reminding the agencies of the need to comply with Title IX regulations and providing the agencies with guidance and technical assistance, there is little they can do to ensure compliance with Title IX. In addition, Executive Order 12250 states “the Attorney General shall annually report to the President through the Director of the Office of Management and Budget on the progress in achieving the purposes of this Order. This report shall include any recommendations for changes in the implementation or enforcement of the nondiscrimination provisions of the laws covered by this Order.” However, Justice officials told us that this report has not been issued since 1998 because the reports were not an effective mechanism to encourage agency compliance with regulations. Women’s participation in the sciences has increased substantially in the last three decades, especially in the life sciences, such as biology. The proportion of women science students has grown, but to a lesser extent at the graduate level than the undergraduate level. Meanwhile, the proportion of faculty in the sciences who are women has also increased since the early 1970s. However, women still lag behind their male counterparts in terms of salary and rank, and much of their gain in numbers has been in the life sciences, as opposed to mathematics and engineering. A variety of studies indicate that experience, work patterns, and education levels can largely explain differences in salaries and rank. Studies also suggest that discrimination may still affect women’s choices and professional progress. Although women’s participation in the sciences has improved steadily over the last three decades, men still outnumber women in nearly every field in the sciences. 
In 1960, women made up less than 3 percent of all scientists, engineers, and mathematicians combined, but by 2003 women constituted nearly 20 percent of this workforce. Although the number of women increased in every field of science, the participation of women in scientific occupations varied by field, with women having the largest percentage gains in science and the smallest percentage gains in mathematics. In 1960 women constituted less than 1 percent of engineers, 8 percent of scientists, and 26 percent of mathematicians. By 2003 women made up 14 percent of engineers, 37 percent of scientists, and 33 percent of mathematicians. Data on women in faculty positions at 2- and 4-year colleges and universities in 1999 indicate that women's participation differs based on when they earned their PhD. Specifically, NSF data reveal that 11 percent of faculty at a 2- or 4-year college in 1999 who received their PhD in the early 1970s were women, as were 34 percent of those who received their PhD in the late 1990s. Figure 2 shows that women working at 2- and 4-year colleges in 1999 had the greatest participation in life sciences. Nineteen percent of life sciences faculty at a 2- or 4-year college in 1999 who received their PhD in the early 1970s were women, as were 44 percent of those who received their PhD in the late 1990s. However, data show that women still constitute a relatively small share of faculty in the sciences. For example, engineering has the lowest participation levels for women faculty. Less than 1 percent of engineering faculty at a 2- or 4-year college in 1999 who received their PhD in the early 1970s were women, as were 19 percent of those who received their PhD in the late 1990s. Women continue to major in the sciences and earn degrees in the sciences to a lesser extent than men, even though women now make up a majority of all college students. In 2000, two of five undergraduates in the sciences were women. Similarly, in 2000, while women made up over half of all graduate students, they accounted for less than a third of graduate students in the sciences. The percentage of women students differs across scientific fields, as shown in figure 3. In 1999-2000, women were a majority of both undergraduate and graduate students in life sciences, while only one-fifth of engineering students were women, at both the undergraduate and the graduate levels. Regarding degrees earned, the majority of degrees in fields other than the sciences, at all levels—bachelors, masters, and doctorates—are earned by women. However, with one exception, women continue to earn fewer degrees than men in the sciences, at all levels. Again, the exception is life sciences, in which women earned more bachelors and masters degrees—but not doctorate degrees—than men. The proportion of degrees in the sciences earned by women is highest in life sciences and lowest in engineering. (See app. VI for enrollment and degrees earned by men and women by field and level of study.) The proportion of bachelors degrees in various science areas awarded to women has grown relatively steadily since the mid-1960s, with the exception of degrees in mathematics, which fluctuated within the narrow range of 33 to 39 percent. (See fig. 4.) Similarly, the percentage of PhDs awarded to women has generally increased in these science fields, including mathematics, since 1966. Women made the greatest gains in life sciences. (See fig. 5.) Some researchers suggest that the shortage of women pursuing degrees in science is due to a lack of preparation and mentoring.
Recent research reported that women are not adequately prepared in K-12 or undergraduate school and so they lose interest in the sciences. According to several studies, in grade 12, high school girls took fewer courses in science, scored slightly lower on standardized science exams, were more likely to have negative attitudes toward science, and were less likely to declare science as a college major, as compared with high school boys. Some of the women students and faculty with whom we talked reported that a strong mentor was a crucial part of their academic training. In fact, some students and faculty told us they had pursued advanced degrees because of the encouragement and support of mentors. Some felt that having women mentors, who served as role models, was important for women considering careers in the sciences. Some pointed out that with few faculty women in some departments in the sciences, it was hard for women students to find women mentors. However, we found that women who begin college with an engineering, mathematics, or science major had similar rates of completing a bachelors degree within 6 years as their male counterparts, according to the Beginning Postsecondary Students (BPS) Longitudinal Study. About 65 percent of women did so in 2001, while 18 percent were still enrolled at the end of 6 years and about 17 percent left college without a degree. Comparably, about 62 percent of men completed a bachelors degree within 6 years, while about 19 percent were still enrolled at the end of 6 years and about 19 percent left college without a degree. Women who begin college with majors in the sciences had higher rates of completing a degree in 6 years than women who started college with other majors or undeclared majors. (See app. V for the enrollment status in 2001 of students who began postsecondary education in 1995, by type of initial major and sex.) Several recent studies show that salary and rank differences between men and women can largely be explained by work patterns and choices. Even though the percentage of women in faculty positions has increased, many studies show that women faculty have not yet caught up with men faculty in several areas, including salary and tenure. However, a recent study found that just over 91 percent of the discrepancy between men’s and women’s faculty salaries could be explained by differences in experience, work patterns, seniority, and education levels. Our review of faculty data found that women science faculty compared with men faculty more often taught as their primary responsibility, less often conducted research as their primary responsibility, less often held a first professional degree or PhD, more often worked part-time, more often had less experience, more often were younger, and more often were native U.S. citizens. Similarly, a recent study of the top 50 departments of engineering and science, as ranked by NSF, revealed that women faculty were more often associate or assistant professors than full professors and that women faculty were a minority of tenured faculty in the sciences. Figure 6 shows that the percentage of women faculty by rank varies by field. Several studies have discussed that some women trade off career advancement or higher earnings for a job that offers flexibility to manage work and family responsibilities. In fact, a recent study on part-time faculty found that women faculty are 6 percent more likely than men to prefer part-time employment. 
During our site visits, women faculty told us that juggling family life with a tenure track faculty position was extremely challenging. Some women told us that they felt discouraged from pursuing a tenure track university position because the biological clock and the tenure clock tend to tick simultaneously. Some faculty members told us that they felt they had to put off having children until they achieved tenure or entirely give up the goal of having children, choices that men faculty do not necessarily have to make. Others we spoke with commented that they observed the long hours and difficult work of professors at research universities in the sciences and felt they could not perform well while also devoting time to family responsibilities. In addition, National Center for Education Statistics (NCES) found that men and women faculty also worked in different types of institutions. Among full-time faculty, women were more likely than men to work in 2-year institutions (33 percent versus 23 percent), while men were more likely than women to work in research universities (20 percent versus 14 percent). Women PhD students we interviewed revealed that very few would seek tenure track positions at research institutions. Most said that they would rather become faculty at small colleges or scientists at a laboratory where they thought work pressures would be less intense and they could maintain a more healthy balance between work and family life. Studies have also argued that the variability in men and women’s participation in the sciences may result from discrimination in the workplace or subtler discrimination about what types of career or job choices women can make. NCES recently reported that preparation is not the sole factor leading to women’s low participation in science occupations but that workplace discrimination is a consistent barrier to women in the sciences. In addition, when studying women science faculty issues at MIT, researchers found that, after tenure, many senior women faculty began to feel “marginalized.” These faculty members reported that they sensed they may not have been treated equally with their men colleagues. During our site visits, some women faculty and students told us that the climate in some academic departments was changing for the better over time, as older men faculty, who were unused to working with women, retire. On the other hand, in other departments, women students reported that fellow men students were hostile to women and made it very uncomfortable for women to pursue their studies. Students and faculty we talked with reported that deans, department chairs, and other officials were attempting to bring about positive change for women on their campuses, but that progress would be slow. We found several examples of grant-making agencies that have instituted policies and practices designed to foster greater participation by women in the sciences. While some of the policies and practices are aimed at encouraging more women to pursue and to persist in education in the sciences, others provide time off and fewer teaching duties so junior faculty can balance work and family life while beginning a university career. Finally, a few policies and practices seek to expand the recruiting pool for jobs in the sciences and make them more attractive to women. NSF, as part of its formal evaluation of grant applications, uses a “second criterion,” the impact of the project on U.S. society. 
NSF makes a particular effort to recruit reviewers, experts in the substantive area of the proposal, from nonacademic institutions, minority-serving institutions, and disciplines closely related to the one addressed in the proposal. These reviewers evaluate grant proposals based on two merit criteria: first, what is the intellectual merit of the proposed activity; second, what are the broader societal impacts of the proposed activity. This second criterion includes promoting teaching, broadening the participation of underrepresented groups, and enhancing research infrastructure, such as facilities and partnerships, as well as the integration of diversity into NSF projects, and research mentoring, particularly for students typically underrepresented in the sciences. Projects meeting NSF’s societal impact criterion may increase interest in the sciences among students or provide valuable experience to a diverse group of researchers, but to date there has not been a full evaluation of this criterion. Beyond the first year of graduate school, science education is largely laboratory centered. NSF grantees may take more care to include graduate students or other researchers from diverse backgrounds as staff on their projects. This could help ensure that women and minorities can get the training and experience they need to complete advanced degrees and work in an academic environment. However, the effects of implementing the second criterion have yet to be fully evaluated. A review by the National Academy of Public Administration in 2001 found that NSF does not have adequate data to track changes or improvements to encourage greater participation by underrepresented minority researchers. In addition, researchers told us that many NSF supported projects include outreach components, frequently aimed at undergraduates and K-12 students. Often, analysts speak of an insufficient “pipeline” of women high school and college students planning to pursue higher levels of education in the sciences. The goal of outreach programs is to pique the interest of younger students in the sciences. Outreach activities can include speeches or demonstrations or work opportunities in a laboratory. These outreach activities may encourage some young women, who otherwise might have lost interest, to pursue education in the sciences. Some universities extend the tenure clock by one semester or one year when a junior faculty member has a child. Most commonly, tenure decisions are made several years after appointment as assistant professor. To achieve tenure in the sciences, high productivity in research and publication is required. As one faculty member expressed it, “the biological clock and the tenure clock are perfectly in sync.” Some female faculty put off children until after they gain tenure, often in their late 30s. Allowing junior faculty to “stop the clock” relieves some of the pressure on junior faculty seeking tenure. Many universities allow female faculty only 6 to 8 weeks of paid maternity leave. At some universities, the tenure clock adjustment that comes with the arrival of a child applies to male faculty members as well. Some professors we spoke with told us that often male professors do not play as large a role as women in caring for newborns and can use the extra year to add to their research and publication portfolios. In addition, some junior faculty fear that stopping the clock will be counted against them in the tenure decision. 
Even though adjusting the tenure clock may be university policy, that policy may not be evenly implemented in all departments. Moreover, assistant professors seeking tenure must have many recommendations from established academics in their field, some of whom may not be aware that the tenure candidate stopped the clock. Therefore, some tenure recommendations may criticize resulting gaps in a résumé. Some universities, primarily major research institutions, relieve faculty members of one semester of teaching duties when a child is born or other urgent family issues arise. Some faculty we spoke with noted that there are events other than childbirth that require large amounts of a faculty member's time and attention, such as assisting elderly parents. Reduced teaching loads may operate in tandem with stopping the tenure clock and generally apply to both men and women professors. Relief from teaching duties frees up time to deal with family issues and provides added flexibility in arranging work hours. However, for faculty involved in scientific research, pressure remains to produce results: researchers still have to run their laboratories. Scientists responsible for research projects have to organize the work, supervise graduate students working on the projects, and also advise students on their academic course work and projects. Some faculty we spoke with pointed out that relief from teaching duties may benefit male faculty more than female faculty; to the extent that male faculty are less involved in caring for newborns, they may use the extra time to do additional research or laboratory work. Some universities and at least one laboratory we visited have developed or expanded on-campus child care or made arrangements with nearby facilities. Sometimes, when on-campus facilities are unavailable or inadequate, arrangements may be made with nearby child care providers to reserve a certain number of openings for faculty and staff. However, obtaining child care can still be a problem in some situations, such as care for sick children. One laboratory we visited had plans for developing a separate day care facility for sick children, but the facility has not been built because of a lack of funding. Universities may specify a formal search process for new faculty. Such a process might involve widespread advertising, specify representation of women and minorities on search committees, and require that the candidate pool include members of underrepresented groups. This type of formal process may extend the hiring time; if initial hiring pools are not sufficiently broad, further publicity and additional work by the search committee may be required. Universities may also conduct periodic studies of recruiting, hiring, tenure decisions, salaries, and resources provided. These are among the aspects of university employment that can be quantified so comparisons can be made between male and female professors. Periodic reviews of such data can call the attention of the university or laboratory community to imbalances that may exist, and continuing review of such data helps ensure that inequities do not develop. Schools and laboratories can also conduct periodic surveys of faculty concerns to develop information about factors such as the inclusiveness of the social atmosphere or the presence of sexist attitudes. Though not easily quantifiable, such factors nonetheless affect women's employment experience.
Periodic surveys raise the university or laboratory community's awareness of attitudes and practices that may make the institution uncomfortable for women. University officials hope that greater awareness will help to avoid "marginalization" of female faculty and foster an inclusive atmosphere. Some laboratories subsidize the expenses of obtaining additional education and training for their current employees. Further education may lead to promotions or higher-level work. Such support is not limited to women, but at one laboratory we visited, a high proportion of beneficiaries were women. Some laboratories allow part-time or flextime schedules, which let staff vary their arrival and departure times. Additionally, at least one laboratory we visited allowed job sharing, whereby two employees each work on the same job on a part-time basis, coordinating closely with one another to accomplish the assigned tasks. Each of these alternative work arrangements helps workers balance their personal lives with their work lives and makes it easier for researchers to deal with family responsibilities, which some scientists told us are often borne more by women than by men. Over the past three decades, women have made substantial gains as professionals in the sciences, particularly in the life sciences. A review of their numbers and roles today in the educational pipeline suggests, however, that women will continue to fall short of equal participation. Their lower levels of participation also suggest that they remain an undertapped resource as the nation's demand for scientists grows. Our review of federal science agencies' oversight of Title IX suggests that much of the leverage afforded by this law lies underutilized in the science arena, even as several billion dollars are spent each year on federal science grants. Although Energy, NASA, and NSF have carried out most of the activities required of them under Title IX, the impact of their work may be limited without compliance reviews of grantees and their practices. Given the general lack of knowledge and familiarity with the reach of Title IX and the disincentives for filing complaints against superiors, investigations of complaints alone are not enough for federal agencies to judge whether discrimination exists. Without making full use of all compliance activities available, agencies lack a complete picture of federal grantee efforts to address occurrences of sex discrimination. On the other hand, a more aggressive exercise of oversight on the part of agencies that wield enormous influence in the world of science funding—Energy, NASA, and NSF—would provide an opportunity to advance the goal of Title IX and enable this legislation to better achieve intended results. To fully comply with Title IX regulations, we recommend that the Secretary of Energy and the Director of NSF ensure that compliance reviews of grantees are periodically conducted. To fully comply with Title IX regulations, we recommend that the Administrator of NASA continue to implement the agency's compliance review program to ensure that compliance reviews of grantees are periodically conducted. We provided a draft of this report to the Department of Education, the Department of Energy, the Department of Justice, the National Aeronautics and Space Administration, and the National Science Foundation for review and comment. Officials at each agency confirmed that they had reviewed the draft and generally agreed with its findings and recommendations.
Officials from all five agencies provided us with technical comments, many of which we have incorporated into the report, and formal comments from Education, Energy, NASA and NSF are included in appendixes VII through X. Justice did not provide formal written comments for this report. As discussed in their formal comments and in our report, Energy, NASA, and NSF have begun to take steps, such as providing technical assistance and collecting compliance information from grantees, to ensure greater compliance with Title IX. Although officials at these agencies agree that compliance reviews have not been conducted, officials from each agency reported that they are making efforts to carry out compliance reviews in the future. Where appropriate, we incorporated information about agency efforts in the final version of this report. In the comments from Education, officials reported about compliance reviews and other efforts that Education has conducted on school districts and on nonscience programs at institutions of higher education. While we agree that these efforts may provide greater access for women in higher education science programs, as they may for women in other fields, they were not within the scope of our review and, therefore, were not included in this report. We are sending copies of this report to the Secretary of Education, the Secretary of Energy, the Attorney General, the Administrator of NASA, the Acting Director of NSF, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please call me at (202) 512-8403 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix XI. Because of increased interest about women’s access to mathematics, engineering, and science, which receive billions of dollars in federal assistance, you asked us to determine what is being done to ensure compliance with Title IX in regard to mathematics, engineering, and science. This report addresses: 1) how do the Department of Education (Education), the Department of Energy (Energy), the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF) ensure that federal grant recipient institutions comply with Title IX in mathematics, engineering, and science; 2) what do data show about women’s participation in these fields; and 3) what promising practices exist to promote their participation? We reviewed the legislation and regulations to identify all areas of compliance relevant to each federal agency. We interviewed officials at Education, Energy, NASA, and NSF and gathered documentation to identify agency activities to ensure compliance with Title IX. We analyzed data from the Office for Civil Rights (OCR) at Education and the Equal Employment Opportunity Commission (EEOC), the agencies where most sex discrimination complaints are filed. Given its role as coordinating agency of Title IX compliance, we also gathered data and interviewed officials at the Department of Justice (Justice). We chose to visit research universities and national laboratories because by visiting those institutions we were able to interview future and practicing scientists working in a wide variety of areas. During this phase of the review, we visited seven research universities where we interviewed grant recipients, students and faculty. We also visited six national laboratories where we talked with administrators and scientists. 
Research universities were selected for site visits because they received grants from at least three of the four agencies we reviewed and were near at least one national laboratory and another research university that also met our criteria. Those selected were: Clemson University, Columbia University, Duke University, Stanford University, State University of New York at Stony Brook, University of California, Berkeley, and University of South Carolina. The national laboratories we visited were: Brookhaven National Laboratory, Environmental Measurements Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Savannah River Ecology Laboratory, and Savannah River National Laboratory. To gather nationwide information on women's participation and experiences in mathematics, engineering, and science, we analyzed data from the Integrated Postsecondary Education Data System (IPEDS), the Beginning Postsecondary Students (BPS) Longitudinal Study, and the National Study of Postsecondary Faculty (NSOPF) from Education, and from the Survey of Earned Doctorates (SED) and the Survey of Doctorate Recipients (SDR) from NSF. We also reviewed literature to obtain information about women in the sciences and issues they face preparing for and pursuing careers in the sciences. In addition, we spoke with practitioners regarding promising practices to promote the participation of women in mathematics, engineering, and science. To assess the reliability of the various Education and NSF data sources, we reviewed documentation on how the data were collected and performed electronic tests to look for missing or out-of-range values (a simplified illustration of such tests follows the appendix discussion below). In addition, we reviewed the methodology of studies and reports using generally accepted social science principles as a basis for including their results in our report. Based on these reviews and tests, we found the data and studies sufficiently reliable for our purposes. We conducted our review from July 2003 through June 2004 in accordance with generally accepted government auditing standards. Key legal and regulatory milestones related to Title IX include the following: Title IX, the first federal law specifically prohibiting sex discrimination at educational institutions receiving federal financial assistance (20 U.S.C. §§ 1681-1688), and its implementing regulations (45 C.F.R. Part 86, currently found at 34 C.F.R. Part 106); a Supreme Court decision holding that private parties may file suit in Title IX cases (441 U.S. 677 (1979)); Executive Order 12250, which delegated to Justice authority to coordinate the implementation and enforcement by federal agencies of various nondiscrimination provisions, including Title IX; the Grove City College v. Bell decision, which held that Title IX applied only to programs that directly receive or benefit from federal financial assistance (465 U.S. 555 (1984)); legislation requiring all programs of an educational institution receiving federal funds to be subject to Title IX, superseding the Grove City College v. Bell decision (20 U.S.C. § 1687); a Supreme Court decision holding that monetary damages are available to plaintiffs in private Title IX actions; policy guidance on Title IX prohibitions against sexual harassment in schools; and the Final Common Rule, which provided Title IX enforcement regulations for 21 agencies (65 Fed. Reg. 52858 (Aug. 30, 2000)). Data from the Beginning Postsecondary Students (BPS) Longitudinal Study show that for those who started with a major in the sciences, there is no appreciable difference between men and women in the proportion that have completed a bachelors degree 6 years after starting college. Among those who started college with nonscience or undeclared majors, a greater proportion of women than men had achieved bachelors degrees within 6 years.
For both men and women, those who began college majoring in the sciences were more likely to have earned degrees within 6 years than those who began college with nonscience or undeclared majors. Data from the Integrated Postsecondary Education Data System (IPEDS) show that the proportion of students, and of degree earners, who are women varies substantially from one area to another in the sciences. In addition to those named above, Kopp Michelotti, Kelsey Bright, John Mingus, James Rebbe, Richard Burkard, and Sue Bernstein made important contributions to this report.
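The electronic tests for missing or out-of-range values described in the scope and methodology above can be illustrated with a minimal Python sketch. The file name, field names, and valid ranges used here are hypothetical and do not come from the actual IPEDS, BPS, NSOPF, SED, or SDR files; the sketch shows only the general form such reliability checks can take.

    # Minimal sketch of electronic tests for missing or out-of-range values.
    # The file name, field names, and ranges below are hypothetical illustrations.
    import csv

    VALID_RANGES = {
        "percent_women_faculty": (0.0, 100.0),  # hypothetical field and range
        "phd_year": (1966, 2003),               # hypothetical field and range
    }

    def check_records(path):
        """Count missing and out-of-range values for each checked field."""
        problems = {field: {"missing": 0, "out_of_range": 0} for field in VALID_RANGES}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                for field, (low, high) in VALID_RANGES.items():
                    value = (row.get(field) or "").strip()
                    if not value:
                        problems[field]["missing"] += 1
                        continue
                    try:
                        number = float(value)
                    except ValueError:
                        problems[field]["out_of_range"] += 1
                        continue
                    if not low <= number <= high:
                        problems[field]["out_of_range"] += 1
        return problems

    if __name__ == "__main__":
        # "survey_extract.csv" is a placeholder file name, not an actual data set.
        for field, counts in check_records("survey_extract.csv").items():
            print(field, counts)

Such tests flag records for follow-up but do not by themselves establish reliability; as noted in the methodology above, documentation on how the data were collected was also reviewed.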
DOD is a massive and complex organization. The department reported that its fiscal year 2007 operations involved approximately $1.5 trillion in assets and $2.1 trillion in liabilities; more than 2.9 million military and civilian personnel; and $544 billion in net cost of operations. For fiscal year 2008, the department has received discretionary budget authority for about $546 billion and reports total obligations of about $492 billion to support ongoing operations and activities related to the Global War on Terrorism. Organizationally, the department includes the Office of the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, the military departments, numerous defense agencies and field activities, and various unified combatant commands that are either responsible for specific geographic regions or specific functions. (See fig. 1 for a simplified depiction of DOD’s organizational structure.) In support of its military operations, the department performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. As we have previously reported, the DOD systems environment that supports these business functions is overly complex and error prone, and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be entered manually into multiple systems. Moreover, the department recently reported that this systems environment is comprised of approximately 3,000 separate business systems. For fiscal year 2007, Congress appropriated approximately $15.7 billion to DOD, and for fiscal year 2008, the department has requested about $15.9 billion in appropriated funds to operate, maintain, and modernize these business systems and associated IT infrastructure. As we have previously reported, the department’s nonintegrated and duplicative systems impair its ability to combat fraud, waste, and abuse. In fact, DOD currently bears responsibility, in whole or in part, for 15 of our 27 high-risk areas. Eight of these areas are specific to the department, while it shares responsibility for seven other governmentwide high-risk areas. DOD’s business systems modernization is one of the high-risk areas, and it is an essential enabler to addressing many of the department’s other high-risk areas. For example, modernized business systems are integral to the department’s efforts to address its financial, supply chain, and information security management high-risk areas. Effective use of an enterprise architecture—a modernization blueprint—is a hallmark of successful public and private organizations. For more than a decade, we have promoted the use of architectures to guide and constrain systems modernization, recognizing them as a crucial means to this challenging goal: optimally defined operational and technological environments. Congress, the Office of Management and Budget (OMB), and the federal Chief Information Officer’s (CIO) Council also have recognized the importance of an architecture-centric approach to modernization. The Clinger-Cohen Act of 1996 mandates that an agency’s CIO develop, maintain, and facilitate the implementation of an information technology architecture. Further, the E-Government Act of 2002 requires OMB to oversee the development of enterprise architectures within and across agencies. 
In addition, we, OMB, and the CIO Council have issued guidance that emphasizes the need for system investments to be consistent with these architectures. A corporate approach to IT investment management is characteristic of successful public and private organizations. Recognizing this, Congress enacted the Clinger-Cohen Act of 1996, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in IT systems made by executive agencies. In response to the Clinger-Cohen Act and other statutes, OMB has developed policy and issued guidance for planning, budgeting, acquisition, and management of federal capital assets. We also have issued guidance in this area. An enterprise architecture provides a clear and comprehensive picture of an entity, whether it is an organization (e.g., a federal department) or a functional or mission area that cuts across more than one organization (e.g., financial management). This picture consists of snapshots of both the enterprise’s current (“As Is”) environment and its target (“To Be”) environment. These snapshots consist of “views,” which are one or more interdependent and interrelated architecture products (e.g., models, diagrams, matrixes, and text) that provide logical or technical representations of the enterprise. The architecture also includes a transition or sequencing plan, which is based on an analysis of the gaps between the “As Is” and “To Be” environments; this plan provides a temporal road map for moving between the two environments and incorporates such considerations as technology opportunities, marketplace trends, fiscal and budgetary constraints, institutional system development and acquisition capabilities, legacy and new system dependencies and life expectancies, and the projected value of competing investments. The suite of products produced for a given entity’s enterprise architecture, including its structure and content, is largely governed by the framework used to develop the architecture. Since the 1980s, various architecture frameworks have been developed, such as John A. Zachman’s “A Framework for Information Systems Architecture” and the DOD Architecture Framework. The importance of developing, implementing, and maintaining an enterprise architecture is a basic tenet of both organizational transformation and systems modernization. Managed properly, an enterprise architecture can clarify and help optimize the interdependencies and relationships among an organization’s business operations (and the underlying IT infrastructure and applications) that support these operations. Moreover, when an enterprise architecture is employed in concert with other important management controls, such as portfolio-based capital planning and investment control practices, architectures can greatly increase the chances that an organization’s operational and IT environments will be configured to optimize mission performance. Our experience with federal agencies has shown that investing in IT without defining these investments in the context of an architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. One approach to structuring an enterprise architecture is referred to as a federated enterprise architecture. Such a structure treats the architecture as a family of coherent but distinct member architectures that conform to an overarching architectural view and rule set. 
This approach recognizes that each member of the federation has unique goals and needs as well as common roles and responsibilities with the levels above and below it. Under a federated approach, member architectures are substantially autonomous, although they also inherit certain rules, policies, procedures, and services from higher-level architectures. As such, a federated architecture enables component organization autonomy while ensuring enterprisewide linkages and alignment where appropriate. Where commonality among components exists, there also are opportunities for identifying and leveraging shared services. A service-oriented architecture (SOA) is an approach for sharing business capabilities across the enterprise by designing functions and applications as discrete, reusable, and business-oriented services. As such, service orientation permits sharing capabilities that may be under the control of different component organizations. As we have previously reported, such capabilities or services need to be, among other things, (1) self-contained, meaning that they do not depend on any other functions or applications to execute a discrete unit of work; (2) published and exposed as self-describing business capabilities that can be accessed and used; and (3) subscribed to via well-defined and standardized interfaces. A SOA approach is thus not only intended to reduce redundancy and increase integration, but also to provide the kind of flexibility needed to support a quicker response to changing and evolving business requirements and emerging conditions. IT investment management is a process for linking IT investment decisions to an organization's strategic objectives and business plans that focuses on selecting, controlling, and evaluating investments in a manner that minimizes risks while maximizing the return on investment. During the selection phase, the organization (1) identifies and analyzes each project's risks and returns before committing significant funds to any project and (2) selects those IT projects that will best support its mission needs. During the control phase, the organization ensures that, as projects develop and investment expenditures continue, they continue to meet mission needs at the expected levels of cost and risk. If the project is not meeting expectations or if problems arise, steps are quickly taken to address the deficiencies. During the evaluation phase, actual versus expected results are compared once a project has been fully implemented. This is done to (1) assess the project's impact on mission performance, (2) identify any changes or modifications to the project that may be needed, and (3) revise the investment management process based on lessons learned. Consistent with this guidance, our IT Investment Management framework (ITIM) consists of five progressive stages of maturity reflecting an agency's capabilities for selecting, controlling, and evaluating investments. (See fig. 2 for the five ITIM stages of maturity.) Stage 2 critical processes lay the foundation by establishing successful, predictable, and repeatable investment control processes at the project level. Stage 3 is where the agency moves from project-centric processes to portfolio-based processes and evaluates potential investments according to how well they support the agency's missions, strategies, and goals.
Organizations implementing these Stages 2 and 3 practices have in place selection, control, and evaluation processes that are consistent with the Clinger-Cohen Act. Stages 4 and 5 require the use of evaluation techniques to continuously improve both investment processes and portfolios in order to better achieve strategic outcomes. The overriding purpose of the framework is to encourage investment selection, control, and evaluation processes that promote business value and mission performance, reduce risk, and increase accountability and transparency. We have used the framework in several of our evaluations, and a number of agencies have adopted it. With the exception of the first stage, each maturity stage is composed of "critical processes" that must be implemented and institutionalized in order for the organization to achieve that stage. Each ITIM critical process consists of "key practices"—including organizational structures, policies, and procedures—that must be executed to implement the critical process. Our research shows that agency efforts to improve investment management capabilities should focus on implementing all lower-stage practices before addressing higher-stage practices. In 2005, the department reassigned responsibility for providing executive leadership for the direction, oversight, and execution of its business systems modernization efforts to several entities. These entities and their responsibilities include the Defense Business Systems Management Committee (DBSMC), which serves as the highest-ranking investment review and decision-making body for business systems modernization activities; the Principal Staff Assistants, who serve as the certification authorities for business system modernizations in their respective core business missions; the Investment Review Boards (IRB), which are chaired by the certifying authorities and form the review and decision-making bodies for business system investments in their respective areas of responsibility; and the Business Transformation Agency (BTA), which is responsible for supporting the DBSMC and the IRBs, and for leading and coordinating business transformation efforts across the department. DOD's component organizations, to varying degrees, have leveraged existing, and established new, business system governance bodies to support their respective investment precertification responsibilities. Table 1 lists these entities and provides greater detail on their roles, responsibilities, and composition. In 2005, DOD reported that it had adopted a "tiered accountability" approach to business transformation. Under this approach, responsibility and accountability for business architectures and systems investment management are assigned to different levels in the organization. For example, the BTA is responsible for developing the corporate BEA (i.e., the thin layer of corporate policies, capabilities, standards, and rules) and the associated enterprise transition plan (ETP). The components are responsible for defining a component-level architecture and transition plans associated with their own tier of responsibility and for doing so in a manner that is aligned with (i.e., does not violate) the corporate BEA. Similarly, program managers are responsible for developing program-level architectures and plans and ensuring alignment with the architectures and transition plans above them. This concept is intended to allow for autonomy while also ensuring linkages and alignment from the program level through the component level to the enterprise level.
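To make this cascading alignment concrete, the following Python sketch checks that a proposed investment identifies an architecture at each tier and that each tier declares alignment with the tier above it. The tier names follow the program/component/enterprise structure described above, but the data values and the alignment test itself are hypothetical simplifications, not DOD's actual certification or review logic.

    # Hedged sketch of tiered accountability: program-level architectures must align
    # with component-level architectures, which must align with the corporate BEA.
    # Data values and the alignment test are illustrative only.

    TIERS = ["program", "component", "enterprise"]  # lowest to highest

    def check_alignment(investment):
        """Return a list of alignment problems for a proposed investment."""
        problems = []
        for lower, higher in zip(TIERS, TIERS[1:]):
            lower_arch = investment.get(lower)
            if lower_arch is None:
                problems.append(f"no {lower}-level architecture defined")
                continue
            if higher not in lower_arch.get("aligned_with", []):
                problems.append(f"{lower}-level architecture not aligned with {higher} tier")
        return problems

    # Hypothetical example: a program architecture aligned with its component,
    # and a component architecture aligned with the corporate (enterprise) BEA.
    example = {
        "program": {"aligned_with": ["component"]},
        "component": {"aligned_with": ["enterprise"]},
        "enterprise": {"aligned_with": []},  # the corporate BEA is the top tier
    }

    print(check_alignment(example))  # an empty list means no alignment problems found

In practice, DOD's IRBs and certification authorities apply far more detailed criteria; the sketch is meant only to show how alignment obligations cascade from the program tier up to the corporate BEA.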
Table 2 describes the four investment tiers and identifies the associated reviewing and approving entities. Congress included six provisions in the fiscal year 2005 National Defense Authorization Act that are aimed at ensuring DOD's development of a well-defined BEA and associated ETP, as well as the establishment and implementation of effective investment management structures and processes. The requirements are as follows:
1. Develop a BEA that includes an information infrastructure that, at a minimum, enables the department to comply with all federal accounting, financial management, and reporting requirements; routinely produce timely, accurate, and reliable financial information for management purposes; integrate budget, accounting, and program information and systems; and provide for the systematic measurement of performance, including the ability to produce timely, relevant, and reliable cost information. The architecture is also to include policies, procedures, data standards, and system interface requirements that are to be applied uniformly throughout the department and is to be consistent with OMB policies and procedures.
2. Develop an ETP for implementing the architecture that includes an acquisition strategy for new systems needed to complete the architecture; a list and schedule of legacy business systems to be terminated; a list and strategy of modifications to legacy business systems; and time-phased milestones, performance metrics, and a statement of financial and non-financial resource needs.
3. Identify each business system proposed for funding in DOD's fiscal year budget submissions and include a description of the certification made on each business system proposed for funding in that budget; funds, identified by appropriations, for current services and for business systems modernization; and the designated approval authority for each business system.
4. Delegate the responsibility for business systems to designated approval authorities within the Office of the Secretary of Defense.
5. Require each approval authority to establish investment review structures and processes, including a hierarchy of IRBs, each with appropriate representation from across the department. The review process must cover review and approval of each business system by an IRB before funds are obligated; at least an annual review of every business system investment; use of threshold criteria to ensure an appropriate level of review; use of procedures for making architecture compliance certifications; use of procedures consistent with DOD guidance; and incorporation of common decision criteria.
6. Effective October 1, 2005, DOD may not obligate appropriated funds for a defense business system modernization with a total cost of more than $1 million unless the approval authority certifies that the business system modernization complies with the BEA and is necessary to achieve a critical national security capability or address a critical requirement in an area such as safety or security, or is necessary to prevent a significant adverse effect on an essential project in consideration of alternative solutions; the certification must also be approved by the DBSMC.
In November 2005, May 2006, and May 2007, we reported that DOD had partially satisfied four of the six business system modernization requirements in the fiscal year 2005 National Defense Authorization Act relative to architecture development, transition plan development, budgetary disclosure, and investment review.
In addition, we reported that it had fully satisfied the requirement concerning designated approval authorities and it was in the process of satisfying the last requirement for certification and approval of modernizations costing in excess of $1 million. As a result, each report concluded that the department had made important progress in defining and beginning to implement institutional management controls (i.e., processes, structures, and tools). However, each report also concluded that much remained to be accomplished relative to the act's requirements and relevant guidance. Among other things, this included developing component architectures that are aligned with the corporate BEA and ensuring that investment review and approval processes are fully developed and institutionally implemented across all organizational levels. Notwithstanding this progress on business systems modernization, we previously reported and more recently testified in February 2008 that two items remained to be done before DOD's overall business transformation efforts, which include business systems modernization, would be on a sustainable path to success. First, DOD had yet to establish a strategic planning process that results in a comprehensive, integrated, and enterprisewide plan or set of plans that would guide transformation. Second, DOD had yet to designate a senior official who could provide full-time attention and oversight to the business transformation effort. Subsequently, the National Defense Authorization Act for Fiscal Year 2008 designated the Deputy Secretary of Defense as the department's Chief Management Officer (CMO), created a Deputy CMO position, and designated the undersecretaries of each military department as CMOs for their respective departments. The act also required the Secretary of Defense, acting through the CMO, to develop a strategic management plan that, among other things, is to include a detailed description of performance goals and measures for improving and evaluating the overall efficiency and effectiveness of the business operations of the department. According to DOD, steps have been taken and are ongoing to address these provisions. We also testified in February 2008 that DOD continues to take steps to comply with key business systems modernization legislative requirements, but that much remained to be accomplished before the full intent of this legislation would be achieved. In particular, we stated that DOD's BEA, while addressing several issues previously reported by us, was still not sufficiently complete to effectively and efficiently guide and constrain business system investments across all levels of the department. Most notably, the BEA did not yet include well-defined architectures for DOD's components, and DOD's strategy for "federating" or extending its architecture to the military departments and defense agencies was still evolving and had yet to be implemented. In addition, the scope and content of the department's ETP still did not address DOD's complete portfolio of IT investments. We also testified that while the department had established and begun to implement legislatively mandated corporate investment review structures and processes, neither DOD nor the military departments had done so in a manner that was fully consistent with relevant guidance. DOD continues to take steps to comply with the requirements of the act and to satisfy relevant systems modernization management guidance.
In particular, on March 14, 2008, DOD released an update to its BEA (version 5.0) and ETP, and issued its annual report to Congress describing steps that have been taken and are planned relative to the act's requirements, among other things. Collectively, these steps address several legislative provisions and best practices concerning the BEA, transition plan, budgetary disclosure, and investment review of systems costing in excess of $1 million. However, additional steps are needed to fully comply with the act and relevant guidance. Most notably, the department has yet to extend and evolve its corporate BEA to the department's component organizations' (military departments and defense agencies) architectures and fully define IT investment management policies and procedures at the corporate and component levels. BTA officials agree that additional steps are needed to fully implement the act's requirements and our related recommendations. According to these officials, DOD leadership is committed to fully addressing these areas and efforts are planned and under way to do so. Among other things, the act requires DOD to develop a BEA that would cover all defense business systems and the functions and activities supported by defense business systems and enable the entire department to (1) comply with all federal accounting, financial management, and reporting requirements, (2) routinely produce timely, accurate, and reliable financial information for management purposes, and (3) include policies, procedures, data standards, and system interface requirements that are to be applied throughout the department. As such, the act provides for an architecture that extends to all defense organizational components. In 2006, the department adopted an incremental and federated approach to developing such an architecture. Under this approach, the department committed to releasing new versions of its BEA every 6 months that would include a corporate BEA that was augmented by a coherent family of component architectures. As we have previously reported, such an approach is consistent with best practices and appropriate given DOD's scope and size. In 2007, we reported that the then-current version of the BEA (version 4.1) resolved several of the architecture gaps associated with the prior version and added content proposed by DOD stakeholders, but that gaps still remained. On March 14, 2008, DOD released BEA 5.0, which addresses some of these remaining gaps. For example, it improves the Financial Visibility business enterprise area by expanding the Standard Financial Information Structure data elements (i.e., types of data) associated with information exchanges among operational nodes (e.g., organizational units or system functions) to include data attributes (characteristics of data elements). In addition, the latest version introduces data standards for the Enterprise Funds Distribution initiative. Together, these additions bolster the department's efforts to standardize financial data across DOD so that information is available to inform corporate decision making. Version 5.0 of the BEA also addresses, to varying degrees, missing elements, inconsistencies, and usability issues that we previously identified. Examples of these improvements and remaining issues are summarized below. The latest version includes performance metrics for the business capabilities within enterprise priority areas, including actual performance relative to performance targets that are to be met.
For example, it states that 62 percent of DOD assets are now using the Department of the Treasury's United States Standard General Ledger-compliant formats, as compared to a target of 100 percent. Further, this version provides actual baseline performance for operational activities (e.g., "Manage Audit and Oversight of Contractor"). As we previously reported, performance models are an essential part of any architecture because having defined performance baselines to measure actual performance against provides the means for knowing whether the intended mission value to be delivered by each business process is actually being realized. The latest version includes important "As Is" information (e.g., current capability problems and limitations that enterprise priorities are to address and their root causes) for all six business enterprise priorities. As we previously reported, such "As Is" content is essential for analyzing capability gaps that in turn inform the plan for transitioning from the "As Is" to the "To Be" environments. The latest version includes 1,201 new business rules. As we previously reported, business rules are important because they explicitly translate business policies and procedures into specific, unambiguous rules that govern what can and cannot be done. As such, they facilitate consistent implementation of policies and procedures. Examples of new business rules are that (1) each request for commercial export of DOD technology must be processed within 30 days of request from the Department of State or the Department of Commerce and (2) DOD must first seek to acquire commercial items before developing military unique material. In addition to adding business rules, Version 5.0 reflects the deletion of 1,046 business rules that were no longer applicable and thus obsolete. Notwithstanding these additions and deletions, BEA 5.0 still does not provide business rules for all business processes. For example, there are no business rules for the "Perform Acceptance Procedures for Other Goods and Services" business process under the Common Supplier Engagement enterprise priority area. Also, business rules are defined at inconsistent levels of detail. For example, the Travel Authorization business rule states that each travel authorization must be processed in accordance with the Allowance Law; however, it does not identify the specific conditions that must be met. In contrast, the Trial Balance Reporting business rule is more explicitly defined, specifically citing the conditions under which actions are to be taken. Without well-defined business rules, policies and procedures can be implemented inconsistently because they will be interpreted differently by different organizations. The latest version includes updates on the information that flows among operational nodes (i.e., organizations, business operations, and system elements). Information flows are important because they define what information is needed and where and how the information moves to and from operational entities. In particular, Version 5.0 adds 240 new information exchanges (e.g., Accounts Payable) among business operations and 28 data exchanges (e.g., Acknowledge Inter-governmental Order) among system elements. However, it still does not provide information flows for all organizational units.
For example, it does not identify information exchanges among the organizations that support the Human Resources Management enterprise priority area, and continues to lack information flows among DOD corporate and component organizations. Without such information exchanges, a common understanding of the semantic meaning of the information moving among these organizations does not exist. Moreover, Version 5.0 contains information exchanges (e.g., Accounts Payable Account) that are not attached or linked to any operational nodes. Further, this version's information-related architecture products contain inconsistencies. For example, "Acceptance Results" is identified as a new information exchange in the integrated dictionary, but it is not in the operational information exchange product. The latest version expands on the operational activities that are or will be performed at each location and by each organization. For example, it now identifies the Defense Logistics Agency as one of the organizations involved in the "Authorize Return or Disposal" activity. However, as was the case with BEA Version 4.1, not all operational activities are assigned to an organization. For example, the "Manage Capabilities Based Acquisition" activity is not assigned. In addition, BEA 5.0 still does not include the roles and responsibilities of organizations performing the same operational activity, which is important because not doing so can result in either duplicative organizational efforts or gaps in activity coverage. Moreover, BEA 5.0 still does not include the Foreign Military Sales operational activity, which affects multiple DOD business missions and organizations. The latest version continues to lack important security architecture content. For example, while DOD officials told us that the Enterprise Information Environment Mission Area will provide infrastructure information assurance services (e.g., secure, reliable messaging) for business systems and applications, this information is not reflected in the latest version. Also, this version still does not describe relevant information assurance requirements contained in laws, regulations, and policies, or provide a reference to where these requirements are described. Such information is essential to adequately reflect security in the BEA, and thereby ensure that designs for business systems, applications, and services comply with applicable information assurance requirements. Beyond the above-discussed limitations, Version 5.0 also continues to represent only the thin layer of corporate architectural policies, capabilities, rules, and standards that apply DOD-wide (i.e., to all DOD federation members). This means that Version 5.0 appropriately focuses on addressing a limited set of enterprise-level (DOD-wide) priorities, and providing the overarching and common architectural context that the distinct and substantially autonomous member (i.e., component) architectures inherit. However, this also means that Version 5.0 does not provide the total federated family of DOD parent and subsidiary architectures for the business mission area that are needed to comply with the act. To produce the federated BEA, the BTA released an update to its federation strategy in January 2008. (See fig. 3 for a simplified diagram of DOD's federated BEA.)
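The kinds of content gaps described above—information exchanges not attached to any operational node, business processes without business rules, and operational activities not assigned to an organization—are, in essence, referential-integrity checks over the architecture's content. The following is a minimal sketch of such checks; the simplified data structures and sample entries are illustrative assumptions and do not reflect how BEA 5.0 actually represents or stores its content.

```python
"""Illustrative referential-integrity checks over a simplified architecture model.
The data structures and sample entries are hypothetical, not BEA 5.0 content."""

# Simplified architecture content: who exchanges what, which processes have rules,
# and which activities are assigned to an organization.
operational_nodes = {"Accounting Office", "Supplier Management Office"}
info_exchanges = {
    "Accounts Payable": ("Accounting Office", "Supplier Management Office"),
    "Acceptance Results": (None, None),   # defined in the dictionary but unattached
}
business_rules = {"Process Travel Authorization": ["Must comply with allowance rules"]}
processes = ["Process Travel Authorization", "Perform Acceptance Procedures"]
activity_assignments = {"Authorize Return or Disposal": "Defense Logistics Agency",
                        "Manage Capabilities Based Acquisition": None}


def find_gaps():
    """Flag architecture elements that lack the linkages described in the text."""
    gaps = []
    for name, (src, dst) in info_exchanges.items():
        if src not in operational_nodes or dst not in operational_nodes:
            gaps.append(f"exchange '{name}' is not attached to two operational nodes")
    for proc in processes:
        if not business_rules.get(proc):
            gaps.append(f"process '{proc}' has no business rules")
    for activity, org in activity_assignments.items():
        if org is None:
            gaps.append(f"activity '{activity}' is not assigned to an organization")
    return gaps


for gap in find_gaps():
    print(gap)
```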
In April 2007, we reported on the prior version of this federation strategy, concluding that while it provided a foundation on which to build and align DOD's parent BEA with its subsidiary architectures, it lacked sufficient details to permit effective and efficient execution. Accordingly, we made recommendations to improve the strategy. The updated strategy, along with the associated global information grid (GIG) strategy, partially addresses our recommendations. For example, the strategies now provide high-level roles and responsibilities for federating the architecture and additional definition around the tasks needed to achieve alignment among DOD and component architectures. In particular, the strategy for the business mission area provides for conducting pilot programs across the components to demonstrate the technical feasibility of architecture federation. BTA and CIO officials described the strategy for federating DOD's architectures as still evolving. They added that lessons learned from the pilots will be used to improve and update the strategies. They also noted that subsequent releases of the corporate BEA will reflect the evolving federation strategy by, for example, defining enforceable interfaces to ensure interoperability and information sharing. To assist the department in its BEA federation efforts, we have made a number of recommendations. While DOD agreed with these recommendations, it did not implement one related to its latest annual report. Specifically, we previously recommended that DOD include in its annual report, required under the National Defense Authorization Act for Fiscal Year 2005, the results of its BEA independent verification and validation (IV&V) contractor's assessment of the completeness, consistency, understandability, and usability of the federated family of architectures. However, its latest annual report does not include this information. According to BTA officials, this is because the contractor's report was not finalized in time to include the results. While we have yet to receive either the contractor's statement of work or the results of the contractor's assessments, BTA officials provided us with a report dated April 11, 2008, that summarizes selected IV&V contractor observations and recommendations relative to Version 5.0's ability to provide a foundation for BEA federation. Overall, the summary confirms our findings by stating that while the BEA provides a foundation for federation, much remains to be done before the department will have a complete family of architectures. In this regard, it provides several recommendations, such as having BTA track, measure, and report on the adoption of shared vocabularies and standards within the component architectures. However, the summary does not demonstrate that the IV&V contractor is being used to address the full scope of our recommendation. For example, the summary does not address the extent to which the department's federated family of architectures, including the related transition plan(s), are complete, consistent, understandable, and usable. The challenges that the department faces in federating its BEA, and the importance of disclosing to congressional defense committees the state of its federation efforts, are amplified by our recent report on the current state of the military departments' enterprise architecture programs.
Specifically, we reported in May 2008, that none of the three military departments could demonstrate through verifiable documentation that it had established all of the core foundational commitments and capabilities needed to effectively manage the development, maintenance, and implementation of an architecture, although in relative terms the state of the Air Force’s architecture efforts was well ahead of those of the Navy and Army. Examples of their architecture limitations are discussed below. None of the military departments had fully defined its “As Is” and “To Be” architecture environments and associated transition plans. This is important because without a full understanding of architecture-based capability gaps, the departments would not have an adequate basis for defining and sequencing its ongoing and planned business system investments. None of the military departments had fully addressed security as part of its respective “As Is” and “To Be” environments. This is important because security is relevant and essential to every aspect of an organization’s operations, and therefore the nature and substance of institutionalized security requirements, controls, and standards should be embedded throughout the architecture, and reflected in each system investment. None of the military departments was using an IV&V agent to help ensure the quality of its architecture products. IV&V is a proven means for obtaining unbiased insight into such essential architecture qualities as completeness, understandability, usability, and consistency. None of the military departments had established a committee or group with representation from across the enterprise to direct, oversee, and approve its architecture. This is significant because the architecture is a corporate asset that needs to be enterprisewide in scope and endorsed by senior leadership if it is to be leveraged for optimizing operational and technology change. None of the military departments could demonstrate that its IT investments were actually in compliance with its architectures. This is relevant because the benefits from using an architecture, such as improved information sharing, increased consolidation, enhanced productivity, and lower costs, cannot be fully realized unless individual investments are actually in compliance with, among other things, architectural rules and standards. To address these limitations, we have made recommendations aimed at improving the management and content of these architectures. DOD agreed with our recommendations. Until DOD has a well-defined family of architectures for its business mission area, it will not fully satisfy the requirements of the act and it will remain challenged in its ability to effectively manage its business system modernization efforts. Among other things, the act requires DOD to develop an ETP for implementing its BEA that includes listings of the legacy systems that will and will not be part of the target business systems environment and specific time-phased milestones and performance metrics for each business system investment. In 2007, we reported that the then version of the ETP addressed several of the missing elements that we previously identified relative to the act’s requirements and relevant guidance. However, we also reported that the ETP was limited in several ways. On March 15, 2008, DOD released the latest version of its ETP, which provides required information on 102 programs (systems and initiatives) that are linked to key transformational objectives. 
For example, it includes specific time-phased milestones for about 90 business system programs and performance metrics for about 75 of these. Further, the latest version of the ETP discusses progress made on business system investments over the last 6 months, as well as descriptions of planned near-term activities (i.e., next 6 months). For example, the ETP reports that the Defense Integrated Military Human Resources System program completed all interface designs required for system deployment to the Army and to defense agencies, and over half of the interface designs required for deployment to the Air Force. It also states that system interface testing and operational testing for the Army deployment will be completed in the next 6 months. In addition, the ETP reports that the Contractor Performance Assessment Reporting System was fully implemented following replacement of a proprietary software product with an open source product and rehosting of this product to a new facility. As a result, improvements in system performance, reliability, and security were attained. This version also partially addresses issues that we identified in our prior report. Examples of improvements and remaining issues are summarized here. The latest version contains the results of analyses of gaps between its "As Is" and "To Be" architectural environments, in which capability and performance shortfalls are described and investments (such as transformation initiatives and systems) that are to address these shortfalls are identified. It also discusses planned and ongoing gap analyses. For example, it relates the DOD Electronic Mall investment to the Common Supplier Engagement business enterprise priority area and describes how it will address business capability gaps by providing access to off-the-shelf finished goods and services from both commercial and government sources. It also describes how related performance shortfalls will be addressed through shorter logistics response time, improved visibility of sources of supplies, one-stop tracking of order status, and improved ability to shop for best price. As we stated, determining how business capability gaps between the baseline and target architecture are to be addressed for all priority areas is key to the department's transition plan's ability to support informed investment selection and control decisions. The latest version provides a range of information for the 102 systems and initiatives identified, such as 3 years of budget information for 67 of these systems and initiatives. However, as we reported last year, the plan has yet to address our prior finding regarding the inclusion of system and budget information for investments by 13 of DOD's 15 agencies and for eight of its nine combatant commands. At that time, BTA officials stated that information for these defense agencies and combatant commands was excluded because the ETP focused on those business-related organizations having the majority of the tier 1 and 2 business investments, and the majority of the defense agencies and combatant commands do not have investments that meet these threshold criteria. However, not all DOD components have developed subordinate transition plans. For example, we recently reported that only one military department, the Air Force, had developed a transition plan and that this plan was limited because it did not include an analysis of the gap in capabilities between its "As Is" and "To Be" environments. This means that, similar to DOD's federated BEA, a complete family of DOD and component transition plans does not yet exist.
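The gap analyses described above amount to tracing each capability shortfall to the investment or initiative intended to close it, so that unaddressed gaps and unlinked investments are visible to decision makers. The following is a minimal sketch of that tracing logic; the gap descriptions and the second investment name are hypothetical examples, not ETP content.

```python
"""Illustrative capability-gap coverage check for a transition plan.
Gap descriptions and the second investment name are hypothetical examples."""

# Each business enterprise priority lists capability gaps between the "As Is"
# and "To Be" environments; each investment declares the gaps it is to close.
capability_gaps = {
    "Common Supplier Engagement": ["No one-stop order status tracking",
                                   "Limited visibility of sources of supply"],
    "Financial Visibility": ["Nonstandard financial data across components"],
}
investments = {
    "DOD Electronic Mall": ["No one-stop order status tracking",
                            "Limited visibility of sources of supply"],
    "Hypothetical financial system": [],   # funded but tied to no documented gap
}


def coverage_report():
    """Print which gaps are addressed and which investments are unlinked."""
    addressed = {gap for gaps in investments.values() for gap in gaps}
    for priority, gaps in capability_gaps.items():
        for gap in gaps:
            status = "addressed" if gap in addressed else "UNADDRESSED"
            print(f"{priority}: {gap} -> {status}")
    for name, gaps in investments.items():
        if not gaps:
            print(f"Investment '{name}' is not linked to any capability gap")


coverage_report()
```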
The latest version provides performance measures for both enterprise and component investments (i.e., programs), including key milestones (e.g., initial operating capability). However, it does not include other important information needed to understand the sequencing of these investments. In particular, the planned investments are not sequenced based on a range of important factors cited in federal guidance, such as technology opportunities, marketplace trends, fiscal and budgetary constraints, institutional system development and acquisition capabilities, new and legacy system dependencies and life expectancies, and the projected value of competing investments. While the ETP has begun to incorporate some top-down analysis based on gaps in the business enterprise priorities, the plan continues to be largely based on a bottom-up planning process in which ongoing programs were examined and categorized in the plan around business enterprise priorities. For example, many of these investments are dependent on Net-Centric Enterprise Services (NCES) for core services, and as such the plans and milestones for each should reflect the incremental capability deployment of NCES. According to the BTA official responsible for the ETP, the investments were sequenced based only on fiscal year budgetary constraints. However, BTA officials said that they intend to depict investment dependencies in future versions of the ETP, especially program-to-program dependencies associated with adoption of a service-oriented architecture approach. The latest version of the ETP also includes discussion of how the department plans to use enterprise application integration, including plans, methods, and tools for reusing applications that already exist while also adding new applications and databases. However, as we reported last year, this discussion lacks specifics on which investments will reuse which applications. According to BTA officials, a number of actions are envisioned to address the above-cited areas and further improve the ETP, such as adding the results of capability gap analyses for all business priority areas, including tier 1 and 2 programs for all components, and recognizing dependencies among investments. Until the ETP, or a federated family of such plans, either directly or by reference includes relevant information on the full inventory of investments across the department (and does so in a manner that reflects consideration of the range of variables associated with a well-defined transition plan, such as timing dependencies among investments and the department's capability to manage them), it will not have a sufficient basis for informed investment decision making regarding disposition of the department's existing inventory of systems or for sequencing the introduction of modernized systems. To help DOD address its transition planning challenges, we have previously made recommendations that the department is in the process of addressing. Among other things, the act requires DOD's annual IT budget submission to include key information on each business system for which funding is being requested, such as the system's designated approval authority and the appropriation type and amount of funds associated with development/modernization and current services (i.e., operation and maintenance). The department's fiscal year 2009 budget submission includes a range of information for the approximately 3,000 business system investments for which DOD is requesting funding.
Of these, 273 involve modernization/development activities. For each of the 273, the information provided includes the system's (1) name, (2) approval authority, and (3) appropriation type. The submission also identifies the amount of the fiscal year 2009 request that is for development/modernization versus operations/maintenance. For example, for the Army's General Fund Enterprise Business System, the amounts of modernization funds related to "Other Procurement, Army" and "Research, Development, Testing and Evaluation, Army" are identified. For systems in excess of $1 million in modernization funding, the submission also cites each system's certification status (e.g., approved, approved with conditions, not applicable, and withdrawing) and the DBSMC approval date, where applicable. The National Defense Authorization Act for Fiscal Year 2005 requires DOD to establish business system investment review structures, such as the previously mentioned DBSMC and five IRBs, and processes that are consistent with the investment management provisions of the Clinger-Cohen Act. As we have previously reported, organizations that have satisfied stages 2 and 3 of our ITIM framework have established the investment selection, control, and evaluation structures, and the related policies, procedures, and practices that are consistent with the investment management provisions of the Clinger-Cohen Act. DOD and the Air Force have established the kind of investment management structures provided for in the act and our ITIM framework. However, the Navy has not. Moreover, neither DOD, the Air Force, nor the Navy has defined the full range of related investment management policies and procedures that our framework identifies as necessary to effectively manage investments as individual business system projects (stage 2) and as portfolios of projects (stage 3). Accordingly, we made recommendations to address these limitations, which the department is in the process of addressing. Until all of DOD has in place these requisite investment management structures and supporting policies and procedures, the billions of dollars that the department and its components invest annually in business systems will remain at risk. DOD has partially established the organizational structures that are associated with Stages 2 and 3 of our framework. Specifically, we reported in May 2007 that the department had established an enterprisewide investment board and four subordinate boards, and assigned them responsibility for business systems investment governance, including conducting investment certification and approval reviews and annual reviews as provided for in the act. The enterprisewide board—the DBSMC—is composed of senior executives, such as the Deputy Secretary of Defense and the ASD(NII)/CIO, as provided for in the act. Among other things, the DBSMC is responsible for establishing and implementing policies governing the organization's investment process and approving lower-level investment board processes and procedures. The subordinate boards include four IRBs that are composed of senior officials representing their respective business areas, including representatives from the combatant commands, defense agencies, military departments, and Joint Chiefs of Staff. Among other things, the IRBs are responsible and accountable for overseeing and controlling certain business system investments, including ensuring compliance and consistency with the BEA.
The department has also assigned responsibility to the USD(AT&L) for managing business system portfolio selection criteria. However, as we reported last year, the department has yet to establish the fifth review board required pursuant to the act, the Enterprise Information Environment Mission Area IRB. According to ASD(NII)/CIO officials, this board has been operating under a draft concept of operations for about 2 years, but has not been chartered because of issues surrounding its authority across IT infrastructure-related investments. However, they stated that a policy is expected to be approved and issued by the end of May 2008 that will, among other things, establish a CIO Enterprise Guidance Board that will meet the act’s requirements for Enterprise Information Environment Mission Area IRB. Specifically, the policy is to provide the Enterprise Guidance Board with DOD-wide oversight of IT investments. With respect to the military departments’ investment management structures, we reported in October 2007 that the Air Force had established the organizational structures associated with stages 2 and 3 of our framework. Specifically, it has instituted a business systems IRB, called the Senior Working Group, consisting of senior executives from the functional business units, including the Office of the Air Force CIO. This group has been assigned responsibility for business system investment governance, including conducting investment precertification and approval reviews and annual reviews, as required by the act. However, we also reported in October 2007 that the Navy had not established such investment management structures. Specifically, it did not have an enterprisewide IRB, composed of senior executives from its IT and business units, to define and implement a Navy-wide business system governance process. Without such structures, we concluded that the Navy’s ability to ensure that business system investment decisions are made consistently and reflect the needs of the organization is limited. Accordingly we made a recommendation to the Navy for establishing these management structures. Neither DOD nor the departments of the Air Force and the Navy have defined the full range of policies and procedures needed to effectively support project-level (stage 2) and portfolio-based (stage 3) investment management practices. While the department is in the process of developing a new methodology for managing its business system investments throughout their life cycles that it reports will address this lack of policies and procedures, this new methodology is still in draft, has not been approved, and we have yet to be provided a copy. Until these missing policies and procedures are defined, it is unlikely that the thousands of DOD business system investments will be managed in a consistent, repeatable, and effective manner. To DOD’s credit, it has defined corporate policies and procedures relative to several key practices in our ITIM framework that are associated with project-level investment management (stage 2). However, it does not have the full range of project-level policies and procedures needed for effective investment management. 
Specifically, we reported in May 2007 that DOD had satisfied several policy- and procedure-related stage 2 practices, such as requiring that systems support ongoing and future business needs through alignment with the BEA, having procedures for identifying and collecting information about these systems to support DBSMC and IRB investment decision making, and assigning responsibility for ensuring that the information collected about projects meets the needs of DOD's investment review structures and processes. However, we also reported that it had not, for example, developed policies and procedures outlining how the DBSMC/IRB investment review processes are to be coordinated with other decision-support processes used at DOD, such as the Joint Capabilities Integration and Development System; the Planning, Programming, Budgeting, and Execution process; and the Defense Acquisition System. Without clear linkage among these processes, inconsistent and uninformed decision making may result. Furthermore, without considering component and corporate budget constraints and opportunities, the IRBs risk making investment decisions that do not effectively consider the relative merits of various projects and systems when funding limitations exist. Other important project-level, as well as portfolio-based, investment management policies and procedures that we reported as lacking include ones that (1) specify how the full range of cost, schedule, and benefit data accessible by the IRBs is to be used in making selection decisions; (2) ensure sufficient oversight and visibility into component-level (e.g., Air Force and Navy) investment management activities, including component reviews of systems in operations and maintenance; (3) define the criteria to be used for making portfolio selection decisions; (4) create the portfolio of business systems investments; and (5) provide for conducting postimplementation reviews of these investments. DOD agreed with our findings and described actions that it planned to take to address our recommendations, including developing a new life cycle management methodology for business systems. In addition, it stated that while its actions would improve the department's corporate policies and procedures for business system investments, each component is responsible for developing and executing investment management policies and procedures needed to manage its business systems. In this regard, the military departments also have not developed the full range of related investment management policies and procedures needed to execute the project- and portfolio-level practices reflected in our ITIM framework. Specifically, we reported in October 2007 that the state of the Air Force and the Navy's investment management policies and procedures was similar to that of DOD in that while several of our ITIM framework stage 2 practices were satisfied, others were not, and none of the stage 3 practices were satisfied. For example, both the Air Force and the Navy, to their credit, had developed procedures for identifying and collecting information about their business systems to support investment selection and control, and assigned responsibility for ensuring that the information collected during project identification meets the needs of the investment management process.
However, neither the Air Force nor the Navy had fully documented policies and procedures for overseeing the management of business system investments and for developing and managing complete business systems investment portfolio(s). Among other things, they did not have policies and procedures that specify decision-making processes for program oversight and describe how corrective actions should be taken when projects deviate from their project management plans. Without such policies and procedures, we concluded that both are at risk of investing in systems that are duplicative, stovepiped, nonintegrated, and unnecessarily costly to manage, maintain, and operate. To address these areas, we made recommendations aimed at implementing our framework's stage 2 and 3 practices, and DOD partially agreed with these recommendations. DOD reports that it has begun to address our investment management findings and recommendations. Specifically, it has drafted and is piloting aspects (e.g., an Enterprise Risk Assessment Methodology) of a new lifecycle management methodology, called the Business Capability Lifecycle (BCL). The annual report states that these pilots have validated the BCL and that interim guidance for major business systems has been developed. However, the new methodology has yet to be approved. Further, BTA officials stated that plans for its finalization and full implementation have been placed on hold until the department has implemented the Chief Management Officer (CMO) provisions of the National Defense Authorization Act for Fiscal Year 2008. Based on a draft of the BCL and descriptions of it contained in the annual report and briefed to us by BTA officials, this new lifecycle methodology could address some, but not all, of the policy and procedure gaps that we have recently reported. For example, the BCL is to consolidate DOD's currently distinct and separate system requirements, acquisition, and architectural/investment oversight processes into a single governance process. However, while lack of integration among these separate processes is a limitation that we reported regarding DOD's business system investment management policies and procedures, this limitation also included lack of integration with DOD's budgeting process. Unless this new lifecycle methodology incorporates DOD's funding process, the risk of the respective processes producing inconsistent investment decisions remains. The following are other examples of investment management policy and procedure limitations cited in our recent reports that the draft of the BCL methodology does not fully address. The BCL does not apply to programs after they have completed development/modernization activities and are in an operations and maintenance mode, except for certain programs designated as "special interest." As we recently reported, our ITIM framework provides for including both new system development/acquisition investments and operations and maintenance of existing system investments in the investment management process. According to the department, it plans to examine the applicability of the BCL methodology to systems in operations and maintenance. The BCL does not address how the full range of cost, schedule, and benefit data is to be used by the IRBs when making their program certification decisions.
Without documenting how such boards are to consider cost, schedule, and benefits factors when making these decisions, the department cannot ensure that the boards consistently and objectively select proposals that best meet the department's needs and priorities. The BCL does not provide for DOD-level oversight and visibility into component-level investment management activities, including component reviews of systems in operations and maintenance and smaller investments, commonly referred to as tier 4 investments. This is particularly important because, as DOD reports, only 353 of about 3,000 total business systems have completed the IRB certification process and have been approved by the DBSMC. This means that the vast majority of business systems have not come before the IRBs and DBSMC, and thus are reviewed and approved only within the component organizations. Without policies and procedures defining how the DBSMC and IRBs have visibility into and oversight of all business system investments, DOD risks components continuing to invest in systems that will fall short of expectations. The BCL does not provide for portfolio-based business system investment management. Without defining how projects are to be managed as part of portfolios of related investments, the department will not be able to take advantage of the synergistic benefits to be found among the entire collection of investments, rather than just from the sum of individual investments. Further, adequately documenting the policies and procedures that provide for predictable, repeatable, and reliable investment selection and control helps an organization reduce the risk of investment failure and provides the basis for rigor, discipline, and repeatability in how investments are selected and controlled across the entire organization. According to the department, as it implements both the CMO provisions of the National Defense Authorization Act for Fiscal Year 2008 and capability portfolio management, the IRB/DBSMC investment management approach is expected to become more portfolio-oriented. In finalizing the BCL, it will be important for DOD to address these gaps in its draft methodology. If it does not, the department will continue to risk selecting and controlling its business system investments in an inconsistent, incomplete, and ad hoc manner, which in turn will reduce the chances that these investments will optimally support mission needs in the most cost-effective manner. The act specifies two basic requirements that took effect October 1, 2005, relative to DOD's use of funds for business system modernizations that involve more than $1 million in obligations in any given fiscal year. First, it requires that these modernizations be certified by a designated approval authority as meeting specific criteria. Second, it requires that the DBSMC approve each of these certifications. The act also states that failure to do so before the obligation of funds for any such modernization constitutes a violation of the Anti-deficiency Act. As we have previously reported, the department has established an approach to meeting the act's requirements that reflects its philosophy of "tiered accountability." Under its approach, investment review begins within the military departments and defense agencies and advances through a hierarchy of review and decision-making authorities, depending on the size, nature, and significance of the investment.
For those investments that meet the act's dollar thresholds, this sequence of review and decision making includes component precertification, IRB certification, and DBSMC approval. For those investments that do not, investment decision-making authority remains with the component. This review and decision-making approach has two types of reviews for business systems: certification/approval reviews and annual reviews.

Certification/approval reviews. Certification/approval reviews apply to new modernization projects with total costs over $1 million. These reviews focus on program alignment with the BEA and must be completed before components obligate funds for programs. Tiers 1, 2, and 3 investments in development and modernization are certified at three levels—components precertify, the IRBs certify, and the DBSMC approves. At the component level, program managers prepare, enter, maintain, and update information about their investments in their respective data repositories. Examples of this information are regulatory compliance reporting, architectural profile, and requirements for investment certification and annual reviews. According to the process, the component precertification authority is to validate that the system information is complete and accessible on the repository, review system compliance with the BEA, and verify the economic viability analysis. This information is then transferred to DOD's IT Portfolio Repository. The precertification authority asserts the status and validity of the investment information by submitting a component precertification letter to the appropriate IRB for its review. At the corporate level, the IRB reviews the precertification letter and related material and, if it decides to certify the investment, prepares a certification memorandum for the designated certification authority's signature that documents the IRB's decisions and any related conditions. The memorandum is forwarded to the DBSMC, which either approves or disapproves the IRB's decisions and issues a memorandum containing its decisions. If the DBSMC disapproves a system investment, it is up to the component precertification authority to decide whether to resubmit the investment after it has resolved the relevant issues.

Annual reviews. The annual reviews apply to all business system investments and are intended to determine whether the investment is meeting its milestones and addressing its IRB certification conditions. Tiers 1, 2, 3, and 4 business system investments are annually reviewed at two levels—the component and the IRBs. At the component level, program managers update information on all tiers of system investments that are identified in their component's data repository. For tiers 1 through 3 systems that are in development or being modernized, information is updated on cost, milestones, and risk variances and actions or issues related to certification conditions. The component precertification authority then verifies and submits the information for these business system investments to the IRB in an annual letter. The letter addresses system compliance with the BEA and ETP and includes investment cost, schedule, and performance information. IRBs annually review tiers 1, 2, and 3 business system development or modernization investments. These reviews focus on program compliance with the BEA, program cost and performance milestones, and progress in meeting certification conditions.
IRBs can advise the DBSMC to revoke a certification when the investment has significantly failed to achieve performance commitments (i.e., capabilities and costs). When this occurs, the component must address the IRB's concerns and resubmit the investment for certification. Since October 1, 2005 (the effective date of the relevant provision of the act), DOD has continued to certify and approve investments with annual obligations in excess of $1 million. For example, as of March 2007, DOD reported that the DBSMC had approved 285 system investments that had been previously certified by the IRBs. By September 30, 2007, DOD reported that the DBSMC had approved an additional 29 IRB-certified system investments, for a total of 314 approved systems. According to DOD, all 314 systems were certified and approved as meeting the first condition in the act—being in compliance with the BEA—and the 314 systems represent all of the modernization programs meeting the act's threshold through fiscal year 2007. Collectively, these 314 involved $7.9 billion in modernization funding. About 60 percent (187) of the 314 were reviewed and precertified within the military departments. More specifically, 69 were precertified within the Army, 58 within the Navy, and 60 within the Air Force. The remaining 127 were reviewed and precertified within 1 of 15 defense agencies, including 26 in the Military Health Service, 24 within the Defense Logistics Agency, and 20 in the BTA. Since September 30, 2007, the IRBs have certified and the DBSMC has approved 39 additional system modernization investments. Moreover, available information from the military departments shows that 35 additional investments have been precertified. Specifically, the Air Force, Navy, and Army report that 14, 19, and 2 investments, respectively, have been precertified. In addition, both the Air Force and Navy reported that they have reviewed and approved investments that are below the act's thresholds, and thus do not require IRB certification or DBSMC approval. Specifically, the Air Force reports 46 of these systems have been reviewed and approved, while the Navy reports 4 additional systems reviewed and approved. We have yet to receive comparable information from the Army. The basis for DOD's continuing efforts to certify and approve business systems modernization investments as being compliant with the BEA is essentially each individual program's assertion of compliance. These assertions in turn are largely based on DOD BEA compliance assessment guidance. At the request of the Senate Armed Services Committee, we have ongoing reviews of several major business systems investments that include determining the extent to which these investments have demonstrated compliance with the BEA.
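Stripped to its essentials, the review path for any given investment turns on the act's $1 million threshold: modernizations obligating more than $1 million in a fiscal year require component precertification, IRB certification, and DBSMC approval before funds are obligated, while smaller investments are reviewed and approved within the component. The sketch below expresses that decision rule in simplified form; it omits tier distinctions and the other conditions in DOD's guidance and is illustrative only, not a restatement of that guidance.

```python
"""Illustrative decision rule for the act's certification threshold.
Simplified sketch only; not a restatement of DOD guidance."""

CERTIFICATION_THRESHOLD = 1_000_000   # modernization obligations in a fiscal year, in dollars


def reviews_before_obligating_funds(modernization_obligations: int) -> list[str]:
    """Return the certification steps required before modernization funds are obligated.
    All business system investments also remain subject to annual review,
    regardless of whether they cross the threshold."""
    if modernization_obligations > CERTIFICATION_THRESHOLD:
        return ["component precertification", "IRB certification", "DBSMC approval"]
    return ["component-level review and approval"]


# Hypothetical amounts for illustration
for amount in (750_000, 4_500_000):
    print(f"${amount:,}: " + ", ".join(reviews_before_obligating_funds(amount)))
```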
In addition, neither DOD's corporate approach nor the military departments' approaches to business systems investment management have all the requisite structures and defined policies and procedures in place to be considered effective investment selection, control, and evaluation mechanisms. These architecture and investment management limitations continue to put billions of dollars spent each year on thousands of business system investments at risk. Development of a well-defined federated architecture and accompanying transition plans for the business mission area, along with institutionalization of effective business system investment management policies and procedures across all levels of the department, are critically important to addressing the business system modernization high-risk area. Equally important, if not more so, is for the department to actually implement the architecture and investment management controls on each and every business system investment. While not a guarantee, having an architecture-centric approach to investment management, combined with following the other key system acquisition disciplines that are reflected in our existing recommendations to the department, can be viewed as a recipe for the business systems modernization program's removal from our high-risk list. Related to implementing our existing recommendations is the department's need to keep congressional defense committees fully informed about its progress in federating the DOD corporate BEA, to include the maturity of component organization architecture efforts and the related transition plan(s). In its most recent annual report to congressional defense committees pursuant to the National Defense Authorization Act for Fiscal Year 2005, the department missed an opportunity to do this by not including the results of its IV&V contractor's assessments of the completeness, consistency, understandability, and usability of the federated family of business mission area architectures, including associated transition plans, as we previously recommended. Because we have existing recommendations to the Secretary of Defense that address the issues raised in this report and that the department has yet to fully implement, we are not making additional recommendations at this time. In comments on a draft of this report, signed by the Deputy Under Secretary of Defense (Business Transformation), the department stated that it appreciated our support in advancing its business transformation efforts. It also provided several technical comments that we have incorporated throughout the report, as appropriate. We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; and the Secretary of Defense. Copies of this report will be made available to other interested parties upon request. This report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-3439 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. As agreed with defense congressional committees, our objective was to assess the actions by the Department of Defense (DOD) to comply with the requirements of section 2222 of Title 10, U.S. Code.
To address this, we focused on five of the six requirements in section 2222, and related best practices contained in federal guidance, that we identified in our last annual report under the act as not being fully satisfied. Generally, these five requirements are (1) development of a business enterprise architecture (BEA), (2) development of a transition plan for implementing the BEA, (3) inclusion of business systems information in DOD’s budget submission, (4) establishment of business systems investment review processes and structures, and (5) approval of defense business systems investments with obligations in excess of $1 million. (See the background section of this report for additional information on the act’s requirements.) We did not include the sixth requirement because our 2006 annual report under the act shows that it had been satisfied. Our methodology relative to each of the five requirements is as follows: To determine whether the BEA addressed the requirements specified in the act, and related guidance, we analyzed version 5.0 of the BEA, which was released on March 14, 2008, relative to the act’s specific architectural requirements and related guidance that our last annual report under the act identified as not being met. We also reviewed version 5.0 to confirm whether statements made in DOD’s March 15, 2008, annual report about the BEA’s content were accurate. In addition, we reviewed DOD’s Business Mission Area Federation Strategy and Road Map Version 2.0 released in January 2008, comparing the strategy and any associated implementation plans with prior findings and recommendations relative to the content of the strategy. Further, we reviewed the Business Transformation Agency’s report of selected independent verification and validation (IV&V) contractor observations and recommendations relative to the Version 5.0’s ability to provide a foundation for BEA federation, and compared this to our prior finding and recommendation relative to the content of an IV&V review of the BEA. Finally, we reviewed and leveraged the applicable results contained in our recent reports on the military departments’ enterprise architecture programs, on the Air Force and Navy’s investment management processes, and our recent testimony on DOD’s Business Transformation. To determine whether the enterprise transition plan (ETP) addressed the requirements specified in the act, we reviewed the updated version of the ETP, which was released on March 15, 2008, relative to the act’s specific transition plan requirements and related guidance that our last annual report under the act identified as not being met. We also reviewed the ETP to confirm that statements in DOD’s March 15, 2008, annual report about the content of the ETP were accurate. To determine whether DOD’s fiscal year 2009 information technology budget submission was prepared in accordance with the criteria set forth in the act, we reviewed and analyzed the department report entitled “Report on Defense Business System Modernization FY 2005 National Defense Authorization Act, Section 332,” dated February 2008 and compared it to the specific requirements in the act. To determine whether DOD has established investment review structures and processes, we focused on the act’s requirements that our last annual report under the act identified as not being met, obtaining documentation and interviewing cognizant DOD officials about efforts to establish the one IRB specified in the act that we previously reported had yet to be established. 
We also reviewed and leveraged our recent reports that assessed the department’s, Air Force’s, and Navy’s approaches to managing business system investments. To determine whether the department was reviewing and approving business system investments exceeding $1 million, we reviewed DOD’s list of business system investments certified by the Investment Review Boards (IRB) and approved by the Defense Business Systems Management Committee (DBSMC). We then compared the detailed information provided with the summary information contained in the department’s March 15, 2008, report to the congressional defense committees to identify any anomalies. We also obtained documentation from the Air Force and the Navy to ascertain the specific actions that were taken (or planned to be taken) in order to perform the annual systems reviews as required pursuant to the act. We requested similar information from representatives of the Army, but did not receive it in time to include in this report. We did not independently validate the reliability of the cost and budget figures provided by DOD because the specific amounts were not relevant to our findings. We conducted this performance audit at DOD headquarters in Arlington, Virginia, from March 2008 to May 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact person named above, key contributors to this report were Elena Epps, Michael Holland, Tonia Johnson (Assistant Director), Neelaxi Lakhmani, Rebecca LaPaze, Anh Le, and Freda Paintsil. | In 1995, GAO first designated the Department of Defense's (DOD) business systems modernization program as "high risk," and GAO continues to do so today. To assist in addressing this high-risk area, the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 contains provisions that are consistent with prior GAO investment management and enterprise architecture-related recommendations, and requires the department to submit annual reports to its congressional committees on its compliance with these provisions. The act also directs GAO to review each annual report. In response, GAO assessed the actions taken by DOD to comply with requirements of the act. To do so, GAO leveraged its recent reports on various aspects of the department's modernization management controls, and it reviewed, for example, the latest version of its business enterprise architecture and the associated transition plan and architecture federation strategy. GAO also interviewed key officials. As part of DOD's continuing efforts to strengthen management of its business systems modernization program, it has taken steps over the last year to build on past efforts and further comply with the National Defense Authorization Act's requirements and related federal guidance. Notwithstanding this progress, aspects of these requirements and relevant guidance have yet to be fully satisfied. In particular, the military departments, under DOD's "federated" and "tiered" approach to establishing institutional modernization management controls, have lagged well behind DOD's corporate efforts, and the corporate efforts are still not yet where they need to be. 
For example, the latest version of DOD's corporate business enterprise architecture continues to add content needed to improve its completeness, consistency, understandability, and usability. Moreover, its latest architecture federation strategy is more detailed and explicit than the prior version. However, the corporate architecture is still missing important content, such as business rules for, and information flows among, certain business activities. Moreover, the architecture has yet to be federated. Specifically, the military departments, which are the largest members of the federation, do not yet have mature enterprise architecture programs, and the federation strategy aimed at accomplishing this is still evolving. GAO has existing recommendations to address these and other architecture issues. The updated enterprise transition plan, which provides a temporal investment roadmap for transitioning from the current architectural environment to the target environment, continues to identify systems and initiatives that are to fill business capability gaps and address the DOD-wide and component business priorities that are contained in the business enterprise architecture. However, the plan still does not include investments for all components and does not reflect key factors associated with properly sequencing planned investments, such as dependencies among investments and the capability to execute the plan. Furthermore, the military departments, which are the largest members of the business federation, have yet to fully develop their own architecturally-based transition plans. GAO has existing recommendations to address these and other transition plan issues. DOD and the military departments have yet to fully establish key investment review structures and have yet to define related policies and procedures for effectively performing both project-level and portfolio-based investment management. GAO has existing recommendations to address these and other investment issues. Until DOD fully implements GAO's existing recommendations relative to the act and related guidance, its business systems modernization will likely remain a high-risk program. |
DOD’s primary medical mission is to maintain the health of 1.6 million active duty service personnel and to provide health care during military operations. Also, DOD offers health care to 6.6 million non-active duty beneficiaries, including dependents of active duty personnel, military retirees, and dependents of retirees. Most care is provided in about 115 hospitals and 470 clinics worldwide—collectively referred to as military treatment facilities (MTF)—operated by the Army, Navy, and Air Force. The DOD direct care system is supplemented by care that is mostly paid for by DOD but is provided by civilian physicians under the CHAMPUS program. DOD is currently transitioning to a nationwide managed care program called TRICARE, under which the CHAMPUS program is now offered as one of three health care options called TRICARE Standard. In response to the rapid escalation of CHAMPUS costs in the 1980s, the Congress urged DOD, beginning with the Appropriations Act for Fiscal Year 1991 (P.L. 101-511), that physician payments under CHAMPUS be gradually brought in line with payments under Medicare, with reductions not to exceed 15 percent in a given year. Starting with the DOD Appropriations Act for Fiscal Year 1993 (P.L. 102-396), the Congress also enacted provisions (1) directing DOD, through regulations, to limit beneficiaries’ out-of-pocket costs through balance billing limits and (2) authorizing waivers to “freeze” CMAC rates at current levels if DOD determines that further rate reductions would impair beneficiaries’ adequate access to health care. DOD set balance billing limits for nonparticipating physicians at 115 percent of CMAC, which is the same limitation used for the Medicare program. By basing physician reimbursement on the Medicare fee schedule, DOD estimates that beneficiaries will save about $155 million in out-of-pocket costs in fiscal year 1998. To further contain rising health care costs, the Congress directed DOD in the National Defense Authorization Act for Fiscal Year 1994 (P.L. 103-160) to prescribe and implement a nationwide managed health care benefit program modeled on HMOs. Drawing from its experience with demonstrations of alternative health care delivery approaches, DOD designed TRICARE. As a triple-option benefit program, TRICARE is designed to give beneficiaries a choice among an HMO, a preferred provider organization (PPO), and a fee-for-service benefit. The HMO option, called TRICARE Prime, is the only option for which beneficiaries must enroll. TRICARE Extra is the PPO option, and TRICARE Standard is the fee-for-service option, which remains identical in structure to the previous CHAMPUS program. Regional MCSCs help administer the TRICARE program. The MCSCs’ many responsibilities include claims processing, customer service, and developing and maintaining an adequate network of civilian physicians. CMAC rates serve as the maximum level of reimbursement under each of TRICARE’s three options. To treat military beneficiaries under the Prime and Extra options, civilian physicians must join a network through the MCSC. The MCSC individually contracts with physicians or physician groups at a negotiated reimbursement rate, which is usually discounted from the CMAC rate. Network physicians are reimbursed at their negotiated rate regardless of whether they are providing care to enrollees under Prime or nonenrollees under the Extra option. Network physicians must accept their negotiated rate as payment in full. 
Physicians who do not join the network may still provide care to military beneficiaries under TRICARE Standard, for which they are reimbursed up to the full CMAC rate. Under this option, physicians may choose, on a case-by-case basis, whether to participate on a claim, that is, accept the CMAC rate as payment in full, less any applicable copayment. By law, physicians who decide not to participate on a particular claim under TRICARE Standard will receive the full CMAC rate and can balance bill the beneficiary for up to an additional 15 percent above that rate. The methodology DOD uses to set and transition CMAC rates to the Medicare level of payment complies with statutory requirements established under section 1079(h) of title 10 U.S.C. and generally conforms with accepted actuarial practice. Since 1991, DOD has annually adjusted and set CMAC rates on the basis of the Medicare fee schedule, which will result in a savings of about $770 million in fiscal year 1998. The methodology used to adjust CMAC rates is described in appendix II. As of March 1997, the most recent available CMAC rate update, approximately 80 percent of the national CMAC rates were at the same level as Medicare and about 20 percent were higher than Medicare because the transition for these rates is not yet complete. Only the rates for 61 of about 7,000 procedures—less than 1 percent—were below the Medicare level of payment. DOD has proposed a new rule (62 Fed. Reg. 61058 (1997)) to increase the payment amounts for the 61 procedures to the Medicare fee schedule amounts. The proposed rule is expected to be finalized in March 1998 after comments are received and analyzed. See appendix III for a list of these procedures. While CMAC rates are initially set at the national level, adjustments are made for each procedure code for 225 different localities within the United States. The locality-adjusted CMAC rates are the rates actually used to reimburse physicians. We found that the selected high-volume CMAC rates at each of the four locations were generally consistent with Medicare rates. DOD began using CMAC rates to reimburse civilian physicians on May 1, 1992. During the initial CMAC transition process to the Medicare level of payment, some physicians expressed concern about the low level of payment for certain obstetric and pediatric procedures, but payment levels for these procedures have since been addressed by DOD and HHS’ Health Care Financing Administration (HCFA). Current physician complaints about the CMAC level of reimbursement are primarily directed at the discounted CMAC rates paid to TRICARE network physicians under the Prime and Extra options rather than the full CMAC rate used to reimburse nonnetwork physicians under the Standard option. DOD reports that the vast majority of physicians who accept military beneficiaries as patients under the Standard option agree to accept the CMAC rate as payment in full for their services and do not balance bill for additional payment. During the transition of CMAC rates, physicians initially complained about the CMAC reimbursement levels for obstetric and pediatric procedures. In response to complaints about obstetric rates, HCFA reexamined and adjusted the Medicare fee schedule’s obstetric cost components and increased the reimbursement rates for some obstetrical delivery procedures. DOD, in turn, made corresponding adjustments to obstetric fees during its yearly CMAC revision. DOD did not, however, adjust pediatric rates. 
Physicians argued that CMAC rates for pediatric procedures should not be set at the same levels as rates for services provided to adults because physician costs for caring for children are higher. To determine the validity of this concern, DOD commissioned a study, which concluded that only 12.3 percent of all payments would be for services for which there is a higher cost for children, and 56.2 percent of all payments would be for services for which there is a lower cost for children. The study found that 31.5 percent of payments were the same for children and adults. Consequently, DOD concluded that no payment differential was needed. According to the actuaries, DOD’s decision conforms with common insurance industry practice. Because most CMAC rates are equivalent to Medicare rates, the discounted CMAC rates that TRICARE network physicians agree to accept are typically below the Medicare level of payment. The American Medical Association and some medical society members we interviewed told us that they considered the discounted CMAC rates network physicians were being asked to accept by the MCSCs to be too low, but that the full CMAC rate paid under Standard, though not desirable, is acceptable. Because of this, some physicians told us that they would not join the TRICARE network but would continue to see military beneficiaries under the Standard option. In the four locations, we found that the differences in the discounted CMAC rates physicians are willing to accept depend largely upon local health care market conditions such as the degree of HMO penetration as well as the dependence of the local physicians on the military beneficiary population. Physicians whose practices include a large percentage of military beneficiaries are more likely to join the network and accept the discounted rates offered by the MCSCs to maintain their patient base. For example, in Ozark, Alabama, one of the two rural, low-HMO-penetration locations we selected, the median discount rate physicians were willing to accept to maintain their patient base was 10 percent. In Abilene, Texas, the other rural, low-HMO-penetration location we visited, most physicians said that they did not need to join the network to maintain their patient base, and consequently, many of those who did agree to join did so only on the condition that their fee would not be discounted. In each of these locations, DOD and MCSC officials told us that the local physicians also tended to be unfamiliar with and averse to managed care. In contrast, however, network physicians in the two urban, high-HMO-penetration locations—San Diego, California, and Jacksonville, Florida—accepted higher median discounts of 15 and 20 percent, respectively. According to the actuaries, in areas with significant competition among managed care plans such as the states of Florida, California, Minnesota, and Massachusetts, physician reimbursement is approaching the Medicare level of payment, and in some of these areas, typical reimbursement is based on 80 percent of the Medicare level of payment. Likewise, a study conducted by Milliman and Robertson concluded that HMO reimbursement rates are approaching those of the Medicare fee schedule in many states. For example, an analysis of HMO payments as a percentage of Medicare showed that HMOs in California pay at 105 percent of Medicare and those in Florida pay at 95 percent of Medicare, on average. 
DOD reported in April 1997 a physician participation rate of 86 percent for the TRICARE Standard option, based on an analysis of claims submitted from July 1995 through June 1996. This participation rate means that the vast majority of physicians accepted the allowed charges as payment in full and did not balance bill beneficiaries for services rendered. As a safeguard to ensure participation, DOD also monitors participation for individual procedures for each locality during the yearly CMAC update process. If participation on claims falls below 60 percent for a particular procedure for which there are at least 50 claims, DOD uses a waiver to automatically “freeze” the rate for that procedure at the current level with no downward adjustments for that year. During 1997, 167 automatic waivers for physician payments were in effect, which represents less than 1 percent of the approximately 1.6 million locality-specific CMAC rates. Waivers can also be requested through written petitions. To date, DOD has received about 20 waiver petitions but has approved only 1 on the basis of the information provided. Our discussions with physician groups, physicians, and physician office staff revealed considerable concern with several other aspects of TRICARE administration—all of which negatively affected their opinion of the program. The administrative concerns range from slow claims payment to unreliable customer telephone service. And while these concerns resulted in some physicians dropping out of the network or not joining, these physicians told us that they continue to treat military beneficiaries as nonnetwork physicians under the Standard option. DOD and MCSC officials acknowledged these complaints and told us they are in the process of addressing them. Consequently, the success of these efforts will not be known for some time. Slow reimbursement was a common physician complaint about TRICARE and, when combined with discounted payment levels, has resulted in some physicians dropping out of the TRICARE network and others choosing not to join. During the start-up phase of health care delivery, the MCSCs for the four selected locations experienced to varying degrees some problems regionwide in meeting their contractual timeliness requirement that 75 percent of claims be processed within 21 days, primarily because of higher-than-expected claims volume. To begin meeting claims processing timeliness standards, the MCSC for Abilene, Texas, told us it closed its understaffed claims processing center for DOD’s Southwest region and subcontracted with a company that specializes in claims processing to clear a backlog of about 200,000 claims. In addition, it sent a team of claims adjudicators to Abilene to resolve physicians’ individual claims. The MCSC’s claims processing center for the regions encompassing Ozark, Alabama, and Jacksonville, Florida, hired an additional 200 staff to adjudicate the larger-than-expected workload. The MCSC responsible for these regions also told us that it has teams of claims processors that can be sent to specific locations when needed. Although the MCSCs for the four locations reported to DOD that they are now meeting the contractual claims processing requirements, physicians in all four locations still complained to us about slow and cumbersome reimbursement. Physicians and their office staffs told us they spend considerable time refiling and appealing TRICARE claims as a result of denials and partial payments. 
Physicians and their office staffs also complained that there seemed to be no distinct or specific TRICARE requirements on how a treatment should be coded on a claim to receive payment. DOD and MCSC officials responded that although the MCSCs use national Current Procedural Terminology coding standards, some of the coding confusion is due to the use of Claim Check, a software program that DOD requires all MCSCs to use for claims review. Claim Check performs an initial claim review and edits the procedure codes to eliminate nonreimbursable and duplicate procedures to prevent overpayment. According to DOD, all Claim Check determinations are considered final and, as such, are not appealable. These edits may result in the denial or recoding of submitted procedure codes, which may cause physicians to receive lower-than-expected payments. DOD and MCSC officials also said that payments are delayed for other reasons, such as the lack of preauthorization for treatment. To help remedy this, MCSC officials told us that they are conducting educational seminars on proper claims submission techniques for physicians and their office staffs. Also contributing to physicians’ discouragement with the TRICARE program is that they are not routinely provided with fee schedules, and as a result, they do not always know what they should be paid. MCSC officials responded that physicians can request fee information up front for their high-use procedures and that CMAC rates are available on the Internet. They also told us that physicians may request fee information for specific procedures through a toll-free customer service telephone line. In addition, fee information can be purchased from the federal government in hard copy for $75 or as an electronic file for $152. These sources contain over 1.6 million CMAC rates—representing approximately 7,000 procedure codes for each of the 225 localities. However, some physician offices may be unwilling to pay these prices for information they believe should be provided by the MCSC or DOD—especially since physicians would only be interested in the rates for their specific locality. Furthermore, we were told by physician office staff that not every physician’s office has access to the Internet and that repeatedly requesting specific fees by telephone is time-consuming. Physicians complained that other administrative problems, such as slow preauthorizations for care and unreliable customer service telephone lines, have also resulted in increased paperwork and staff time, which is not cost-effective. Physicians at each of the locations we examined cited the slow and paperwork-intensive preauthorization process, which is used to approve certain types of care for reimbursement. Some of the physicians told us they have had to delay treatment to obtain preauthorization, and some said that they went ahead and treated patients who, in their opinion, needed immediate attention, thereby running the risk of not being reimbursed for their services. DOD and MCSC officials responded that the preauthorization process takes time because it is a two-level review. The local MTF must review the request to determine whether the care could be provided within that facility, and then MCSC officials must perform a medical necessity review. MCSC officials also stated that incomplete information could require resubmission and thus a delayed determination. Recognizing physicians’ concerns, MCSC officials are working on ways to streamline and improve the preauthorization process. 
For example, in Ozark, Alabama, local MCSC personnel rerouted preauthorization requests to first obtain the medical necessity decision, thus giving the MTF staff information necessary to make a faster determination as to whether the care could be provided at the MTF. And in Abilene, Texas, a team of military and MCSC officials evaluated the preauthorization process. Their review resulted in the retraining of civilian network physicians and their staffs on a case-by-case basis to ensure complete initial submissions of patient identification and clinical data. Some physicians also complained that their office staffs spent inordinate amounts of time trying to get through to customer service on the telephone, and once connected, they had a long wait for a representative. In one location, some office staff told us that they called the customer service line repeatedly over a 2-day period trying to get through to a representative. Other office staff told us that they typically stayed on hold 30 to 45 minutes for a representative after being connected. The MCSCs told us they are trying various approaches to address these problems. For example, the MCSC for Abilene, Texas, responded that the telephone system at the TRICARE Service Center had been improved by adding more telephone lines, modifying the automated telephone menu, and streamlining the rerouting process. The MCSC in San Diego, California, installed an additional toll-free telephone line dedicated solely for physician use, and the MCSC for Ozark, Alabama, and Jacksonville, Florida, more than doubled the staff at its central telephone center. On the basis of congressional direction, DOD limited beneficiaries’ out-of-pocket costs by setting balance billing limits for nonparticipating physicians at 115 percent of the CMAC rate, which is the same limitation used for the Medicare program. This provision became effective for all care provided on and after November 1, 1993. An infraction of this requirement may result in a physician losing his or her status as a TRICARE authorized provider. DOD has proposed a new rule (62 Fed. Reg. 61058 (1997)) that noncompliant physicians also be excluded from other federal health care and benefit programs such as Medicare and Medicaid. According to a recent DOD analysis of claims submitted under the TRICARE Standard option, physicians who did not participate balance billed for 14 percent of claims filed during the period of July 1, 1995, through June 30, 1996. For these nonparticipating claims, beneficiaries saved approximately $78.6 million as a result of balance billing limits. DOD and MCSC officials told us they were aware of only a very small number of balance billing infractions—all of which were easily resolved. However, MCSC officials told us that after adjudicating the claim and paying the physician, they did not receive notice of any bill the physician may have subsequently sent to the beneficiary. Consequently, the MCSCs do not know whether physicians are balance billing beneficiaries in excess of the 115 percent limit unless beneficiaries complain. While the MCSCs have attempted to educate beneficiaries about balance billing limits through briefings and written materials such as benefit booklets, the explanation of benefits statement, which contains information on claim adjudication, does not contain information on the balance billing limits for TRICARE Standard claims submitted by nonparticipating physicians. 
Including this information on the explanation of benefits statements for both beneficiaries and physicians, as Medicare does, would educate both parties about the amount that can be balance billed. For the few cases in which beneficiaries notified DOD and the MCSCs that physician charges exceeded the balance billing limits, DOD and the MCSCs reported that these excess charges were due to either billing mistakes or ignorance of procedures rather than deliberate intent. Each of the MCSCs has procedures in place on how to resolve excessive balance billing through a series of notifications to the physician and the beneficiary. To date, all of the identified infractions have been easily resolved, and, according to DOD officials, no physicians have been sanctioned under TRICARE for excessive balance billing practices. By lowering CMAC rates to levels comparable to rates paid under the Medicare program, DOD will save nearly three-quarters of a billion dollars in fiscal year 1998 in health care expenditures. Throughout the nearly complete transition process, DOD has appropriately set and adjusted CMAC rates in compliance with statutory requirements using a methodology that also generally complies with accepted actuarial practice. Although physicians complained about the level of reimbursement under TRICARE, their complaints are focused on the discounted rates paid to network physicians under TRICARE Prime and Extra—rates that are typically lower than Medicare. However, it is the combination of low payments and administrative impediments associated with untimely payments and slow authorizations for treatment that has negatively affected many physicians’ opinions of the TRICARE program. Furthermore, when physicians are reimbursed, they do not always know how much to expect or whether they are being paid correctly because written or published fee schedules are not routinely furnished by the MCSCs. While most of the physicians we spoke with continue to treat military beneficiaries, addressing physicians’ concerns is crucial to the development and maintenance of TRICARE networks. Because of administrative and cost issues, physicians are becoming disillusioned with the program. Although DOD and MCSCs are addressing these problems, if they are not resolved, DOD could face increasing difficulty in the future attracting the number of physicians necessary to ensure that beneficiaries have adequate access to care. While balance billing limits under the Standard option are intended to protect beneficiaries from excessive out-of-pocket costs, DOD, MCSCs, and beneficiaries do not always know when physicians charge above the 115 percent limit. Although the MCSCs attempt to educate beneficiaries on balance billing limits, this information could be easily communicated by following Medicare’s practice of including balance billing information on explanation of benefits statements sent to both the beneficiaries and physicians. To improve the administration of the TRICARE program, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to (1) require MCSCs to provide physicians with written or published locality-specific fee schedules after each yearly CMAC update, to help eliminate confusion about CMAC reimbursement rate amounts, and (2) require MCSCs to notify beneficiaries and physicians of balance billing limits on the explanation of benefits statements for all TRICARE Standard claims submitted by nonparticipating physicians. 
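To put the 115 percent limit discussed above in concrete terms, the following Python sketch computes, for a given CMAC rate, the most a nonparticipating physician may charge and flags charges that exceed the limit. This is a minimal illustration only; the function names and the notion of an automated check are our assumptions for explanatory purposes and do not describe any actual DOD or MCSC system.

def balance_billing_limit(cmac_rate):
    # Nonparticipating physicians under TRICARE Standard may charge no more
    # than 115 percent of the CMAC rate, the same limit used by Medicare.
    return round(1.15 * cmac_rate, 2)

def check_nonparticipating_charge(cmac_rate, billed_charge):
    # Summarizes the allowed amount, the billing limit, the most a
    # beneficiary can be balance billed, and whether the charge is excessive.
    limit = balance_billing_limit(cmac_rate)
    return {
        "cmac_rate": cmac_rate,
        "billing_limit": limit,
        "maximum_balance_bill": round(limit - cmac_rate, 2),
        "exceeds_limit": billed_charge > limit,
        "excess_amount": round(max(0.0, billed_charge - limit), 2),
    }

# Example: with a CMAC rate of $200, the limit is $230, so a $250 charge
# exceeds the limit by $20.
print(check_nonparticipating_charge(200.00, 250.00))

Printing the computed limit on explanation of benefits statements, as recommended above, would give beneficiaries and physicians the same figure against which to compare a nonparticipating physician's charge.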
In commenting on a draft of our report, the Deputy Assistant Secretary of Defense (Health Services Financing) concurred with our findings and stated that the draft fairly and thoroughly addresses a complex set of issues related to the reimbursement of physicians. In response to our first recommendation, DOD agreed to seek additional, cost-effective methods to ensure that all physicians have access to accurate, timely information about CMAC rates. In response to our second recommendation, DOD agreed to develop balance billing information statements for inclusion on the explanation of benefits forms. We incorporated several technical revisions as suggested by DOD. DOD’s comments are presented in their entirety in appendix IV. As agreed with your offices, we are sending copies of this report to the Secretary of Defense and will make copies available to others upon request. Please contact me on (202) 512-7101 or Michael T. Blair, Jr., Assistant Director, on (404) 679-1944 if you or your staff have any questions concerning this report. Other major contributors to this report include Cynthia M. Fagnoni, Associate Director; Bonnie W. Anderson, Evaluator-in-Charge; Jonathan Ratner, Senior Health Economist; and Dayna K. Shah, Assistant General Counsel. To evaluate the compliance of DOD’s rate-setting methodology with statutory requirements, we obtained assistance from an actuarial consulting firm. It reviewed documentation of the methodology used in developing CHAMPUS maximum allowable charges (CMAC) along with the requirements of section 1079(h) of title 10, U.S.C. In addition to assessing compliance, the actuary made a determination of whether DOD’s methodology is generally consistent with accepted actuarial practice and reviewed and provided observations on DOD’s approach for setting pediatric rates. We reviewed and discussed with DOD officials the changes made to obstetric procedure fees by the Health Care Financing Administration (HCFA). We obtained information from DOD officials regarding the status of the CMAC transition process. To determine whether reimbursement levels differed between CMAC and Medicare rates, we compared a number of high-volume procedures for the following four selected locations: (1) Abilene, Texas; (2) Jacksonville, Florida; (3) Ozark, Alabama; and (4) San Diego, California. In addition, we obtained the discounted rates paid to network physicians in each of the four locations. We used specific criteria to select the locations to ensure that they were representative of the various health care markets where military beneficiaries reside, within regions with the most extensive TRICARE experience. Our selection criteria included the level of HMO penetration, whether the area was rural or urban, the military facility branch of service, and the size and mix of the beneficiary population. These four locations also served as the focus for our evaluation of physician complaints and balance billing enforcement. We selected the high-volume procedures on the basis of an analysis of claim data for each location for the time period of July 1995 through June 1996. For each location, we used the top five high-use specialties in addition to obstetrics and pediatrics for a total of seven specialties. For each specialty, we then used the top procedures on the basis of the frequency, or volume, of claims received for the service. For each procedure, we calculated the CMAC rate as a percentage of Medicare. 
To determine the basis of physician complaints about CMAC rates and to identify other physician complaints about TRICARE, we spoke with members of the local medical societies for each of the four locations. To obtain an overall perspective of physician concerns, we met with officials from the American Medical Association. We also interviewed officials from the National Military Family Association and The Retired Officers Association. To determine whether and how physicians’ concerns were being addressed, we interviewed local military and MCSC officials for each of the locations as well as DOD officials at the Office of the Assistant Secretary of Defense for Health Affairs. We also reviewed DOD’s physician participation report to determine the extent to which physicians were willing to accept the CMAC rate as full payment. We discussed the report’s methodology with DOD officials along with DOD’s use of participation rates to waive rate reductions for procedures in locations where participation is low. To determine the extent and difficulty of balance billing enforcement, we interviewed the local military and MCSC officials for the four locations. We met with officials at the TRICARE Support Office to discuss the methods of enforcement and the extent of infractions. We also met with officials at HCFA to determine how they enforce Medicare’s balance billing limits. We performed our work between March 1997 and January 1998 in accordance with generally accepted government auditing standards. CMAC rates for a particular year are calculated using actual charge data submitted on DOD claims for service dates during a 12-month period starting July 1 and ending June 30. A national prevailing charge for each procedure is then calculated at the 80th percentile of these actual billed charges. For each procedure, the previous year’s national CMAC is then compared with the lesser of the current-year prevailing charge or the current-year Medicare fee schedule amount. Depending on the outcome, one of the following three scenarios applies: (1) if the current-year prevailing charge is lower than the Medicare fee schedule amount, the prevailing charge becomes the new CMAC rate; (2) if the current-year prevailing charge is above the Medicare amount, the previous year’s CMAC is cut by the lesser of 15 percent or the amount necessary to reach the Medicare amount, and the reduced amount becomes the new CMAC rate; and (3) if the previous year’s CMAC is below the Medicare amount, it is updated by the Medicare Economic Index (MEI), either in full or by the amount necessary to reach the Medicare level of payment. After CMAC rates are calculated at the national level, locality-specific adjustments are made for each procedure code. (The procedures listed in appendix III, whose CMAC rates were below the Medicare level of payment, consist primarily of preventive visits for new and established patients in various age groups, along with services such as injection for an elbow X ray and cataract removal with lens insertion.) 
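The annual update rules just described can be summarized as a short algorithm. The following Python sketch is illustrative only: the function names, the simplified percentile calculation, and the ordering used where more than one scenario could apply are our assumptions, and locality adjustments are not modeled. The waiver check reflects the automatic rate freeze described earlier in this report.

def prevailing_charge(billed_charges):
    # National prevailing charge: the 80th percentile of actual billed
    # charges for a procedure over the July 1 through June 30 data year
    # (simplified nearest-rank percentile for illustration).
    charges = sorted(billed_charges)
    return charges[int(0.8 * (len(charges) - 1))]

def update_national_cmac(prior_cmac, prevailing, medicare_fee, mei_factor):
    # Scenario 1: a prevailing charge below the Medicare fee schedule
    # amount becomes the new CMAC rate.
    if prevailing < medicare_fee:
        return prevailing
    # Scenario 2: the prevailing charge is at or above the Medicare amount
    # and the prior CMAC is above it, so the prior CMAC is cut by the lesser
    # of 15 percent or the amount needed to reach the Medicare amount.
    if prior_cmac > medicare_fee:
        return max(prior_cmac * 0.85, medicare_fee)
    # Scenario 3: the prior CMAC is at or below the Medicare amount, so it
    # is raised by the Medicare Economic Index, capped at the Medicare level.
    return min(prior_cmac * mei_factor, medicare_fee)

def rate_frozen_by_waiver(participation_rate, claim_count):
    # Automatic waiver rule: a locality-specific rate is frozen, with no
    # downward adjustment for the year, when participation falls below
    # 60 percent for a procedure with at least 50 claims.
    return claim_count >= 50 and participation_rate < 0.60

# Example: a prior CMAC of $115, a prevailing charge of $130, and a Medicare
# amount of $100 yield a new CMAC of $100 -- a cut of about 13 percent,
# within the 15 percent annual limit.
print(update_national_cmac(115.0, 130.0, 100.0, 1.02))

Actual payment amounts are the locality-adjusted versions of these national rates, a step not shown in the sketch.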
| Pursuant to a legislative requirement, GAO examined: (1) whether the Department of Defense's (DOD) methodology for setting the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) maximum allowable charge (CMAC) rates complies with statutory requirements and how current CMAC rates compare with Medicare rates for similar services; (2) the basis for physicians' concerns about CMAC rates and how these concerns affect physicians' willingness to treat military beneficiaries; (3) the basis for other concerns physicians have about TRICARE that could also affect their willingness to treat military beneficiaries; and (4) how balance billing limits are being enforced. GAO noted that: (1) the methodology used by DOD to transition CMAC rates to the Medicare level of payment complies with statutory requirements and generally conforms with accepted actuarial practice; (2) these adjustments will result in DOD saving about three-quarters of a billion dollars in fiscal year 1998 in health care expenditures; (3) as of the most recent available CMAC rate adjustment in March 1997, 80 percent of CMAC rates nationwide were at the same level as Medicare, with about 20 percent higher and less than 1 percent below the Medicare level of payment; (4) the CMAC rates at the four locations GAO selected were generally consistent with Medicare rates; (5) while physicians' initial concerns about low obstetric and pediatric rates have been addressed by DOD, current physician complaints about reimbursement levels are focused on the discounted CMAC rates paid to network physicians under DOD's TRICARE program; (6) because most CMAC rates are now equivalent to Medicare rates, the discounted CMAC rates that TRICARE network physicians agree to accept are typically below the Medicare level of payment; (7) some physicians told GAO that they considered the discounts unacceptable, and they would not join the TRICARE network but would continue to treat military beneficiaries as nonnetwork physicians; (8) the discount rates physicians were willing to accept in the four locations were largely dependent on local health care market factors such as the degree of health maintenance organization penetration and the dependence of local physicians on the military beneficiary population; (9) physicians GAO met with also expressed concerns about administrative hassles, which contributed to their frustration with the TRICARE program; (10) in many cases, physicians said that while they would be willing to accept discounted CMAC rates, the administrative impediments provided significant disincentives to joining the TRICARE network; (11) DOD and managed care support contractors (MCSC) officials acknowledged these complaints and are making efforts to address them and alleviate physicians' concerns; (12) DOD and MCSC officials told GAO that they were aware of only a very small number of balance 
billing infractions--all of which had been easily resolved; (13) while the MCSCs attempt to educate beneficiaries about balance billing limits, the explanation of benefits statement does not include information on the balance billing limits; and (14) Medicare, which has the same balance billing limit, sends notice of balance billing limitations on the statements it provides to beneficiaries and physicians. |
When the WTC buildings collapsed on September 11, 2001, an estimated 250,000 to 400,000 people were immediately exposed to a noxious mixture of dust, debris, smoke, and potentially toxic contaminants in the air and on the ground, such as pulverized concrete, fibrous glass, particulate matter, and asbestos. Those affected included people residing, working, or attending school in the vicinity of the WTC and thousands of emergency response workers. Also affected were the estimated 40,000 responders who were involved in some capacity in the days, weeks, and months that followed, including personnel from many government agencies and private organizations as well as other workers and volunteers. A wide variety of physical and mental health effects have been observed and reported among people who were involved in rescue, recovery, and cleanup operations and among those who lived and worked in the vicinity of the WTC. Physical health effects included injuries and respiratory conditions, such as sinusitis; asthma; and a new syndrome called WTC cough, which consists of persistent coughing accompanied by severe respiratory symptoms. Almost all firefighters who responded to the attack experienced respiratory effects, including WTC cough, and hundreds had to end their firefighting careers because of WTC-related respiratory illnesses. The most commonly reported mental health effects among responders and others were symptoms associated with posttraumatic stress disorder—an often debilitating disorder that can develop after a person experiences or witnesses a traumatic event, and which may not develop for months or years after the event. Behavioral effects such as alcohol and tobacco use and difficulty coping with daily responsibilities were also reported. Several federally funded programs monitor the health of people who were exposed to the WTC attack and its aftermath. The monitoring programs vary in such aspects as eligibility requirements, methods used for collecting information about people’s health, and approaches for offering referrals. Of the four programs that offer medical examinations to WTC responders, the only one that is open to federal workers who responded to the disaster in an official capacity is the one implemented by HHS. (See table 1.) None of the monitoring programs receives federal funds to provide clinical treatment for health problems that are identified. The majority of federal funding for these monitoring programs was provided by DHS’s Federal Emergency Management Agency (FEMA), as part of the approximately $8.8 billion in federal assistance that the Congress appropriated to FEMA for response and recovery activities after the WTC disaster. One appropriation in 2003 specifically authorized FEMA to use a portion of its WTC-related funding for screening and long-term monitoring of emergency services and rescue and recovery personnel. Generally, however, FEMA may fund only short-term care after a disaster, such as emergency medical services, and not ongoing clinical treatment. FEMA entered into interagency agreements with HHS to fund most of these health monitoring programs. HHS is the designated lead agency for the public health and medical support function under the National Response Plan and is responsible for coordinating the medical resources of all federal departments and agencies. HHS’s OPHEP coordinates and directs HHS’s emergency preparedness and response program. 
Three federally funded programs implemented by state and local governments or private organizations, with total federal funding of about $104 million—the FDNY WTC Medical Monitoring Program, WTC Medical Monitoring Program (worker and volunteer program), and New York State responder screening program—have made progress in monitoring the physical and mental health of people affected by the WTC attack. Federal employees who responded to the WTC disaster in an official capacity were not eligible for these programs because it was expected that another program would be developed for them. The New York State program stopped providing health screening examinations in November 2003, and in February 2004 state workers became eligible for initial or continued monitoring through the worker and volunteer program. The state program, in general, did not inform state responders that they were eligible to participate in the worker and volunteer program. Worker and volunteer program officials are working with state employee unions to inform state workers of their eligibility. All three programs and the WTC Health Registry, with total federal funding of $23 million, have collected information that could contribute to better understanding of the health consequences of the attack and improve health care for affected individuals. Officials from the FDNY, worker and volunteer, and WTC Health Registry programs are concerned that federal funding for their programs could end before sufficient monitoring occurs to identify all long-term health problems related to the WTC disaster. In January 2006, CDC received a $75 million appropriation to fund baseline health screening, long-term monitoring, and treatment for WTC responders. CDC officials are in the process of deciding how they are going to allocate these funds among programs and how long the allocated funds will be available for each program that receives funding. Three federally funded programs implemented by state and local governments or private organizations, with total funding of about $104 million, have provided medical examinations to identify physical and mental health problems related to the WTC attack. (See table 2.) Two of these programs—the FDNY WTC Medical Monitoring Program and the worker and volunteer program—are tracking the health of WTC rescue, recovery, and cleanup workers and volunteers over time. The third program, the New York State responder screening program, offered one-time screening examinations to state employees, including National Guard personnel, who participated in WTC rescue, recovery, and cleanup work. Federal employees who responded to the WTC disaster in an official capacity were not eligible for any of these programs because it was expected that another program would be developed for them. The FDNY program completed initial screening for over 15,000 firefighters and emergency medical service personnel, and the worker and volunteer program completed initial screening for over 14,000 other responders. In both programs, screenings include physical examinations, pulmonary function tests, blood and urine analysis, a chest X-ray, and questionnaires on exposures and mental health issues. Both programs have begun to conduct follow-up examinations of participants and continue to accept new enrollees who desire initial screening. Current plans are to conduct a total of three follow-up examinations for each participant by 2009. 
As part of their federally funded activities, both programs provide referrals for participants who require treatment. FDNY employees and retirees can obtain treatment and counseling services from the FDNY Bureau of Health Services and the FDNY Counseling Services Unit, or they can use their health insurance to obtain treatment and counseling services elsewhere. The worker and volunteer program also provides referrals for its participants, including referrals to programs funded by the American Red Cross and other nonprofit organizations. The New York State program provided health screenings to about 1,700 of the estimated 9,800 state workers and National Guard personnel who responded to the WTC disaster. Officials sent letters to all state responders to inform them about the program and their eligibility for it. For each participant, the screening included a health and exposure questionnaire and physical and pulmonary examinations. Participants who required further evaluation or treatment after screening were told to follow up with their personal physician or a specialist. The program stopped screening participants in November 2003, in part because the number of responders requesting examinations was dwindling, and no follow-up examinations are planned. In February 2004, worker and volunteer program officials began to allow New York State responders to participate in that monitoring program. The officials determined that the worker and volunteer program would have sufficient funding to accommodate state workers who want to join the program. The state program did not notify the 9,800 state responders, including the approximately 1,700 workers it had screened, that they were now eligible for continued monitoring from the worker and volunteer program. State program officials relayed this development only to those state responders who inquired about screening or monitoring examinations following the decision to permit state responders to participate in the worker and volunteer program. However, officials from the worker and volunteer program told us that they are working with state employee unions to inform state workers about their eligibility for the worker and volunteer program. For example, starting in November 2005, letters have been sent to union members telling them about the program and how they can enroll in it. According to worker and volunteer program officials, as of February 2006, 13 state workers who responded to the WTC disaster in an official capacity had received examinations from the worker and volunteer program, and as of mid-February 2006, 9 additional state workers had registered to obtain examinations through this program. Worker and volunteer program officials told us that any state worker who had been screened by the state program would need to receive a new baseline examination through the worker and volunteer program, because the screening data collected by the state program differ from the data collected by the worker and volunteer program. For example, the worker and volunteer program offers a breathing test not provided by the state program. In addition to providing medical examinations, these three programs—the FDNY program, the worker and volunteer program, and the New York State program—have collected information for use in scientific research to better understand the health consequences of the WTC attack and other disasters. 
A fourth program, the WTC Health Registry, includes health and exposure information obtained through interviews with participants; it is designed to track participants’ health for 20 years and to provide data on the long-term health consequences of the disaster (see table 2). Physicians who evaluate and treat WTC responders told us they expect that research on health effects from the disaster will not only help researchers understand the health consequences, but also provide information on appropriate treatment options for affected individuals. Both the FDNY program and the worker and volunteer program have been the basis for published research articles on the health of WTC responders. For example, the FDNY program reported on the injuries and illnesses experienced by firefighters and emergency medical service workers after responding to the attack. In addition, the worker and volunteer program published information on the physical and mental health of responders in 2004. Officials from both programs plan to publish additional findings as they track participants’ health over time. Although the New York State program has stopped offering examinations, program officials are continuing to analyze data from the program with plans for eventual publication. The WTC Health Registry program has collected health information through interviews with responders, people living or attending school in the vicinity of the WTC site, and people working or present in the vicinity on September 11, 2001. The registry program, with total federal funding of $23 million, completed enrollment and conducted interviews with over 71,000 participants by November 2004. Officials updated contact information for all participants in 2005, and they plan to start conducting the first follow-up health survey of participants in late March 2006. Registry officials would like to conduct subsequent follow-up surveys every 2 years until about 2023—20 years after the program began in 2003— but have not yet secured funding for long-term monitoring. The registry is designed to provide a basis for research to evaluate the long-term health consequences of the disaster. It includes contact information for people affected by the WTC attack, information on individuals’ experiences and exposures during the disaster, and information on their health. In November 2004, registry officials published preliminary results on the health status of registry participants, and officials expect to submit several research papers for publication within the next year. In addition, in May 2005, registry officials published guidelines for allowing registry information to be used in scientific research, and as of February 2006, they approved three proposals for external research projects that use registry information. These proposals include two studies of building evacuations and a study of psychological responses to terrorism. Officials from the FDNY, worker and volunteer, and WTC Health Registry programs are concerned that current time frames for federal funding arrangements for programs designed to track participants’ health over time may be too short to allow for identification of all the health effects that may eventually develop. ATSDR’s 5-year cooperative agreement with the New York City Department of Health and Mental Hygiene to support the WTC Health Registry went into effect April 30, 2003, and extends through April 29, 2008. 
Similarly, NIOSH awarded 5-year grants in July 2004 to continue the FDNY and worker and volunteer programs through mid-2009; the programs had begun in 2001 and 2002, respectively. Health experts involved in these monitoring programs, however, cite the need for long-term monitoring of affected groups because some possible health effects, such as cancer, may not appear until decades after a person has been exposed to a harmful agent. They noted that long-term monitoring could result in earlier detection and treatment of cancers that might develop. Health experts also told us that monitoring is important for identifying and assessing the occurrence of newly identified conditions, such as WTC cough, and chronic conditions, such as asthma. In January 2006, CDC received a $75 million appropriation for purposes related to the September 11, 2001, terrorist attacks. It is available to fund baseline screening, long-term monitoring, and health care treatment of emergency services and recovery personnel who responded to the WTC disaster. CDC is required to give first priority to funding baseline, follow-up screening, long-term medical health monitoring, or treatment programs implemented by the worker and volunteer program, the FDNY Medical Monitoring Program, the WTC Health Registry, the New York Police Foundation’s Project COPE, and the Police Organization Providing Peer Assistance of New York City. CDC is required to give second priority to funding similar programs that are coordinated by other organizations that are working with New York State and New York City. The programs that may qualify for secondary consideration are not specified in the law. In mid-February 2006, CDC officials told us that they were engaged in discussions with congressional stakeholders and the organizations specified in the law to help the agency decide how to spend the appropriated funds. Officials said that, to aid their decisionmaking, they were also consulting with private philanthropic organizations, including the American Red Cross, to learn more about the grant funds the organizations have provided to support the recovery needs of people affected by the WTC attack. CDC officials told us that they plan to first decide how they will allocate funds among screening, monitoring, and treatment programs and then make other decisions, such as how long the allocated funds will be available for each program. They said that they anticipated reaching a decision about the allocation of the funds by the end of February 2006, but did not know when they would reach other decisions. HHS’s OPHEP established the WTC Federal Responder Screening Program to provide medical screening examinations for an estimated 10,000 federal workers who responded to the WTC disaster in an official capacity and were not eligible for any other medical monitoring program. OPHEP did not initially develop a comprehensive list of federal responders who were eligible for the program. The program began in June 2003—about a year later than other monitoring programs—and had completed screenings for 394 workers through March 2004. No additional examinations were provided until the program resumed in December 2005, because OPHEP officials had temporarily suspended new examinations until they could resolve several operational issues. The program resumed conducting examinations for current federal workers in December 2005 and had completed 133 additional examinations for them as of early February 2006. 
The examination process has not resumed for WTC responders who are no longer federal employees, but OPHEP recently executed an agreement with NIOSH to arrange for the worker and volunteer program to provide examinations to these WTC responders. We also identified two additional federal agencies that established screening programs for their own personnel who responded to the disaster. HHS's WTC Federal Responder Screening Program was established to provide free voluntary medical screening examinations for an estimated 10,000 federal workers whom their agencies sent to respond to the WTC disaster from September 11, 2001, through September 10, 2002, and who were not eligible for any other monitoring program. FEMA provided $3.74 million through an interagency agreement with HHS's OPHEP for the purpose of developing and implementing the program. OPHEP entered into an agreement with HHS's FOH to schedule and conduct the screening examinations. The launching of the federal responder screening program lagged behind the implementation of other federally funded monitoring programs for WTC responders. For example, the medical screening program for New York State employees and the worker and volunteer program started conducting screening examinations in May 2002 and July 2002, respectively. However, OPHEP did not launch its program until June 2003. (Figure 1 highlights key actions in developing and implementing the program.) Initially, OPHEP did not develop a plan for identifying all federal agencies and their personnel that responded to the WTC disaster or for contacting all federal personnel eligible for the screening program. Although OPHEP and FEMA developed a partial list of federal responders—consisting primarily of HHS and FEMA personnel—OPHEP did not have a comprehensive list of agencies and personnel, and so could not inform all eligible federal responders about the WTC screening program. The program's principal action to communicate with the federal responders was to place program information and registration forms on FEMA's National Disaster Medical System (NDMS) Web site. The screening program had operated for about 6 months when OPHEP officials decided in January 2004 to place it on hold by temporarily suspending examinations. FOH officials told us that after examinations were suspended, 35 additional people requested examinations and were placed on a waiting list. The officials also told us that they had completed 394 screening examinations from June 2003 through March 2004, with most completed by the end of September 2003. According to FOH, a total of $177,967 was spent on examinations through March 2004. OPHEP officials told us that three operational issues contributed to the decision to suspend the program. First, OPHEP could not inform all eligible federal responders about the program because it lacked a comprehensive list of them. Second, there were concerns about what actions FOH clinicians could take when screening examinations identified problems. Based on the examinations that had been completed before the program was placed on hold, FOH clinicians determined that many participants needed additional diagnostic testing and follow-up care, primarily in the areas of respiratory functioning and mental health. However, the existing interagency agreement made no provision for follow-up care and gave clinicians no direction on how to handle further diagnostic tests, treatment, or referrals.
FOH officials told us that they were concerned about continuing to provide screening examinations without the ability to provide participants with additional needed services. Third, although the screening program had been established to provide examinations to all federal responders regardless of their current federal employment status, HHS officials told us that the department determined that FOH does not have the authority to provide examinations to people who are no longer in federal service. In April 2005, OPHEP began to prepare for resuming the examination program by enlisting the assistance of ATSDR—which had successfully developed the WTC Health Registry—to establish a database containing the names of federal responders, develop a new registration Web site, and develop and implement recruitment and enrollment plans for current and former federal workers. OPHEP executed an agreement with ATSDR allocating about $491,000 of the funds remaining from FEMA for these activities. OPHEP officials told us that, as part of the program’s recruitment and enrollment efforts, in mid-October 2005, a letter was sent to about 1,700 people identified as having responded to the WTC disaster to inform them about the program. According to OPHEP, the new registration Web site was activated in October 2005, and through early February 2006, 345 additional current federal workers and 32 former workers had registered to obtain an examination. In July 2005, OPHEP and FOH executed a new agreement for providing examinations to WTC responders who are current federal workers. Under this agreement, FOH clinicians can now make referrals for follow-up care. For example, they can refer participants with mental health symptoms to an FOH employee assistance program for a telephone assessment. If appropriate, the participant can then be referred to an employee assistance program counselor for up to six in-person sessions. If the assessment indicates that longer treatment is necessary, the participant can instead be advised to use health insurance to obtain care or to contact a local Department of Labor Office of Workers’ Compensation to file a claim, receive further evaluation, and possibly obtain compensation for mental health services. The new agreement between OPHEP and FOH also allows FOH clinicians to order additional clinical tests, such as special pulmonary and breathing tests. FOH officials told us that they resumed providing examinations in December 2005 and that 133 examinations have since been completed. The examination process has not resumed for WTC responders who are no longer federal employees, but in late February 2006, OPHEP executed an agreement with NIOSH to arrange for the worker and volunteer program to provide examinations to these WTC responders. Under this agreement, former federal workers will receive a one-time examination comparable to the type of examination that FOH is now providing to current federal workers. Patients with eligible conditions will be referred to the treatment programs supported by the American Red Cross or other available programs. In addition to the OPHEP program, we identified two federal agencies that established medical screening programs to assess the health of the personnel they had sent to respond to the WTC disaster. One agency, the Army, established two screening programs—one specifically for Army Corps of Engineers personnel and one that also included other Army responders. 
The Army Corps of Engineers established a voluntary program to assess the health of 356 employees it had sent to respond to the disaster. The program, initiated in November 2001, consists of sending employees an initial medical screening questionnaire covering physical health issues. If questionnaire results indicate symptoms or concerns that need further evaluation, the employee is offered a medical examination. As of August 2004, 92 Corps of Engineers employees had participated in the program, with 40 receiving follow-up examinations. The Army’s Center for Health Promotion and Preventive Medicine initiated a program—the World Trade Center Support Health Assessment Survey—in January 2002. It was designed as a voluntary medical screening for Army military and civilian personnel, including contractors. From January 2002 through September 2003, questionnaires were sent to 256 employees. According to DOD, 162 employees completed and returned their questionnaires. In addition, the U.S. Marshals Service, within the Department of Justice, modified an existing agreement with FOH in 2003 for FOH to screen approximately 200 U.S. Marshals Service employees assigned to the WTC or Pentagon recovery sites. The one-time assessment includes a screening questionnaire and a medical examination. FOH officials said that as of August 2005, 88 of the 200 U.S. Marshals Service employees had requested and obtained examinations. Officials involved in the WTC health monitoring programs implemented by state and local governments or private organizations—including officials from the federal administering agencies—derived lessons from their experiences that could help officials design such programs in the future. They include the need to quickly identify and contact people affected by a disaster, the value of a centrally coordinated approach for assessing individuals’ health, the importance of monitoring both physical and mental health, and the need to plan for providing referrals for treatment when screening examinations identify health problems. Officials involved in the monitoring programs emphasized the importance of quickly identifying and contacting people affected by a disaster. They said that potential monitoring program participants can become more difficult to locate as time passes. In addition, potential participants’ ability to recall the events of a disaster may decrease over time, making it more difficult to collect accurate information about their experiences and health. However, the time it takes to design, fund, approve, and implement monitoring programs can lead to delays in contacting the people who were affected. For example, the WTC Health Registry received funding in July 2002 but did not begin collecting data until September 2003—2 years after the disaster. From July 2002 through September 2003, the program’s activities included developing the registry protocol, testing the questionnaire, and obtaining approval from institutional review boards. To expedite such information collection during the response to future disasters, ATSDR officials have developed a model data collection instrument, known as the Rapid Response Registry, to allow officials to identify and locate potentially affected individuals immediately after a disaster and collect basic preliminary information, such as their current contact information and their location during the disaster. 
ATSDR officials expect that using this instrument would reduce delays in collecting time-sensitive information while officials take the time necessary to develop a monitoring program for disaster-related health effects. According to ATSDR officials, state and local agencies can request the instrument and adapt it to their specific needs, and ATSDR can provide technical assistance on how to use the instrument. To date, 14 states have requested the Rapid Response Registry from ATSDR. Furthermore, officials told us that health monitoring for future disasters could benefit from additional centrally coordinated planning. Such planning could facilitate the collection of compatible data among monitoring efforts, to the extent that this is appropriate. Collecting compatible data could allow information from different programs to be integrated and contribute to improved data analysis and more useful research. In addition, centrally coordinated planning could help officials determine whether separate programs are necessary to serve different groups of people. For example, worker and volunteer program officials indicated that it might have been possible for that program to serve federal workers who responded to the disaster in an official capacity, which might have eliminated the need to organize and administer a separate program for them. Officials also stated that screening and monitoring programs should be comprehensive, encompassing both physical and mental health evaluations. This observation is supported by CDC's recent report that about half of the adults that CDC assessed in areas heavily affected by Hurricane Katrina exhibited levels of emotional distress that indicated a potential need for mental health services. Officials from the WTC worker and volunteer medical monitoring program told us that the initial planning for their program had focused primarily on screening participants' physical health, and that they did not originally budget for extensive mental health screening. Subsequently, they recognized a need for more extensive mental health screening, including greater participation of mental health professionals, but the program's federal funding was not sufficient to cover such screening. By collaborating with the Mount Sinai School of Medicine Department of Psychiatry, program officials were able to obtain philanthropic funding to develop a more comprehensive mental health questionnaire; provide on-site psychiatric screening; and when necessary, provide more extensive evaluations. Many participants in the monitoring programs required additional testing or needed treatment for health problems that were identified during screening examinations. Officials told us that finding treatment sources for such participants is an important, but challenging, part of the programs' responsibility. For example, officials from the worker and volunteer program stated that identifying providers available to treat participants became a major part of their operations, and was especially difficult when participants lacked health insurance. The officials said that planning for future monitoring programs should include a determination of how best to help participants obtain needed treatment. Federally funded programs implemented by state and local governments or private organizations to monitor the health effects of the WTC attack on thousands of people who responded to the disaster have made progress.
However, the program HHS established to screen the federal employees whose agencies sent them to the WTC after the attack has accomplished little, completing screenings of 527 of the thousands of federal responders. Moreover, no examinations occurred for a period of almost 2 years, and examinations for former federal workers have not yet resumed. Because of this program's limited activity, and because federal workers were excluded from other monitoring programs on the assumption that they could be screened through the HHS program, many federal responders may not have had an opportunity to identify and seek treatment for health problems related to the WTC disaster. Based on their experiences, officials involved in the monitoring programs have made a number of useful observations that could apply to future terrorist attacks and natural disasters, such as Hurricane Katrina. For example, screening for mental as well as physical health problems in New Orleans and along the Gulf Coast will be critical to the recovery of survivors of Hurricane Katrina and the responders to the disaster, as indicated by CDC's early assessment of the extent of mental health distress among people affected by Hurricane Katrina. Another observation was the importance of quickly identifying and contacting people affected by a disaster. The model data collection instrument developed by ATSDR has the potential to enable officials to quickly and systematically identify people involved in future disasters, a necessary first step in conducting health monitoring. Finally, officials noted the value of centrally coordinated planning of health monitoring, which could improve the underlying database for research and eliminate the need for separate and sometimes incompatible monitoring programs for different populations. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information about this testimony, please contact Cynthia A. Bascetta at (202) 512-7101 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Helene F. Toiv, Assistant Director; George H. Bogart; Alice L. London; Roseanne Price; and William R. Simerl made key contributions to this statement.

After the 2001 attack on the World Trade Center (WTC), nearly 3,000 people died and an estimated 250,000 to 400,000 people in the vicinity were affected. An estimated 40,000 people who responded to the disaster, including New York City Fire Department (FDNY) personnel and other government and private-sector workers and volunteers, were exposed to physical and mental health hazards. Concerns remain about the long-term health effects of the attack and about the nation's capacity to plan for and respond to health effects resulting from future disasters.
Several federally funded programs have monitored the physical and mental health effects of the WTC attack. These monitoring programs include one-time screening programs and programs that also conduct follow-up monitoring. GAO was asked to assess the progress of these programs and examined (1) federally funded programs implemented by state and local government agencies or private institutions, (2) federally administered programs to monitor the health of federal workers who responded to the disaster in an official capacity, and (3) lessons learned from WTC monitoring programs. GAO reviewed program documents and interviewed federal, state, and local officials and others involved in WTC monitoring programs. This statement updates information GAO provided to Congress on September 10, 2005. Three federally funded monitoring programs implemented by state and local governments or private organizations after the WTC attack, with total funding of about $104 million, have provided initial medical examinations--and in some cases follow-up examinations--to thousands of affected responders to screen for health problems. For example, the FDNY medical monitoring program completed initial screening for over 15,000 firefighters and emergency medical service personnel, and the worker and volunteer program screened over 14,000 other responders. The New York State responder screening program screened about 1,700 state responders before ending its examinations in 2003. These monitoring programs and the WTC Health Registry, with total federal funding of $23 million, have collected information that program officials believe researchers could use to help better understand the health consequences of the attack and improve treatment. Program officials expressed concern, however, that current time frames for federal funding arrangements may be too short to allow for identification of all future health effects. CDC recently received a $75 million appropriation to fund health screening, long-term monitoring, and treatment for WTC responders and is deciding how to allocate these funds. In contrast to the progress made by other federally funded programs, the Department of Health and Human Services' (HHS) program to screen federal workers who were sent by their agencies to respond to the WTC disaster has accomplished little and lags behind. The program--which started in June 2003, about one year later than other WTC monitoring programs--completed screening of 527 of the estimated 10,000 federal workers who responded in an official capacity to the disaster, and in early 2004, examinations were suspended for almost 2 years. The program's limited activity and the exclusion of federal workers from other monitoring programs because of the assumption that they could receive screening examinations through the HHS program may have resulted in many federal responders losing the opportunity to identify and seek treatment for their WTC-related health problems. Officials involved in WTC health monitoring programs cited lessons from their experiences that could help others who may be responsible for designing and implementing health monitoring efforts that follow other disasters, such as Hurricane Katrina. 
These include the need to quickly identify and contact people affected by a disaster; to monitor for mental health effects, as well as physical injuries and illnesses; and to anticipate when designing disaster-related monitoring efforts that there will likely be many people who require referrals for follow-up care and that handling the referral process may require substantial effort.
In the United States, responsibility for spectrum management is shared between two federal agencies: FCC, an independent agency, and NTIA, an administration within the Department of Commerce. FCC manages spectrum use for nonfederal users, including commercial, private, and state and local government users. NTIA manages spectrum for federal government users and acts for the President with respect to spectrum management issues. FCC and NTIA, with direction from Congress and the President, jointly determine the amount of spectrum allocated to federal and nonfederal users, including the amount to be shared by federal and nonfederal users. FCC and NTIA manage the radio frequency spectrum in the United States through allocation and assignment: Allocation involves segmenting the radio spectrum into bands of frequencies that are designated for use by particular types of radio services or classes of users. (Fig. 1 illustrates examples of services by frequency band.) In addition, spectrum managers specify service rules, which include the technical and operating characteristics of equipment. Assignment, which occurs after spectrum has been allocated for particular types of services or classes of users, involves providing users, such as commercial entities or government agencies, with a license or authorization to use a specific portion of spectrum. FCC assigns licenses within frequency bands to commercial enterprises, state and local governments, and other entities, while NTIA authorizes spectrum use through frequency assignments to federal agencies. Several entities advise FCC and NTIA in their spectrum management activities. FCC's Technological Advisory Council (TAC) consists of approximately 50 telecommunications experts who provide technical advice to FCC and make recommendations on the issues and questions presented to it by FCC. TAC is a federal advisory committee organized under the Federal Advisory Committee Act. TAC is currently focused on key issues affecting the deployment of new broadband technologies and services, seeking to spur opportunities for innovation, greater efficiencies, and job creation. NTIA's Commerce Spectrum Management Advisory Committee (CSMAC)—also organized under the Federal Advisory Committee Act—provides advice and recommendations to NTIA on a broad range of issues regarding spectrum management. CSMAC consists of approximately 25 private sector spectrum policy experts who offer insight and perspective on reforms, including long-range spectrum planning. Members are selected based on their technical background and expertise as well as to ensure diversity and balanced viewpoints. NTIA's Interdepartment Radio Advisory Committee (IRAC)—an interagency advisory committee composed of representatives from 19 federal agencies that use spectrum—was established in 1922 to coordinate federal use of spectrum and provide policy advice on spectrum issues. It was originally organized by federal agencies that were seeking a way to resolve issues related to federal spectrum use in a cooperative manner. IRAC and its subcommittees assist NTIA in assigning frequencies for federal spectrum users and developing and executing policies, programs, procedures, and technical criteria pertaining to the allocation, management, and use of spectrum. The purpose of a communications system using spectrum is to relay information—audio, visual, and data—from a transmitter to a receiver (see fig. 2). A variety of technical methods exist to encode information such that it can be transmitted as electromagnetic radiation.
Antennas are components of both transmitters and receivers used to emit and admit, respectively, electromagnetic radiation-carrying signals. A variety of factors influence the ability of a receiver to properly capture the transmitted signal and decode the information for use, including the terrain, distance, and atmospheric conditions between the transmitter and the receiver. For instance, buildings, mountains, and foliage can prevent a transmitted signal from being properly received for some types of communications systems. In other instances, the receiver may be located too far away from the transmitter. In addition, communications systems must operate in environments where a variety of natural and man-made electromagnetic radiation is present. Such undesired radiation can impede a communications system's transmissions from reaching their intended recipients, an occurrence called interference. It is impossible to eliminate all interference from communications systems, and not all interference will prevent the proper functioning of a communications system. However, in some cases, the interference is considered harmful, meaning that it "endangers the functioning of a radionavigation service or of other safety services or seriously degrades, obstructs, or repeatedly interrupts a radiocommunication service." Harmful interference can occur when two communications systems use the same or adjacent frequencies in the same geographic area (see fig. 3). In the first case, co-channel interference occurs when two communications systems operate on the same frequency assignment in the same geographic area. In the second case, adjacent band interference occurs between two communication systems operating on different, but adjacent, frequency assignments in the same geographic area. Adjacent band interference, the focus of this report, arises in two ways (see fig. 4). First, transmitters emit undesired emissions into adjacent frequencies that can cause interference to receivers operating on those assigned frequencies; these are generally known as out-of-band emissions. Second, receivers admit undesired emissions from transmitters in adjacent frequencies, causing those receivers to experience interference; in other words, the receiver cannot reject this undesired energy, impairing its use. (Public Law 112-96 required GAO to study interference between adjacent spectrum uses, rather than co-channel interference between users of the same spectrum.) Concerns about harmful interference are a factor in FCC and NTIA spectrum management decisions. To prevent harmful interference, FCC and NTIA have primarily focused on setting emission limits on transmitters and establishing guard bands—spectrum that is left unused between different radio services. For instance, FCC and NTIA often place limits on the out-of-band emissions of transmitters to prevent adjacent band interference. In cases where out-of-band emissions cannot be sufficiently lowered, FCC or NTIA establishes guard bands to separate the assigned frequencies of adjacent communications systems. When a spectrum user experiences interference, it can make a claim of harmful interference to the relevant agency—FCC or NTIA—which will investigate and work to resolve the problem.
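The relationship among transmitter out-of-band emissions, guard bands, and the interference a victim receiver experiences can be illustrated with a simple back-of-the-envelope calculation. The Python sketch below uses entirely hypothetical figures for transmitter power, the emission mask, propagation loss, and the receiver's tolerance; none of the values come from FCC or NTIA rules, and the sketch is meant only to show why widening a guard band reduces the out-of-band energy that reaches a receiver.

    # Illustrative sketch (hypothetical values): estimates whether an adjacent-band
    # transmitter's out-of-band emissions exceed a receiver's interference tolerance.
    # The emission mask, path loss, and tolerance figures are invented for illustration
    # and are not drawn from FCC or NTIA rules.

    def out_of_band_attenuation_db(offset_mhz):
        """Hypothetical transmitter emission mask: attenuation of unwanted emissions
        (dB below the in-band power) as a function of frequency offset from the band edge."""
        if offset_mhz < 1:
            return 20.0
        elif offset_mhz < 5:
            return 40.0
        else:
            return 60.0

    def interference_at_receiver_dbm(tx_power_dbm, offset_mhz, path_loss_db):
        """Out-of-band emission power that reaches the victim receiver."""
        return tx_power_dbm - out_of_band_attenuation_db(offset_mhz) - path_loss_db

    tx_power_dbm = 40.0       # hypothetical transmitter power
    path_loss_db = 100.0      # hypothetical propagation loss between the two sites
    tolerance_dbm = -110.0    # hypothetical level the receiver can tolerate

    for guard_band_mhz in (0.5, 2.0, 10.0):
        received = interference_at_receiver_dbm(tx_power_dbm, guard_band_mhz, path_loss_db)
        status = "harmful" if received > tolerance_dbm else "tolerable"
        print(f"guard band {guard_band_mhz:>4} MHz: interference {received:6.1f} dBm -> {status}")

Under these assumed figures, only the widest separation brings the out-of-band energy below the level the receiver is assumed to tolerate, which mirrors the trade-off spectrum managers face between guard band width and efficient use of spectrum.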
According to FCC, applying the definition of harmful interference often requires case-by-case consideration, as the definition does not provide objective guidance on what level of service degradation or interruption meets the "harmful" threshold and because several factors, including the type and purpose of the service, must be taken into account. In addition to regulatory actions, well-designed transmitters and receivers can also help prevent harmful interference. Electronic components called filters are used by transmitters and receivers to help ensure that a transmitter only emits electromagnetic signals in its assigned frequency and a receiver only admits electromagnetic signals from its assigned frequency. Ideally, filters would only allow desired signals to pass and block all undesired signals, and hence adjacent band interference would never occur. However, in practice, filters are not perfect, which leads to the potential for harmful adjacent-band interference (see fig. 5). The potential for improving receivers to prevent adjacent-band interference has been receiving increased attention from FCC and NTIA. One of the most recent and high-profile examples, as described earlier, is the potential interference between LightSquared's proposed wireless broadband network and GPS receivers. To date, there have been a limited number of other cases in which interference concerns involved receiver performance, including potential interference between mobile communications services and satellite radio, which involved several proceedings over more than 10 years, and between television receivers and unlicensed devices seeking to use spectrum between channels. However, an FCC official stated in a recent testimony that receiver performance is becoming increasingly important as a limiting factor in repurposing spectrum for new uses and in packing services closer together. FCC, working with NTIA, continues its efforts to repurpose spectrum for new uses and technologies—that is, change its rules to reallocate a band of spectrum from an existing use to a new use—to accommodate the growing demand for spectrum. To improve receiver performance, manufacturers and commercial licensees have developed voluntary standards that are used to design and procure receiver equipment for some services. Manufacturers and commercial licensees have also used private negotiation, improvements to technology, and information sharing between and among spectrum users to improve receiver performance. Some industry-led organizations have adopted voluntary standards for receivers. Standards can help guide receiver design to help prevent interference from adjacent spectrum users and can be either voluntary or mandatory. If voluntary, industry members voluntarily agree to meet them but are not legally required to do so. If mandatory, the standards would have been imposed by governmental action and industry members would be legally required to meet them. Features of receiver standards often include selectivity, the ability of a receiver to separate wanted from unwanted signals in the adjacent frequency; sensitivity, the detection limit of the receiver to admit the weakest desired signal level; and dynamic range, the range of desired signal levels, from the weakest to the strongest, that a receiver can admit and function properly. Several industry representatives told us that voluntary standards for receivers adopted by industry-led organizations are working well to limit interference.
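To illustrate how the receiver characteristics described above are typically expressed and checked, the following sketch compares a receiver's measured selectivity, sensitivity, and dynamic range against a hypothetical specification. The parameter names, numeric limits, and the structure of the check are assumptions made for illustration and are not taken from any published industry standard.

    # Illustrative sketch (hypothetical figures): checks a receiver's measured
    # characteristics against the kinds of parameters that voluntary receiver
    # standards often specify: selectivity, sensitivity, and dynamic range.
    # The numeric limits are invented for illustration only.

    from dataclasses import dataclass

    @dataclass
    class ReceiverSpec:
        min_selectivity_db: float    # required rejection of adjacent-channel signals
        max_sensitivity_dbm: float   # weakest desired signal the receiver must detect
        min_dynamic_range_db: float  # required span from weakest to strongest usable signal

    @dataclass
    class ReceiverMeasurement:
        selectivity_db: float
        sensitivity_dbm: float
        strongest_usable_dbm: float

        @property
        def dynamic_range_db(self) -> float:
            return self.strongest_usable_dbm - self.sensitivity_dbm

    def meets_spec(meas: ReceiverMeasurement, spec: ReceiverSpec) -> dict:
        return {
            "selectivity": meas.selectivity_db >= spec.min_selectivity_db,
            "sensitivity": meas.sensitivity_dbm <= spec.max_sensitivity_dbm,
            "dynamic_range": meas.dynamic_range_db >= spec.min_dynamic_range_db,
        }

    spec = ReceiverSpec(min_selectivity_db=60, max_sensitivity_dbm=-116, min_dynamic_range_db=90)
    meas = ReceiverMeasurement(selectivity_db=70, sensitivity_dbm=-119, strongest_usable_dbm=-20)
    print(meets_spec(meas, spec))  # all three checks pass for these assumed values

A procurement office could apply this kind of check to vendor test data, although actual standards define many additional parameters and test conditions.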
Some industry organizations have developed voluntary standards for receivers, including those in the aviation, satellite, and broadcast industries. The processes these organizations use to develop standards are similar and generally include bringing together a diverse group of stakeholders (both commercial and government) for open discussions of specific issues and a transparent, consensus-based decision making process. The Telecommunications Industry Association (TIA) is a trade association representing the global information and communications technology industry, including equipment manufacturers and commercial licensees, and is an American National Standards Institute (ANSI)-accredited standards-developing organization. Within TIA, there are committees that set standards for specific spectrum uses, such as the committee that sets standards for land mobile radios used by, among others, public safety agencies. According to TIA officials, developing a new standard includes the coming together of relevant industry stakeholders who have determined that the creation or amendment of standards is necessary, participation by industry and government stakeholders, and a documented, open process. Standards are typically created or amended within 1 to 3 years. In the aviation industry, RTCA, Inc. develops minimum performance standards for aviation-related receivers. According to RTCA officials, these standards typically include requirements for rejecting signals from users in adjacent and other bands. Meetings are publicly announced and open to anyone with an interest in the topic or standard under consideration. The timelines for developing a standard can vary greatly, but typically take 3 to 5 years. In the broadcast industry, the Advanced Television Systems Committee (ATSC) has established voluntary industry recommended practices for television receivers, which include requirements for rejecting signals from adjacent spectrum users. ATSC practices are created by a multi-stakeholder group—representing the broadcast, broadcast equipment, motion picture, consumer electronics, computer, cable, satellite, and semiconductor industries—and usually take at least 1 to 2 years to achieve consensus. Once developed, voluntary industry standards may be used by manufacturers and commercial licensees to design and procure equipment. Representatives of the cellular industry told us that their industry relies on a voluntary approach to receiver standards and receiver performance, pointing out that voluntary industry standards are used on a daily basis between handset manufacturers and wireless companies. Officials at the National Public Safety Telecommunications Council said that the TIA standards for land mobile radios are commonly referenced by public safety agencies when procuring equipment, including receivers. Representatives of two different land mobile radio manufacturers told us that standards form a baseline for performance and that their receivers typically exceed the relevant performance standards. When procuring equipment, compliance and certification processes can be used to assure users that equipment meets the voluntary industry standards. In some cases, there is a general certification program that covers a particular service. For example, the National Institute of Standards and Technology—in partnership with other industry representatives, the public safety community, and other government agencies—has a testing process for public-safety land mobile radios.
In other cases, compliance testing might be carried out by those purchasing the equipment. For example, TIA representatives told us that many of TIA’s member companies are involved in testing equipment against standards, although TIA is not itself involved in the certification process. Negotiation. In addition to developing and using voluntary industry standards, some manufacturers and commercial licensees privately negotiate interference concerns. Representatives of the cellular industry told us that their industry deals with interference on a daily basis, as interference problems are increasing and have become a regular part of doing business. Several industry representatives highlighted the cellular industry as a sector in which the wireless companies and manufacturers work in concert to maximize efficiency; these efforts often occur within the cellular spectrum bands and among like services. According to one industry representative, voluntary negotiations helped resolve interference to fixed microwave services that occurred in the past. The interference occurred when a new service began operating next to a microwave service in the 1.9 GHz band, where the microwave receivers had been in operation for 30 years. To resolve the interference, the parties discussed whether to add filters to the transmitters or the receivers and decided to add filters to the receivers. Improvements to Technology. Some manufacturers and commercial licensees also apply technological advancements to receivers and their components to improve the performance of receivers. For example, RTCA representatives told us that advances in filtering technology have been utilized in modern aviation equipment to reduce interference. One manufacturer of land mobile radios told us the company is continually making changes to equipment to improve performance and reduce interference. During the discussions to resolve interference to public safety operations using land mobile radios in the 800 MHz band, this manufacturer examined technical advances in receiver design that could help alleviate the interference. Information Sharing. Some manufacturers and commercial licensees share information to help improve receiver performance. The creation of standards, discussed earlier, is one area where manufacturers and commercial licensees, as well as other stakeholders, share information for the purpose of improving receiver performance. One equipment manufacturer told us that bringing together different stakeholders brings forth the best ideas to solve problems. Another example of communication helping to improve receiver performance was the sharing of information by commercial licensees, manufacturers, and industry associations to create guidance in response to interference between cellular and public safety services in the 800 MHz band. In this instance, after reports of interference were made in a number of locations, representatives of the public safety community and the wireless company involved came together to help resolve the interference. An equipment manufacturer said that there was a lot of cooperation and coordination among the stakeholders as they wanted to quickly eliminate the interference adversely affecting the mission-critical communications of public safety agencies using land mobile radios. As a result of this coordination and information sharing, a Best Practices Guide was created that contained information on how to prevent interference and mitigate existing interference. 
To improve receiver performance, federal spectrum users have mandated use of industry standards for receivers, specified system requirements to procure equipment, and negotiated with other spectrum users to resolve interference concerns. NTIA has mandated the use of standards for many federal spectrum users while FCC has not done so for nonfederal spectrum users, but both spectrum management agencies have taken actions to resolve specific cases of interference and conducted research to improve receiver performance. Some federal spectrum users have specified, and in some cases mandated, standards for receiver performance that are often based on industry-developed standards. Federal agencies use spectrum to operate systems both for internal use and for external use by other parties. For external systems, federal agencies can mandate standards that parties must meet to use the system. For example, the Department of Transportation uses ASTM International standards for emerging communication applications among vehicles and roadside equipment to enable safety, mobility, and environmental benefits. Similarly, the Federal Aviation Administration (FAA) publishes Technical Standard Orders (TSO) for communication and navigation systems that incorporate by reference RTCA standards for aviation receivers. According to RTCA officials, compliance with TSOs can be used as one basis for FAA certification of communications, navigation, and surveillance equipment. RTCA standards for receivers typically include requirements for rejecting signals from adjacent and other bands to limit interference. In addition to using existing industry-developed standards, federal spectrum users work with standards-setting bodies and other organizations to create standards in response to particular cases of or concerns about interference. For example, the Coast Guard worked with a standards-setting body—the Radio Technical Commission for Maritime Services (RTCM)—to create new standards for receivers when its marine radio system—used for applications such as distress calls and port navigation—experienced interference from neighboring services that were operating in compliance with federal rules for transmitters. According to one Coast Guard official, the standards were created in a few months and equipment manufacturers began producing receivers that met the new standards in 1 to 2 years. Similarly, the National Aeronautics and Space Administration (NASA) is currently working with other space-faring nations to create standards to help maintain the spectrum performance of highly sensitive satellite downlink receivers used to obtain data from interplanetary spacecraft missions (e.g., Mars exploration rovers). One NASA official said that adjacent band interference is a concern given the sensitive nature of these receivers and that both transmitter and receiver standards can help to address a majority of interference problems. When procuring equipment, federal spectrum users also specify system and component requirements, including those for receiver performance. The Department of Defense (DOD) uses acquisition guidance that emphasizes the need to address the potential for adjacent-channel and adjacent-band interference when designing and procuring equipment. As part of this guidance, DOD has established receiver performance requirements that apply to all procurements by military departments. (These requirements are included in DOD MIL-HDBK-237D, Electromagnetic Environmental Effects and Spectrum Certification Guidance for the Acquisition Process, and DOD MIL-STD-461F, Requirements for the Control of Electromagnetic Interference Characteristics of Subsystems and Equipment, among other documents.) NOAA also specifies system performance, including receiver performance, when procuring equipment.
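As described next, NOAA officials flow a system-level availability target down to components such as receivers when procuring equipment. The sketch below is a minimal illustration of one common way such an allocation can be made; the 99.5 percent target, the three-component breakdown, and the equal-allocation approach are assumptions for illustration, not values taken from NOAA or DOD documents.

    # Illustrative sketch (hypothetical values): flowing a system-level availability
    # target down to series components such as an antenna, receiver, and processor.
    # Assumes the common definition availability = uptime / (uptime + downtime) and
    # that the components operate in series, so their availabilities multiply.

    def required_component_availability(system_target: float, n_components: int) -> float:
        """Equal allocation: each of n series components must meet this availability
        for the overall system to meet its target."""
        return system_target ** (1.0 / n_components)

    system_target = 0.995   # hypothetical system availability requirement (99.5%)
    n_components = 3        # e.g., antenna, receiver, signal processor

    per_component = required_component_availability(system_target, n_components)
    print(f"Each component must achieve about {per_component:.4%} availability")
    # With three series components, each needs roughly 99.83% availability.

In practice, an agency would likely weight the allocation by each component's expected reliability rather than splitting the target equally.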
According to NOAA officials, the agency defines a system's expected availability—that is, the time that a system or equipment is capable of being used—when procuring equipment, which is then used to determine specifications for system components and subcomponents, including receivers. Federal spectrum users also negotiate with one another to resolve interference concerns between adjacent spectrum users. According to NTIA officials and IRAC members, interference cases are handled on a case-by-case basis, which can result in making changes to individual systems, such as adding filters to transmitters or receivers. For example, a National Science Foundation official said the agency is currently working to resolve interference to receivers that collect and process satellite signals from a nearby TV transmitter that was operating in compliance with FCC rules. IRAC members said that negotiation among federal spectrum users is aided by access to NTIA's database on federal spectrum use and by each agency having a designated spectrum manager. NTIA officials told us that most cases of harmful interference involving federal spectrum users are resolved between the cognizant parties rather than through NTIA. NTIA has mandated the use of standards for receiver performance for many federal users. NTIA sets mandatory standards for federal spectrum users in its Manual of Regulations and Procedures for Federal Radio Frequency Management. According to NTIA officials, these mandatory standards for receivers apply to about 60 percent of federal spectrum assignments, including land mobile radio, fixed, radar, and aeronautical mobile telemetry systems, and additional mandatory standards set by federal spectrum users like FAA cover another 10 percent of federal spectrum assignments. NTIA and federal spectrum users adopt industry-developed standards when they are available for a given service. For federal spectrum uses that have very specific applications or lack a commercial equivalent, NTIA can establish its own standards, as it did for radar systems, or choose not to establish standards. NTIA's mandatory standards, whether adopted from industry or established by NTIA itself, are used to certify federal equipment, and when procuring equipment, federal spectrum users must set specifications that comply with the NTIA mandatory standards. NTIA has reported that these mandated standards have done much to prevent interference to federal spectrum users and that mandated standards establish a baseline of performance but do not prevent users from moving to more efficient or better receivers. FCC has not set mandatory receiver standards for nonfederal spectrum users. FCC has specific statutory authority to establish minimum performance standards for home electronic equipment, like televisions (47 U.S.C. § 302a(a)(2)), but FCC officials said that the Commission lacks direct authority to impose regulations governing receiver performance in other cases outside home electronics. Therefore, FCC has generally relied on the marketplace to incentivize nonfederal licensees and manufacturers to produce receivers that can reject unwanted signals and limit interference. As noted in the previous section, manufacturers and commercial licensees have taken actions such as adopting industry standards to improve receiver performance.
While FCC has generally relied on the marketplace to improve receiver performance, it has provided incentives to spectrum users to do so in specific cases. For example, FCC defined the minimum levels of performance that a receiver must meet to make a claim of harmful interference in the 800 MHz band. Specifically, FCC set minimum levels for receiver performance for non-cellular systems, primarily public safety radios, as part of the reconfiguration of the 800 MHz band to mitigate interference between non-cellular and cellular systems. Therefore, spectrum users that choose to use receivers that do not meet the minimum levels are not entitled to full protection from interference. The public safety community and manufacturers recommended that FCC set objective criteria to qualify for interference protection. In this case, FCC reported that taking further action to improve receiver performance, like requiring public safety radios to fully comply with industry standards to claim harmful interference, would impose costs that outweighed the resulting interference protection. FCC and NTIA have also taken specific actions in response to cases of interference involving receiver performance. FCC and NTIA officials said that interference cases, both potential and realized, are generally handled on a case-by-case basis, so actions are taken that address the particulars of each case. In rulemakings for new services, FCC often invites comment about any receiver characteristics that should be taken into account, particularly for receivers currently in use in adjacent bands. In some cases, FCC has required that a new spectrum user protect the incumbent services in the adjacent band from harmful interference regardless of the performance of the receivers in use, a policy known as "first-in-time rights." The Advanced Wireless Services (AWS)-1 spectrum, for example, was reallocated for flexible fixed or mobile service, such as voice and data content, which raised concerns about potential interference with incumbent licensees in the adjacent band used for broadcast auxiliary services. The new AWS-1 licensee purchased and arranged for the installation of new filters for the incumbent's receivers to avoid causing harmful interference, as required by FCC rules for AWS-1 spectrum. NTIA's Institute for Telecommunication Sciences (ITS) investigates cases of potential or realized interference and identifies strategies to mitigate interference. ITS officials told us that ITS tends to work on interference cases that are particularly complicated or difficult to remedy rather than on more general interference or receiver concerns. For example, ITS investigated interference to fixed-satellite service (space-to-earth) stations in the 4 GHz band from radar systems operating in the adjacent band. The ITS report identified the source of the interference, which in this case was primarily poor filtering, and listed various solutions to remedy the interference. Moreover, NTIA is currently leading an interagency working group to examine receiver performance standards for GPS devices for federal users and the feasibility of accommodating terrestrial broadband systems in the bands adjacent to GPS.
However, based on a 2011 Silicon Flatirons Center workshop of experts from government, industry, and academia, one theme drawn from reviewing case studies of adjacent band interference was that information can be lost when interference problems are resolved on a case-by-case basis, as other operators with similar problems might not have access to the resolution of other cases. In addition to mandatory standards and case-by-case actions, NTIA and FCC have completed reports, requested research, held workshops, and taken other actions on receiver performance. Reports. NTIA's reports on the topic include a 2003 report summarizing existing receiver standards, both voluntary and mandatory, established by federal agencies, industry associations, and international groups, and a 2005 report compiling existing interference protection criteria for various radio services used by the federal government. Requested research by federal advisory committees. In 2010, NTIA's CSMAC studied interference and dynamic spectrum access, which resulted in recommendations on guard bands, equipment standards, and enforcement aimed primarily at NTIA and federal spectrum users. In 2012, FCC's TAC studied receivers and spectrum and presented recommendations to FCC. Public notices and workshops. FCC has taken steps to seek stakeholder input and encourage dialogue on receiver performance through public notices and workshops on receivers and spectrum efficiency. In 2003, FCC issued a Notice of Inquiry seeking public input on whether and how it should incorporate receiver performance specifications into its spectrum policy. Stakeholders submitted comments that varied in their support for greater action, whether taken by FCC or the market, though a majority of comments favored a market-driven approach, like voluntary industry standards for receivers. In 2007, FCC closed the proceeding, stating that action did not appear to be needed at that time; FCC officials said that the comments from the rulemaking presented no clear solution, particularly as no single solution fit all interference problems. More recently, FCC held a 2-day workshop in March 2012 dedicated to spectrum efficiency and receivers. Participants, including licensees, federal agencies, equipment manufacturers, and component providers, examined the characteristics of receivers and how their performance can affect the efficient use of spectrum and opportunities for the creation of new services. Most participants in the workshop's concluding panel, which was composed of a mix of stakeholders, noted that the status quo, as it relates to receiver performance, was not sustainable and that further action is needed. Although manufacturers, commercial licensees, and the federal government have taken steps to improve receiver performance, many of those with whom we spoke commented on what they perceive to be the challenges to further improvements. These challenges include the lack of coordination across industries when developing receiver standards, the lack of incentives to improve receivers, and the difficulty accommodating a changing spectrum environment. Standards are often developed for a single industry operating within a defined area of spectrum, such as the cellular industry. While members of a particular industry coordinate with each other, they may have little or no communication with services operating in adjacent spectrum bands.
Although standards-setting bodies have published receiver standards for many services, FCC officials and several other stakeholders we interviewed told us that standards are often not developed in coordination with stakeholders representing adjacent services. For example, some aviation equipment operates in the frequencies above the FM radio band. The aviation community sets its own standards, while the FM radio community develops its own. Each group has its own representatives who communicate within their own industry but not with those in the other industry. This lack of coordination and information sharing means that the impact of standards set by one group upon other groups is not always assessed, and immunity to adjacent-band interference is not necessarily addressed. There is also no publicly available compendium of current receiver performance standards or specifications to facilitate coordination or understanding across spectrum uses. FCC officials explained that while receiver performance standards have been developed for some services, there is no catalog of these standards, making it difficult to locate this information. FCC officials also told us that information about actual receiver performance is not readily available. Typically, this information only surfaces in the context of a rulemaking. For example, FCC made requests to equipment manufacturers for information about their receivers and was told by several that the information was proprietary, as companies did not want competitors to acquire information about signal strength or the energy level of devices, among other information. In addition, since standards are largely voluntary for commercial users, the extent to which industry standards are used remains unknown. The existence of voluntary standards does not guarantee that licensees and manufacturers will use them. For example, it is not known whether all manufacturers adhere to the recommended practices for the manufacture of television sets, since the practices are voluntary and the television licensees that broadcast television signals do not manufacture television sets or specify their operating characteristics—a situation known as decoupled receivers. A lack of incentives for spectrum users and equipment manufacturers to improve receiver performance was another challenge that many industry associations and other stakeholders we interviewed cited. As one industry researcher explained, there are no incentives for manufacturers to build more robust receivers, primarily because the manufacturers will not receive the benefits. Rather, those who want to make more spectrum available or share spectrum will benefit. One industry representative stated that the real question is one of motivation. Improved receiver performance to limit adjacent band interference often requires the addition of filters and can increase the size, weight, power consumption, and cost of the receiver. Additionally, there is no business case for a manufacturer to accept these downsides so that others implementing a new system in an adjacent band can benefit. Another industry representative echoed that position and told us that, independently, the private sector has no motivation to spend its time and resources to protect spectrum for other users, allow enhancements to other services, or accommodate new entrants.
Instead, as one stakeholder commented, companies have an incentive to make the cheapest receiver possible—that is, a receiver with poor filtering capabilities that is more sensitive to emissions from other bands—and no incentive to work with licensees in neighboring spectrum bands. Similar to commercial users, federal users also lack incentives to improve receiver performance. The PCAST report stated that federal users currently have no incentive to improve the efficiency with which they use their own spectrum allocation, nor does the federal system as a whole have incentives to improve its overall efficiency. Further, we have previously reported that federal users often use proven, older technologies that were designed to meet a specific mission and may be less efficient than more modern systems. NTIA's CSMAC also recently recommended that the federal government, including NTIA, consider incentives, rules, and policies to, among other things, improve the capability of receiving devices to reject adjacent channel interference. Lastly, private negotiation, which has been used regularly to resolve interference within the cellular industry, can be difficult in cases where there are dissimilar services or many parties. When disputes involve similar services in the same band, operators have similar incentives that facilitate private dispute resolution. This may not be the case when licensees with different services or in adjacent bands are involved. Also, when the number of users is relatively small, negotiation between parties may be able to resolve the problem. However, when there are many users involved, the transaction costs may be prohibitively high. For example, for services where receivers are decoupled from licensees, as in the case of television, the large number of receivers and potential lack of coordination among individual parties makes private negotiation a less feasible option. Stakeholders we interviewed cited accommodating changes to the spectrum environment as a challenge to improving receiver performance. As FCC attempts to accommodate new services and users, the Commission often alters how licensees can use spectrum bands. This repurposing of spectrum, either from a prior use or from no use, often gives rise to interference concerns that involve receiver performance, because incumbent services have manufactured receivers to operate without interference problems in the current environment. However, if that environment changes, receivers currently in use may experience increased interference. This was the case in the 2.3 GHz band, where interference concerns arose between two different services—Wireless Communications Service (WCS) and Satellite Digital Audio Radio Service (SDARS)—allocated to adjacent spectrum. Although the WCS allocation allowed for mobile service, the rules limiting out-of-band emissions for transmitters made mobile service impractical. After years of attempting unsuccessfully to deploy a mobile service, the WCS licensees petitioned for rule changes. In considering the WCS licensees' petition, the performance of the SDARS receivers was one of the critical areas of contention. SDARS receivers were not capable of filtering out stronger signals in adjacent spectrum. These receiver concerns required technical rules that effectively created guard bands on each side of the SDARS spectrum to prevent interference. Innovation on the part of current spectrum users is another factor that can change the spectrum environment.
Current spectrum users may decide to make changes to the configuration of their system based on business needs. Such changes can result in interference to adjacent spectrum holders. For example, both Nextel, a cellular provider, and public safety agencies had licenses to operate in the 800 MHz band. Nextel decided to convert its mobile radio architecture from one that used high antenna sites atop buildings or towers to one that used a short‐range cellular architecture with low antenna sites to provide more capacity in crowded urban areas. Even though Nextel’s transmitted power was below customary levels, the low antenna sites caused interference for other users, including public-safety land mobile-radio users, when they were close to Nextel transmitter sites. In 2004, FCC, in response to input from stakeholders including Nextel and the public safety community, proposed reconfiguring the 800 MHz band to separate the two systems. Current practices and policies related to receiver performance may constrain repurposing of spectrum going forward. Representatives from three industry associations told us that it is difficult to build receivers to accommodate an unknown future. As one industry association member told us, the problem is not a lack of information on the current environment but a lack of predictability about the future environment. To make meaningful decisions about current receiver performance, the future use of spectrum would need to be better defined. FCC’s TAC stated that part of the problem of increased receiver interference was the result of receivers having been built without adequate knowledge of future environmental performance constraints. Moreover, several stakeholders we interviewed also said that it would be difficult and could take considerable time to upgrade or replace receivers and equipment currently in use once deployed. In November 2012, an FCC official also testified that receiver performance is increasingly becoming a limiting factor in the repurposing of spectrum for new uses and in packing services closer together, and the official said that a continuing challenge for FCC will be to maximize the amount of usable spectrum for cost effective deployment of new communication services while sufficiently protecting incumbent receivers. FCC officials also said that interference problems between adjacent bands are growing and are more common in rulemakings; as noted in a recent proceeding on wireless innovation and investment, FCC stated that these rulemakings can be protracted, create uncertainty, and discourage investment. Given the challenges that exist to improving receiver performance, stakeholders we interviewed identified options that could be taken or led by FCC and NTIA, with the aim of increasing spectrum efficiency. Below we list several recurring options based on our interviews with industry associations, manufacturers and commercial licensees, federal agencies, and representatives from academia and research organizations, as well as our review of reports from federal advisory committees and workshops on this topic (see table 1). This list of options is not exhaustive, but provides information on options that could be implemented alone or in combination. Moreover, the options could be applied to varying degrees; that is, applied to specific spectrum uses or boundaries between two different uses or on a wider scale. 
Indeed, many stakeholders we interviewed indicated that each case of adjacent-band interference is unique, so a one-size-fits-all solution is likely not desirable or possible. Each of the options listed below entails advantages and disadvantages, as identified by stakeholders and reports, and thus implementing any of these options would involve trade-offs. In our interviews, a commonly mentioned disadvantage of improving receiver performance was that receivers would cost more because more components would be used in their manufacture. In addition, many stakeholders said that actions to upgrade or replace existing, legacy receivers could be costly or take a long time, particularly in the case of equipment designed to last for many years. Stakeholders frequently stated that an advantage of improving receiver performance was increased spectrum efficiency. It is difficult to quantify or estimate the overall costs and benefits of improving receiver performance, or the amount of spectrum to be gained by doing so, especially given the numerous and varied uses of spectrum. FCC, NTIA, and others have studied many of these options, and at a conceptual level, the advantages and disadvantages are well known. However, some of these options have not been implemented, while others have only been implemented in limited cases. Therefore, the practical effects of each option, that is, what would happen if the option were implemented, are not well known. Many of the stakeholders we interviewed indicated that receiver performance is an important aspect of spectrum efficiency and that it warrants further consideration as spectrum management agencies and spectrum users look for ways to make more efficient use of spectrum. For example, a group of experts convened by the Silicon Flatirons Center agreed that the advance of wireless technologies and maturing infrastructures had reached an inflection point where past methods of governance were no longer adequate, and the group generally supported having spectrum management agencies more explicitly consider receivers when drafting rules.

Many stakeholders we interviewed and reports we reviewed stated that greater use of industry standards could take different forms. The use of standards, voluntary and mandatory, has long been discussed and is widely understood, since standards are currently used by nonfederal and federal spectrum users to varying degrees. For instance, NTIA has mandated use of industry standards for receivers for many federal spectrum uses, as noted earlier. Stakeholders offered three main ways that industry-developed standards could be used: voluntarily, as a safe harbor allowing spectrum users that meet standards to receive protection from harmful interference, or as mandatory requirements, all described in greater detail in table 1. Stakeholders we interviewed varied in their support for these options, but many opposed mandatory standards. Many stakeholders told us they prefer the industry-led, voluntary standard-creating process, in contrast to government-created and mandated standards, as they believe industry-developed voluntary standards can be responsive and flexible to changing conditions. While mandatory standards were often opposed as an overall option to improve receiver performance, some stakeholders told us mandatory standards could be used in limited cases, such as when market forces do not sufficiently incentivize the production of robust receivers or when receivers are not tied to the licensee (i.e., decoupled receivers).
Several industry associations and individual stakeholders we spoke with supported the safe harbor option, whereby compliance with existing industry standards would serve as a prerequisite for receiving protection from interference. In 2003, NTIA recommended that FCC adopt industry standards on a voluntary or recommended basis, with FCC only granting protection for services with receivers that meet standards, and in doing so, NTIA suggested that FCC give priority to bands being reallocated to avoid problems encountered with legacy systems. In addition, federal programs have encouraged compliance with industry standards among nonfederal spectrum users through grants or other funding. For example, the Department of Homeland Security's Emergency Communications Grant program requires that land mobile radio systems purchased with grant funds by public safety agencies comply with industry standards set by TIA. As part of the transition from analog to digital television, NTIA offered coupons to subsidize consumer purchases of converter boxes, but the coupons could only be used to purchase converter boxes that met minimum performance standards. Regardless of the form of standards used, there are some overarching advantages and disadvantages. Advantages of greater use of industry-developed standards are that this option makes use of existing standards and that the open, consensus-based process used to develop most industry standards helps ensure they reflect the knowledge and input of a range of industry and government perspectives. Disadvantages are that standards may not keep pace with industry change and can be prescriptive, limiting the flexibility of manufacturers and licensees. Also, industry standards do not exist for all services. Further, as discussed earlier, industry standards tend to focus within a band or service rather than looking across boundaries; however, a few stakeholders we interviewed said FCC or NTIA could encourage standards-setting bodies to address particular problems, such as interference, or encourage more cross-industry bodies to help enhance industry standards. Some additional advantages and disadvantages apply to different forms of greater use of industry-developed standards, as listed in table 1.

Another option identified by stakeholders we interviewed and cited by reports was interference limits. The interference limits approach would explicitly set the level of energy—that is, the strength of the unwanted signal from adjacent bands—that a receiver would have to tolerate before making a claim of harmful interference. This is in contrast to the current situation, where expectations of receiver performance have almost always been implicit—that is, receivers have been expected to operate within the same parameters as their associated transmitters—which can lead to conflicts when parties have a different understanding of these expectations. Interference limits differ from the safe-harbor standards option discussed above, since under the interference limits approach, the prerequisite for claiming harmful interference is demonstrating that the level of energy the receiver is exposed to exceeds a predetermined level, rather than demonstrating that a receiver complies with industry standards, which are typically stated as specific performance characteristics. Recently, reports from both a working group within FCC's TAC and PCAST recommended that FCC test interference limits.
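The distinction between the safe harbor and interference limits approaches can be illustrated with a simple sketch. The following Python example is purely illustrative; the threshold value, field names, and functions are hypothetical and are not drawn from any FCC or NTIA rule. It shows how, under an interference limits approach, a claim of harmful interference might be screened by comparing the measured adjacent-band signal strength at the receiver against a predetermined limit, rather than by checking whether the receiver's design meets an industry standard.

```python
# Illustrative sketch only: hypothetical thresholds and data structures,
# not an actual FCC or NTIA rule or procedure.

from dataclasses import dataclass

@dataclass
class InterferenceClaim:
    service: str                    # service making the claim (hypothetical label)
    measured_adjacent_dbm: float    # measured adjacent-band signal power at the receiver
    meets_industry_standard: bool   # whether the receiver complies with an industry standard

# Hypothetical interference limit for a band: the adjacent-band signal level
# a receiver would be expected to tolerate before a claim qualifies.
INTERFERENCE_LIMIT_DBM = -30.0

def qualifies_under_interference_limit(claim: InterferenceClaim) -> bool:
    """A claim qualifies only if the measured adjacent-band energy exceeds the limit."""
    return claim.measured_adjacent_dbm > INTERFERENCE_LIMIT_DBM

def qualifies_under_safe_harbor(claim: InterferenceClaim) -> bool:
    """Under a safe harbor, protection depends on the receiver meeting the standard."""
    return claim.meets_industry_standard

claim = InterferenceClaim("example_service", measured_adjacent_dbm=-25.0,
                          meets_industry_standard=False)
print(qualifies_under_interference_limit(claim))  # True: the limit is exceeded
print(qualifies_under_safe_harbor(claim))         # False: the standard is not met
```

As the sketch suggests, the two approaches turn on different evidence: an interference limit looks at the signal environment the receiver actually faces, while a safe harbor looks at the receiver's compliance with a standard.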
The TAC working group recommended that FCC identify one or more pieces of spectrum, specifically adjacent spectrum allocations, that are good candidates for interference limits and initiate multi-stakeholder groups of relevant industry and government representatives to work out issues and implementation choices for these pieces of spectrum. PCAST recommended that NTIA and FCC take steps to consider both transmitters and receivers in their spectrum management policies and specified that an initial step should be trying interference limits. Other approaches to interference rights have been proposed, including one that aims to replace FCC's current policy of first-in-time rights—whereby FCC protects incumbent users and thus receivers from interference as a result of rule changes—with a policy under which users would have to self-protect against interference in adjacent bands. The advantages of this option cited by stakeholders and reports that we reviewed are that it would provide spectrum users with greater certainty, as it sets a criterion for harmful interference. In addition, this option could help enable private negotiation rather than FCC involvement to resolve potential and realized interference. Another advantage is that this option does not mandate a specific technology or design, leaving such choices to manufacturers and others. Among the disadvantages cited by stakeholders and reports are that this option would be more complex to develop and enforce compared to standards. Also, this option has not been used or tested, so it could take considerable time and resources to test and implement, and its practical effects and outcomes are unknown.

Many stakeholders we interviewed said that additional transparency and sharing of information on spectrum use and system characteristics by FCC and NTIA could help mitigate interference problems involving receivers by facilitating greater understanding of the systems already in place and thus the potential for interference to arise from the deployment of a new system in adjacent spectrum. In general, stakeholders identified two ways that additional information could be made available. First, FCC and NTIA could make more information available on the characteristics of transmitters and receivers in use, potentially in conjunction with a spectrum inventory. The CSMAC Interference and Dynamic Spectrum Access Subcommittee recommended that NTIA, FCC, or other government entities responsible for managing spectrum establish a clearinghouse to make information available to those seeking to obtain access to spectrum; this would give entities considering new services visibility into the potential for interference before they acquire spectrum and deploy equipment. NTIA said that resources for implementing this and other CSMAC recommendations were not included in recent budgets, but some of its ongoing band-specific and other initiatives correspond with these recommendations. Second, FCC could also compile and share information on existing industry standards for receivers. The TAC working group on receivers and spectrum found that industry and government receiver standards and recommended practices may exist but are often unknown to manufacturers and users operating in adjacent bands.
Therefore, the TAC working group recommended that FCC enhance its Spectrum Dashboard—a tool on FCC’s website that provides information on how different frequency bands are being used and allows the public to search, map, and download data on licenses—to include receiver standards. In terms of advantages, making additional information available would enable more informed decision-making by NTIA, FCC, and spectrum users through enhanced planning and testing. Moreover, additional information could help new entrants better understand the spectrum environment and potential interference concerns before committing resources; as part of its report, the CSMAC Interference and Dynamic Spectrum Access Subcommittee stated that some of the interference problems in recent years were not anticipated by new entrants and might have been avoided by providing new entrants with the interference characteristics of receiving and transmitting equipment used in adjacent bands. FCC and NTIA could also take action on this option quickly, compared to other options. However, several stakeholders said that concerns about proprietary and classified information could make it difficult to implement this option. In addition, it would require resources, both from federal spectrum management agencies and spectrum users, to implement this option and keep it up to date. Finally, making additional information on spectrum use available may be insufficient on its own to address the challenges to improving receiver performance. To improve receiver performance and increase spectrum efficiency, the federal government could promote research and development for receiver technologies and modeling. FCC officials told us that not much is known about the actual performance of receivers. A 2010 report prepared by the CSMAC Interference and Dynamic Spectrum Access Subcommittee found that more research is needed to evaluate advances in technologies and what standards will yield more spectrally efficient equipment, since such advances may significantly alter cost-performance trade-offs. More specifically, it recommended that the federal government could fund research to accelerate the development of filters to improve the performance of receivers, such as the ability to reject an undesired signal at frequencies close to the desired signal frequency, without affecting size or power consumption. The report also noted that a better dialogue between the filter community and spectrum managers is essential as filter performance has a large impact on spectrum efficiency. One licensee we interviewed said that more investment is needed to improve technology for receivers and that new technology could help lower the cost and increase the flexibility of devices. A group like the Wireless Spectrum Research and Development’s Senior Steering Group could be used to help coordinate research in this area across the federal government. Additional information gleaned through research could enable spectrum managers and users to better understand the current state of receiver performance and help inform future choices. Stakeholders said that an advantage of research and development would be to provide FCC and NTIA with more accurate information on interference mitigation technologies that are feasible. Stakeholders said that research could also help enable improvements to receivers to help prevent interference problems across bands. 
However, this option would require resources, not only to fund research but also to coordinate research efforts across the federal government and the private sector. Also, certain advancements through research, like those in filtering, may not be applicable across spectrum uses. A few members from one industry association we interviewed also said that filters can help solve some but not all problems of adjacent-band interference. Finally, the federal government may lack infrastructure (e.g., test beds, labs) to directly support research.

As demand for and use of spectrum continues to increase, improving the performance of receivers is one of several ways to more efficiently use spectrum and accommodate new services. To date, there have been a limited number of instances in which interference concerns driven by receiver performance have impeded a licensee's planned use of adjacent spectrum. Even so, PCAST and FCC, among others, have recognized the growing impact of receivers on efficient spectrum use, and adjacent-band interference concerns may increase in years to come as spectrum management agencies look to allocate additional spectrum for wireless broadband and other new services in an already crowded environment. Therefore, many stakeholders feel that more can and should be done to improve receiver performance in concert with other efforts to increase spectrum efficiency—the status quo is increasingly becoming untenable. Stakeholders have identified and studied several options to improve receiver performance and the efficient use of spectrum. In some instances, these options entail direct federal intervention, such as imposing mandatory standards for receivers, whereas in others, federal policy creates an environment where industry participants' individual and collective actions can improve receiver performance. Each of these options entails advantages, including reduced actual and potential interference and improved spectrum efficiency, and disadvantages, including possibly higher equipment costs. FCC and NTIA have each explored receiver performance in the past, and recent recommendations from advisory committees specific to this topic provide Congress, NTIA and FCC, and industry stakeholders with options for further consideration and testing. Since the topic has been the subject of considerable study, the potential advantages and disadvantages of various options are generally understood. However, less is known about the practical effects of implementing these options to address interference. Several options, such as safe harbor standards and interference limits, have not been implemented, and others, such as mandatory standards, have been implemented only for certain federal users; it is unclear how these experiences would translate to nonfederal users. Greater understanding of the practical effects of these options will allow FCC to make more informed spectrum-management decisions moving forward to ensure the efficient and effective use of spectrum.

To improve receiver performance and spectrum efficiency, we recommend that the Chairman of the Federal Communications Commission consider collecting information on the practical effects of various options to improve receiver performance, including consideration of small-scale pilot tests of these options.

We provided a draft of this report to the Department of Commerce (Commerce) and FCC for review and comment. In response to the draft report, Commerce and FCC provided written comments, which are reprinted in appendixes II and III, respectively.
In its letter, Commerce said that it will continue to work with FCC on issues of potential interference. Commerce also emphasized the federal government’s use of standards to improve receiver performance and the benefits of receiver performance characteristics as a factor in improving spectrum efficiency. In its letter, FCC said that the Commission has already initiated a process to gather information on the effects of options to improve receiver performance; FCC discussed various actions under way, which we describe in this report, including that FCC’s TAC recently submitted to the Commission for consideration recommendations to improve receiver performance. These actions will help FCC to understand the potential advantages and disadvantages of various options to improve receiver performance. However, we do not believe that these actions will provide information on the practical effects of options that FCC might get from a pilot test or other information-collection efforts, which, as we note in the report, will allow FCC to make more informed spectrum-management decisions. Commerce and FCC also provided technical corrections to the draft report that we incorporated as appropriate. We are sending copies of this report to the Secretary of Commerce, the Chairman of the Federal Communications Commission, and the appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. This report examines efforts by commercial licensees, manufacturers, and the federal government to ensure that transmission systems are designed and operated so as to not compromise reasonable use of adjacent spectrum, with a focus on receiver performance as it relates to increasing the efficient use of spectrum. In particular, the report provides information on (1) actions taken by selected manufacturers and commercial licensees to improve receiver performance; (2) actions taken by the federal government to improve receiver performance; (3) challenges, if any, to improving receiver performance; and (4) options identified by stakeholders and reports to improve receiver performance. To address the engagement’s objectives, we reviewed relevant statutes and regulations, and Federal Communications Commission (FCC) and National Telecommunications and Information Administration (NTIA) documents related to spectrum management, interference, and transmission systems, with a focus on receivers. FCC and NTIA documents included presentations from FCC’s March 2012 workshop on Spectrum Efficiency and Receiver Performance and NTIA’s Receiver Spectrum Standards Phase 1–Summary of Research into Existing Standards, as well as various notices, orders, advisory committee reports, and other publications. We interviewed FCC and NTIA officials to learn about actions the spectrum management agencies have taken to improve receiver performance, with a focus on interference between adjacent spectrum users, and reviewed reports and workshops held by the agencies on the topic of receiver performance. 
We also interviewed members of NTIA’s Interdepartment Radio Advisory Committee (IRAC), which consists of representatives from federal departments and agencies, and officials from the National Oceanic and Atmospheric Administration (NOAA) to learn about federal spectrum users’ experiences with adjacent- band interference and actions to improve receiver performance to increase spectrum efficiency. We also interviewed a variety of other stakeholders outside the federal government—specifically industry associations, commercial licensees and manufacturers, and academics and representatives from research organizations—to learn about receiver performance and spectrum efficiency. We selected industry associations to cover a variety of spectrum uses and provide the perspectives of both licensees and manufacturers. We selected commercial licensees and manufacturers to ensure variety in frequency and application and to correspond to interference cases involving receiver concerns. We selected academics and representatives from research organizations based on participation in workshops on receiver performance, service on FCC or NTIA advisory committees, and recommendations from other interviewees, among other criteria. In addition, we interviewed officials from spectrum management agencies in Canada and the United Kingdom to learn about steps taken in those countries to improve receiver performance to increase spectrum efficiency. Across these interviews, we discussed the advantages and disadvantages of improving receiver performance to increase spectrum efficiency; actions taken by the commercial licensees, manufacturers, and the federal government to improve receiver performance; and options for additional action to improve receiver performance. Table 2 provides a complete list of those we interviewed for this report. The information and perspectives that we obtained from the interviews may not be generalized to all industry stakeholders that have an interest in spectrum efficiency and receiver performance. Rather, comments and views are reviewed in context with current literature on this topic. We also analyzed literature on receiver performance, standards, and interference from academic journals as well as workshop proceedings and reports on receiver performance and interference conducted by FCC, NTIA, and organizations like the Silicon Flatirons Center, Brookings Institution, and Aspen Institute. To identify relevant literature, we performed a search of papers and studies (from 2002 through 2012) from major electronic databases, such as ProQuest and SciSearch. We included studies that focused on interference between adjacent spectrum users and spectrum efficiency. We only included papers from scholarly peer-reviewed journals, government reports, conference papers, and other working papers. We also conducted a targeted search for proceedings from conferences that occurred between 2008 and 2012 that covered receiver performance, spectrum, and interference. We reviewed the relevant papers, studies, and conference proceedings to identify options to improve receiver performance as well as the advantages and disadvantages of taking action to improve receiver performance. We also studied a judgmental sample of cases where receiver performance played a role in cases of potential or realized interference between adjacent spectrum users, to better understand the actions of commercial licensees, manufacturers, and the federal government as well as the trade-offs of solutions sought in each case. 
We selected cases to ensure variation in application or use (e.g., cellular telephone, navigation), federal and nonfederal users, and existence of receiver standards, among other characteristics. To compile a list of possible cases, we reviewed FCC and NTIA reports, including the FCC’s Technological Advisory Council white paper on spectrum efficiency metrics that included an appendix on receiver performance and NTIA published reports on interference involving federal users, and discussed potential cases with officials from FCC and NTIA. We selected two cases that primarily involved nonfederal spectrum users: (1) interference between cellular and public safety services in the 800 MHz band, and (2) potential interference between satellite radio and wireless communication in the 2.3 GHz band. We sought to select a third case of interference involving federal spectrum users of radar systems. However, since our aim was to select cases that involved receiver performance problems, and since NTIA officials told us that many cases of interference involving federal spectrum users are resolved without substantial NTIA involvement, we did not select a particular interference case. Rather, we chose to study radar systems more generally. For each case, we analyzed relevant rulemaking proceedings, reports, and other documents to describe the types and extent of actions taken by stakeholders to limit interference and increase spectrum efficiency. In addition, we interviewed select licensees, industry associations, and manufacturers to better understand the circumstances of each case, actions taken to address interference, and the trade-offs of these actions. WCS/SDARS: In the 2.3 GHz band, interference concerns arose between two different services—terrestrial-based Wireless Communications Service (WCS) and chiefly satellite-based Satellite Digital Audio Radio Service (SDARS, or satellite radio)—allocated to adjacent parts of the bands with no guard bands separating the services. WCS allows a range of terrestrial-based services, including fixed, mobile, portable, and radiolocation services, though initial rules for the band made mobile service impractical. SDARS is primarily a satellite-based service in which programming is sent directly from satellites to receivers that are either at a fixed location or in motion; however, SDARS licensees deployed terrestrial repeaters to address situations where skyscrapers and other impediments prevented a line-of-sight connection between satellites and receivers. In general, SDARS receivers must be highly sensitive to receive weak, satellite signals. Moreover, SDARS deployed before WCS, so SDARS receivers did not have to tolerate high-powered mobile operations in the adjacent WCS bands. SDARS licensees raised concerns over interference from a request to change rules to enable mobile services in the WCS spectrum, while WCS licensees and others raised concern over potential interference from the SDARS terrestrial repeaters. FCC issued several notices of proposed rulemaking for both services since their allocation in the mid 1990s. FCC also encouraged the parties to negotiate an agreement to facilitate the adoption of rules for both services that would resolve the potential for interference. However, negotiations did not lead to a private resolution, and in 2007, FCC opened a new proceeding to update the record on the potential interference. This culminated in a 2010 Report and Order and Second Report and Order that adopted rules addressing the potential interference. 
FCC adopted rules that, among other things, effectively created 5 MHz guard bands on each side of the SDARS spectrum to protect legacy SDARS receivers; the rules did not include any requirements specific to receiver performance. In 2012, FCC responded to petitions for reconsideration of that decision with a new order that adopted an agreement reached by AT&T and Sirius XM, which are now the primary license holders in the band.

800 MHz band: Interference occurred between adjacent spectrum users in the 800 MHz band primarily because of the use of two different types of communications systems on interleaved channels. The two types of communications systems were (1) cellular-architecture multi-cell systems used by cellular telephone and enhanced specialized mobile radio licensees and (2) high site systems used by public safety as well as private wireless and non-cellular specialized mobile radio licensees. Beyond the difference in systems, two other factors exacerbated the potential for interference: the close proximity of frequencies used by the commercial and public safety users and the insufficient selectivity of many land mobile radios used by public safety users. Interference from cellular systems to public safety systems began to occur in the late 1990s as cellular licensees began deploying multi-cell systems. Interference occurred even though all the licensees were operating in compliance with FCC rules for transmitters. After interference problems emerged, FCC brought together the relevant parties and encouraged them to develop more definitive information on the scope and severity of the problem and to recommend steps to mitigate interference. As a result, a working group created a Best Practices Guide to help prevent and mitigate interference in the band. Seeking a more comprehensive solution to the interference problem, Nextel—the primary license holder in the 800 MHz band—proposed reconfiguring the band to separate the different systems and then worked with many public safety and other industry associations to create an updated reconfiguration plan. In addition, the public safety community and equipment manufacturers recommended receiver performance specifications going forward that would serve as criteria for qualifying for interference protection. FCC incorporated these recommendations in the rules it adopted for the 800 MHz band in 2004, which reconfigured the band and defined a technical standard for determining whether a public safety or other non-cellular licensee is entitled to interference protection.

In addition to the contact person named above, Michael Clements, Assistant Director; Nabajyoti Barkakati; Stephen Brown; Leia Dickerson; Sharon Dyer; Ryan Eisner; David Goldstein; Richard Hung; Bert Japikse; Joanie Lofgren; Maren McAvoy; Joshua Ormond; Amy Rosewarne; Hai Tran; Elizabeth Wood; and Nancy Zearfoss made key contributions to this report.

The growth of commercial wireless broadband services and government missions, including public safety and defense, has increased demand for radio-frequency spectrum. FCC and NTIA attempt to meet this demand while protecting existing users from harmful interference that can arise as new services and users come on line. To manage harmful interference, FCC and NTIA have historically focused on transmitters—the equipment that emits signals. But receivers also play a role.
Congress and others are considering whether further action to improve receiver performance to reduce harmful interference could help enhance spectrum efficiency and meet the growing demand for spectrum. The Middle Class Tax Relief and Job Creation Act of 2012 directed GAO to study spectrum efficiency and receiver performance; GAO studied four areas related to improving receiver performance, including (1) actions taken by manufacturers and commercial licensees, (2) actions taken by the federal government, (3) challenges, and (4) options identified by stakeholders. GAO reviewed federal regulations and reports prepared by FCC, NTIA, industry stakeholders, and other researchers, and interviewed spectrum users, industry associations, and other stakeholders.

Manufacturers and commercial licensees have taken a variety of actions to improve receiver performance. For some services, industry associations, which comprise manufacturers, commercial licensees, and others, have developed voluntary standards that are often used to design and procure receivers, such as those in cell phones and televisions, and to help improve receiver performance. Stakeholders also reported that privately negotiating to resolve interference problems and sharing information have helped improve receiver performance.

The federal government has used standards and taken other actions to improve receiver performance. Some federal spectrum users, like the Coast Guard and Department of Transportation, have specified or mandated use of industry standards for receivers used with certain agency spectrum-based services. The National Telecommunications and Information Administration (NTIA), which manages the federal government's use of spectrum, has also mandated receiver standards for many federal spectrum assignments, such as those for land mobile radios used by emergency responders and radar systems. The Federal Communications Commission (FCC), which manages commercial and other nonfederal spectrum use, believes it lacks general authority to impose receiver standards and instead relies on the marketplace to improve receiver performance. In specific cases, FCC has provided incentives for nonfederal spectrum users to improve receivers. Both NTIA and FCC have taken additional actions to improve receiver performance, like undertaking studies and hosting public workshops.

Although industry and government have taken actions, stakeholders identified three challenges to improving receiver performance: (1) a lack of coordination across industries when developing voluntary standards, since standards are often developed for a single industry and not coordinated with those using adjacent spectrum; (2) a lack of incentives for manufacturers or spectrum users to incur the costs associated with using more robust receivers, since the benefits of improved receiver performance, namely freed-up spectrum for new services and users, often accrue to others rather than to those incurring the costs; and (3) difficulty accommodating a changing spectrum environment, since when spectrum is repurposed for a new use, upgrading or replacing receivers currently in use to mitigate interference can be difficult and take considerable time.

In addition to greater use of voluntary industry standards, stakeholders GAO interviewed identified several other options to improve receiver performance. For example, interference limits would explicitly set a level of interfering signals that a receiver must tolerate before a user could seek government action to resolve interference problems.
Each option entails trade-offs, and many stakeholders noted that a one-size-fits-all solution is likely not desirable or possible. Further, some options, such as interference limits, have not been implemented, and others, such as mandatory standards, have only been implemented for a limited number of users, primarily federal users. Therefore, the practical effects of these options--that is, what would happen if these options were individually or collectively implemented--are not well known, particularly for nonfederal users. FCC should consider collecting information on the practical effects of options to improve receiver performance. FCC replied that it had initiated such a fact-gathering process; GAO believes FCC's process to date may not provide information on the practical effects of these options. |
Admitted insurers can be licensed to sell several lines or types of coverage to individuals or families, including personal lines—e.g., homeowners, renters, and automobile insurance—and commercial lines—e.g., general liability, commercial property, and product liability insurance. Admitted insurers can sell insurance in one or more states but must be licensed to operate in every state in which they sell coverage. Their activities are regulated primarily by the states, which make and enforce their own laws and regulations. State regulators license agents, review insurance products and premium rates, and examine insurers’ financial solvency and market conduct. To help ensure adequacy and fairness in pricing and coverage, state regulators oversee the insurance rates and forms of admitted insurers. Admitted P/C insurers have two primary sources of revenue: premiums from selling insurance and investment income. They collected $293.9 billion in premiums in 1998 and $523.9 billion in 2012. Compared to the liabilities held by life insurers, their liabilities tend to be fairly short-term, so they generally invest in a mix of relatively low-risk, liquid, conservative instruments, such as government and municipal bonds, higher-grade corporate bonds, short-term securities, and cash. State regulators require admitted insurance companies to maintain specific levels of capital in order to continue to conduct business. To help ensure that policyholders continue to receive coverage if their insurer becomes insolvent or unable to meet its liabilities, states have guaranty funds (separate for life and P/C insurance), which are funded by assessments on insurers doing business in their state. All states have laws that permit surplus lines insurers to sell coverage in their states, but do not cover their policies with state guaranty funds (which are funded by assessments on admitted insurers doing business in their state), leaving policyholders unprotected if their surplus lines insurer becomes insolvent. P/C insurers provide coverage for numerous property and casualty risks, but there are some risks that they will not cover—for instance, risks that are difficult to assess, occur too frequently to be acceptable to admitted insurers, are specialized or unusual, or require coverage that exceeds the capacity of admitted carriers. In these cases, potential insureds may turn to the nonadmitted market. This market functions as a safety valve, offering insurance products for risks that the admitted market will not cover. Among the nonadmitted insurers are surplus lines insurers. These insurers provide coverage for general, management, and professional liabilities and commercial automobile, environmental, and property risks, among other things, and tailor their products to meet the needs of the insured. For example, they may write policies to cover a research laboratory working on an unproven drug, special sporting or other events, or liabilities arising from environmental impairment. They may also write policies on new and innovative products that have little loss history— something admitted insurers would typically require in order to adequately price the risk. Some of the carriers also write coverage for personal lines, such as homeowners insurance in catastrophe-prone areas. Figure 1 shows that over the 5-year period from 2008 through 2012, these insurers provided relatively larger amounts of coverage for commercial lines and property liabilities, with lesser amounts of coverage for reinsurance and personal lines. 
There was also a sizeable amount of coverage provided by companies writing less than $1 million in premiums where the liabilities covered were not clearly identified. Surplus lines insurers have more flexibility with respect to the terms and pricing of the coverage offered because, according to NAIC, unlike admitted insurers, they do not need state approval to modify policy exclusions and coverages. This flexibility to tailor the coverage provided and the price at which they provide it enables nonadmitted insurers to manage unique or large risks.

Surplus lines insurers can be domiciled—that is, headquartered—either in the United States or abroad. Those domiciled in the U.S. and licensed in at least one U.S. state are known as "foreign" insurers in states other than their state of domicile, while those domiciled in a foreign country are known as "alien" insurers. A foreign surplus lines insurer is regulated as an admitted insurer in the state in which it is domiciled, and that state is its financial solvency regulator. Alien insurers wishing to enter the U.S. surplus lines market must apply to NAIC's International Insurers Department (IID), which reviews their financial condition as part of the alien insurer application process, for inclusion in the Quarterly Listing of Alien Insurers. These insurers then become eligible surplus lines insurers in all states. Alien insurers may also apply to an individual state to become authorized only in that jurisdiction. Table 3 in appendix II lists the alien insurers approved by NAIC from 2008 through 2012, and table 4 shows their location—mostly in England and Bermuda.

Surplus lines insurers must sell their insurance through a broker. These brokers are licensed retail agents that represent insurance buyers and place coverage with surplus lines insurers. They are responsible for (1) selecting an eligible surplus lines insurer, (2) reporting the surplus lines transaction to insurance regulators, (3) remitting the premium tax due on the transaction to state tax authorities, and (4) assuring compliance with all the requirements of the surplus lines state regulations. To place coverage in the surplus lines market, brokers must follow state due diligence requirements. Although they vary from state to state, according to an association representing surplus lines insurers and brokers, these requirements generally call for brokers to establish that three admitted companies licensed to write the kind and type of insurance requested have declined to provide it before turning to a surplus lines insurer.

Historically, brokers and the surplus lines insurers have had to comply with a multitude of state laws and regulations dealing with taxation, oversight, and market access. For example, brokers have had to comply with differing rules across states for things like providing policyholders with important notices and determining which insurers were eligible for placing business. Brokers also have had to remit taxes on all surplus lines insurance transactions. Historically, the tax revenue from surplus lines transactions that took place outside the insurer's state of domicile had to be allocated among all states where parts of the risks were located. NRRA, which generally took effect on July 21, 2011, contains provisions that are intended to address the various regulatory concerns discussed earlier and, in turn, streamline the brokers' placement of insurance coverage in the surplus lines market.
The act creates a "home-state" system of taxation and regulation to resolve difficulties in determining how to tax and regulate surplus lines transactions. NRRA defines the home state as the state in which the insured maintains its principal place of business, or in the case of an individual, the individual's principal residence. If all the insured risk is outside this state, NRRA defines the home state as the state to which the greatest percentage of taxable premium for that insurance contract is allocated. Under NRRA, the home state has sole and exclusive authority to tax and regulate a sale of surplus lines insurance. In addition, only the insured's home state may require a surplus lines broker to be licensed, even for transactions that involve multiple states. NRRA also permits but does not require states to form an interstate compact or otherwise establish procedures to allocate premium taxes collected by an insured's home state for multistate transactions. These compacts, if instituted, are intended to help states adopt nationwide uniform requirements, forms, and procedures for reporting, paying, collecting, and allocating premium taxes. NRRA further seeks to simplify and make more uniform the criteria states use to determine which insurers are eligible to sell insurance on a surplus lines basis. Specifically, NRRA states that a state may not impose eligibility requirements on or establish eligibility criteria for surplus lines insurers domiciled in the United States that are authorized to write policies in their state of domicile. The act also states that the insurer generally must maintain minimum capital and surplus levels as required by NAIC's Non-Admitted Insurance Model Act, which is the greater of $15 million or the minimum capital and surplus requirement of the insured's home state; in certain situations, a state's insurance commissioner may find an insurer with less than the minimum capital and surplus acceptable. This model act, among other things, is intended to protect persons seeking insurance, assure such insurance is placed only with reputable and financially sound insurers, and establish state regulation of surplus lines insurance. NRRA also specifies that a state may not prohibit a broker from placing surplus lines insurance with an alien insurer listed on NAIC's Quarterly Listing of Alien Insurers. Under NRRA, Congress delegated to NAIC responsibility for reviewing and approving the financial information of alien surplus lines insurers. Finally, NRRA contains a provision specifying that, after July 21, 2012, a state may not collect any fees for licensing a surplus lines broker unless it participates in NAIC's national producer database or an equivalent.

Although NAIC does not regulate admitted or surplus lines insurers, according to NAIC officials it does provide services designed to make certain interactions between insurers and regulators more efficient. According to NAIC, these services include providing detailed insurance data to help regulators analyze insurance sales and practices; maintaining a range of databases useful to regulators; and coordinating regulatory efforts by providing guidance, model laws and regulations, and information-sharing tools.

From 2008 to 2012, premiums written by surplus lines insurers grew by 1.6 percent (from $24.8 billion to $25.2 billion) and remained around 5 percent of the property casualty market as a whole.
Over this time, even though insurers’ claims and underwriting expenses generally exceeded premiums, their net investment income allowed them to remain profitable. As a result, surplus lines insurers also saw growth in capital holdings. From 2008 to 2012, premiums written by the population of surplus lines insurers that we examined rose modestly, mirroring trends in the overall P/C market. While the total amount of premiums written (the sum of premiums written by the surplus insurers on an admitted basis in their states of domicile and on a surplus lines basis) declined following the 2008 financial crisis in both markets, it began increasing in 2011 and continued to rise through 2012. In the surplus lines market, premiums written increased about 3.2 percent in 2011, rising from about $22.2 billion in 2010 to $22.9 billion in 2011, and rose another 10.0 percent to $25.2 billion in 2012. Premiums written in the overall P/C market rose from $498.6 billion in 2008 to $523.4 billion in 2012. Surplus lines premiums, which account for a small fraction of this overall P/C market, declined slightly as a share of total P/C premiums in 2009 and 2010 but began rising in 2011. In 2012, they accounted for 4.8 percent of the overall P/C market (see fig. 2). According to market analyses and surplus lines insurers and representatives of industry we interviewed, surplus lines premiums can fall for several reasons, including an overall slowdown in the economy and a “soft” insurance market, which tends to occur after losses have been relatively low. Economic slowdowns or downturns, such as the one during the 2007-2009 financial crisis, can result in fewer new business ventures of the type that would use surplus lines insurance. And in a softening insurance market, generally after losses have been relatively low, admitted insurers are often willing to write higher-risk coverage for lower prices—coverage that might at another time be written by surplus lines insurers. In contrast, during a “hard” insurance market, generally after losses have been higher, admitted insurers are often less willing to write high-risk coverage, and surplus lines insurers are more likely to step in to write such coverage. With the U.S. economy in recovery, pricing evidence suggests that since 2010 insurance markets have likely been hardening, creating a flow of business from the admitted market into the surplus lines market. Consistent with this trend, insurers that sell surplus lines coverage saw the proportion of their total premiums from surplus lines transactions increase slightly in 2011 and 2012, rising from 63.9 percent to 65.3 percent. Most of the largest surplus lines insurers sell surplus lines coverage almost exclusively, while smaller insurers that sell this coverage write most of their premiums in the admitted market. Because U.S. surplus line insurers are licensed as admitted insurers in their states of domicile, they may sell some coverage on an admitted basis (see table 5 in app. III). Of the top 25 surplus lines insurers as measured by total premiums written (both admitted and surplus lines), 20 wrote over 98 percent of their premiums as surplus lines in 2012, on average. Only three wrote less than 95 percent. By comparison, the remaining smaller surplus lines insurers received on average 38.9 percent of their premiums from the surplus lines business during this period. Since 2008, the list of top 25 surplus lines insurers has remained largely unchanged, and these insurers have sold the bulk of surplus lines coverage. 
Most of the top 25 insurers from 2012 were also among the top insurers from 2008 through 2011 and in 2012 wrote about $16.9 billion in surplus lines premiums, or about 66.9 percent of the market (see fig. 2). The largest five surplus lines insurers each wrote over $1 billion in premiums, with the largest U.S.-based surplus lines insurer, Lexington Insurance, writing $4.2 billion in premiums in 2012. Lloyd's of London, which is licensed as an admitted carrier in Illinois, Kentucky, and the U.S. Virgin Islands, is the largest surplus lines insurer and wrote $6.3 billion in surplus lines premiums in 2012, or about 25 percent of the total 2012 premiums written for all U.S.-based surplus lines insurers. Lloyd's surplus lines premiums are written primarily to cover property exposures, and in 2012 its share of the surplus lines market premium was largest in Texas (30 percent) and South Carolina (29 percent) and was at least 20 percent each in Florida, Georgia, Louisiana, Massachusetts, and North Carolina. Table 6 in appendix III includes detailed information about premiums written by the top 25 surplus lines insurers. A majority of surplus lines premiums were written in a small number of states. For example, from 2010 through 2012 just under half of the surplus lines premiums written in the United States were written in 4 states, and almost two-thirds of the premiums were written in 10 states (see table 7 in app. III). For each of these years, the top four states with the most surplus lines premiums written were, in order, California, Florida, Texas, and New York—the four states with the most P/C premiums written in the admitted market as well.

An insurer's net income, which takes into account all aspects of a company's operations, including premiums written, losses, other costs, and investment income, is a measure of profitability. As shown in figure 3, the net income for P/C and surplus lines insurers increased overall from 2008 to 2012. Net income for surplus lines insurers more than doubled over the period, from $4.3 billion to $9.5 billion, but for P/C insurers it increased nine-fold, from $3.7 billion to $37.3 billion, which is below the pre-crisis highs of $66.4 billion and $63.6 billion in 2006 and 2007, respectively, but has generally been trending upward. Income from investments is important for surplus lines insurers because it can allow them to operate at an overall profit despite unprofitable underwriting operations. As we discussed earlier in this report, investments of the broader P/C industry, which include surplus lines insurers, were negatively affected by the 2008 financial crisis, and net investment income decreased slightly for the P/C industry from 2008 through 2012, although it increased for the surplus lines insurers we examined (see table 1). From 2008 through 2012, the surplus lines insurers that we examined saw their invested assets increase from $152.7 billion to $233.1 billion. These assets were held in a broad range of instruments, but the bulk was in bonds. An increasing percentage was invested in industrial bonds and decreasing percentages in government and other bonds. These insurers also held more in cash and other short-term investments from 2009 through 2012, but less so than in 2008. They have also tended to increase their investments in common stock, but that amount remains below 7 percent of total cash and investments (see table 8 in app. III).
Our analysis of data on net investment income earned (income earned from all forms of investments) shows that from 2008 to 2012 net investment income increased for surplus lines insurers and decreased slightly for the P/C market (see table 1). For surplus lines, this net investment income rose from $6.1 billion to $10.22 billion, while for P/C insurers it started the period at $53.1 billion and finished at $50.1 billion. Over this 5-year period, surplus lines and P/C insurers saw their capital gains increase each year until 2011, when they dropped, and then increase again in 2012. Surplus lines insurers had capital losses of $13.1 billion in 2008; their capital gains then rose to $9.6 billion in 2010 before dropping to $0.8 billion in 2011 because of a significant decrease in unrealized capital gains. Capital gains increased to $8.7 billion in 2012 as unrealized capital gains rebounded. P/C insurers saw capital losses of $78.3 billion in 2008, but capital gains of $23.7 billion in 2012. According to an A.M. Best report, in 2008 and 2009, the P/C industry's assets were strained by write-downs of devalued assets, mortgage-backed securities, and other related assets, and the changes in 2010 and 2011 for both the P/C industry and surplus lines insurers are largely attributable to their increased investment in relatively short-term fixed income securities that carry low interest rates.

From 2008 through 2012, surplus lines insurers generally made money on their underwriting operations. An insurer's underwriting operations generally consist of the premiums it collects and the expenses it incurs related to insurance activities. Expenses can generally be divided into two categories: (1) underwriting expenses and (2) loss and loss adjustment expenses. Underwriting expenses are those incurred by an insurer as it obtains business, such as brokers' commissions, employee incentive programs, and marketing. Loss and loss adjustment expenses are losses insurers incur as they investigate and settle claims, including actual claim payments as well as legal fees. Expressed as percentages of premiums written, these two expense categories are referred to as the "expense" and "loss" ratios, respectively. An insurer's underwriting profitability can be measured by summing the loss and expense ratios. This sum, measured relative to premiums written, is referred to as the "combined" ratio, where a value of less than 100 percent indicates that an insurer's underwriting is profitable, and a value of more than 100 percent indicates an underwriting loss. The expense ratio for the surplus lines industry as a whole was greater than the ratio for the broader P/C market (see fig. 4). In part, this could reflect the higher costs associated with assessing and pricing the more complex risks often covered by surplus lines insurers. The expense ratios for the surplus lines industry increased slightly over this period, rising from 28.2 percent in 2008 to 29.4 percent in 2012. Since 2008, surplus lines insurers' loss ratios have increased but remained below those of the broader P/C market (see fig. 4). In 2012, loss and loss adjustment expenses for the surplus lines insurers in our analysis were 70.6 percent of premiums written. According to an A.M. Best report, the loss and loss adjustment cost percentage increased from 2010 to 2011 for both admitted and surplus lines insurers because of large losses from weather-related catastrophes and a weak economy. In 2012, surplus lines insurers experienced losses because of Superstorm Sandy.
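To illustrate how the combined ratio is computed, the following sketch uses hypothetical dollar amounts, not figures from our analysis, to derive the loss, expense, and combined ratios and to check whether underwriting is profitable.

```python
# Hypothetical figures for illustration only; not data from this report.
premiums_written = 100.0        # premiums written, in billions of dollars
loss_and_lae = 70.6             # losses and loss adjustment expenses
underwriting_expenses = 29.3    # commissions, marketing, and other acquisition costs

loss_ratio = loss_and_lae / premiums_written * 100              # 70.6 percent
expense_ratio = underwriting_expenses / premiums_written * 100  # 29.3 percent
combined_ratio = loss_ratio + expense_ratio                     # 99.9 percent

# A combined ratio below 100 percent indicates an underwriting profit.
print(f"Combined ratio: {combined_ratio:.2f} percent")
print("Underwriting profitable" if combined_ratio < 100 else "Underwriting loss")
```

In this hypothetical case, the combined ratio of 99.9 percent falls just under the 100 percent threshold, so underwriting would be marginally profitable even though losses and expenses together consume nearly all of the premiums collected.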
Figure 4 shows that from 2008 to 2012, the overall underwriting activity of surplus lines insurers in our analysis was profitable every year except in 2011 (the industry was profitable in 2012 because the unrounded combined ratio for 2012 was 99.94 percent). In contrast, the 25 largest surplus lines insurers had underwriting profits in 2008 and 2009, with an average combined ratio of less than 100 percent in each of those years, but showed losses for the remaining 3 years. An A.M. Best report noted that the competitive insurance marketplace and the recent recession combined to weaken surplus lines insurers' underwriting performance in 2010 and 2011 and led to overall underwriting losses in the P/C market. According to the Insurance Journal, catastrophic losses from Superstorm Sandy in 2012, especially in New York and New Jersey, drove up surplus lines insurers' loss and combined ratios for 2012 to the point where they exceeded those of the total P/C market. Surplus lines insurers' capital has increased since 2008, strengthening their financial position. A key measure of their capital is "policyholders' surplus." A higher level of policyholders' surplus means that an insurer has more capital available to pay claims. Figure 5 shows that the capital for surplus lines insurers has been increasing since 2008, similar to an increase also seen in the P/C market. Table 2 shows that the surplus level for these insurers rose from $71.3 billion in 2008 to $132.4 billion in 2012, and that the total policyholders' surplus for the top 25 surplus lines insurers rose from $15.6 billion to $17.6 billion over the same period. Surplus lines insurers have also experienced few impairments of their capital (see fig. 6). Impairments occur when an insurer's assets are too low in comparison to its liabilities. According to an A.M. Best report, accounting for the number of insurers in the industry, the surplus lines industry's average rate of impairment from 1977 to 2011 was 0.99 percent, slightly higher than the impairment rate of 0.90 percent for P/C insurers. A.M. Best added that from 1998 to 2003, impairments occurred because of the increased failure rate of insurers that relied heavily on underwriting done by managing general agents and because some parents of surplus lines subsidiaries became insolvent. A.M. Best added that surplus lines insurers have not experienced any impairments since 2004 because of better underwriting performance, improved financial market conditions, and more favorable pricing. Thus, the recent experience of surplus lines insurers compares favorably to that of admitted insurers, which experienced a higher percentage of impairments. Almost all states have begun to implement NRRA, which our review of the legislative history shows Congress passed in 2010 to make it easier for surplus lines insurers and brokers to conduct business across states. According to surplus lines insurers, brokers, and representatives of industry associations, the act has simplified the collection of premium taxes for multistate risks. Market participants we spoke with said that NRRA has had little, if any, effect on the prices or availability of coverage, as NRRA was not intended to affect these areas. The participants noted that any changes in price and availability would be due to the insurance cycle rather than NRRA. As noted, NRRA created a home state system for the regulation and taxation of surplus lines insurance.
According to our review of state insurance laws in all 50 states, as of December 2013, all states except Michigan have amended their laws to address this provision, which regulators and market participants we spoke with consider to be the key provision of NRRA. We found that the states had largely adopted a definition of the home state that closely matched the term as defined in NRRA—that is, the state in which the insured maintains its principal place of business or residence or, if all the risk resides outside that state, the state to which the greatest percentage of taxable premium is allocated for that insurance contract. According to officials of an association representing surplus lines insurers and brokers, the home state provision has produced significant benefits for the surplus lines industry by reducing the need for insurers to comply with differing sets of rules, disclosures, and requirements. As shown in figure 7, the act has also simplified the payment of premium taxes for transactions involving a multistate risk. Most market participants we interviewed agreed that the home state provision has brought needed clarity to the market. Under the home state system, as an option for collecting and allocating premium taxes, NRRA permitted but did not require states to form an interstate compact or otherwise establish procedures to allocate premium taxes paid to a home state. These compacts, if instituted, were intended to help states adopt nationwide uniform requirements, forms, and procedures for reporting, paying, collecting, and allocating premium taxes for multistate risks. To date, only a few states have chosen to participate in an interstate compact. Two compacts exist, but only one was operational as of November 2013. The operational compact, called the Nonadmitted Insurance Multi-State Agreement (NIMA), includes five participating states—Florida, Louisiana, South Dakota, Utah, and Wyoming—and Puerto Rico (fig. 8). The other compact is the Surplus Lines Insurance Multi-State Compliance Compact (SLIMPACT). It comprises nine states (Alabama, Indiana, Kansas, Kentucky, New Mexico, North Dakota, Rhode Island, Tennessee, and Vermont) but requires 10 states or 40 percent of surplus lines premiums written, whichever comes first, to take effect. Because it has not reached this minimum threshold, SLIMPACT is not currently operational. Rather than joining an interstate compact, most states have opted to levy (at their own state tax rate) and then retain 100 percent of the premium tax when they are the home state of the insured (see fig. 8). Although data were unavailable to analyze the net effect on taxes of states adopting this approach, representatives from two industry associations we spoke with have expressed support for it because of its simplicity and view it as most consistent with the intent of NRRA. Market participants, such as surplus lines insurers, and representatives of the industry associations we interviewed said that a state's decision about whether to join a compact depended largely on the amount of revenue involved. One analysis of the tax-sharing effect of NIMA, performed by an association representing surplus lines insurers and brokers, showed that a small amount of tax revenue, roughly $125,000 across the participating states, was eligible for sharing among NIMA's members. The largest states in terms of premium, such as California, New York, and Texas, are currently not members of an interstate compact.
Rather, these states tax and retain 100 percent of the premium when they are the home state of the insured, as discussed above. Market participants told us it is unlikely these large states would join an interstate compact because they stand to collect more revenue under their current approach. In addition, states apply a range of tax rates to both single and multistate risks when they are the home state of the insured (see table 9 in app. IV). For example, some states will use the tax rate in their own state, while others will use the applicable tax rate from the respective states where each portion of the multistate risk resides. In addition, according to NAIC, all states except Washington are now participating in its national insurance producer database for licensing surplus lines brokers, which NRRA made a prerequisite for collecting fees for licensing an individual or entity as a surplus lines broker. The database contains information on insurance producers' (agents' and brokers') names, addresses, states of license, and any regulatory actions taken, among other elements. According to NAIC, the database, which is updated daily, links state regulatory licensing systems into a single common system. NAIC officials told us that they are working with Washington on the state's implementation of surplus lines broker licensing, covering both initial licensing and renewals, and that the state will begin participating in the database in December 2013. Section 524 of NRRA establishes nationwide criteria for determining whether insurers are eligible to sell in states where they are not domiciled. In particular, the section discourages states from imposing eligibility requirements on these insurers if they are authorized to write coverage and meet minimum capital and surplus requirements in the insured's home state. However, a range of industry associations we spoke with, representing insurers, insurance agents, and brokers, among others, noted that some states were taking actions that the associations believed were inconsistent with Section 524. For example, the associations said that some states were requesting information—such as business plans, disaster plans, and policy forms—that the associations believed was not relevant to confirming the criteria set forth in Section 524. They said that some states were also asking for fees or other charges that are not specified in Section 524. According to the associations, in some cases, states are applying different standards for single and multistate risks, a particular issue for alien insurers. For example, according to the associations, one state may consider an alien insurer listed on NAIC's Quarterly Listing of Alien Insurers as eligible to sell coverage if the covered risk resides in more than one state; however, if the policy is a single state policy, then the same alien insurer must meet state-specific eligibility standards to write the policy. The associations said that these actions place an additional administrative burden on surplus lines insurers and result in a lack of consistency across states, which NRRA was intended to address. To reduce requests for additional information by state insurance regulators, NAIC formed a subgroup, which in August 2013 developed options for improving access to financial information by regulators, brokers, and the insured.
These options, which received input from the surplus lines industry, include the following: (1) NAIC will provide quarterly and annual summaries of insurer financial data (surplus lines total direct premium written and policyholders' surplus) to help states confirm an insurer's financial health, (2) states will make greater use of NAIC's automated systems to verify insurer financial data, and (3) states will provide a link to NAIC's Quarterly Listing of Alien Insurers on their insurance department websites. NAIC officials said that they will continue to encourage states to implement these suggestions and that in 2014 the association may consider a survey to gauge state implementation. According to surplus lines insurers and brokers and representatives of the industry associations that we contacted, NRRA has had little, if any, effect on the price and availability of coverage in the admitted and surplus lines markets. Market participants noted that the changes under NRRA were intended to streamline and simplify the taxation and regulation of surplus lines insurance, and these changes would not likely affect the price or availability of coverage. Market participants noted that the insurance cycle of soft and hard markets (falling and rising premiums, respectively), as discussed in greater detail later, was the primary factor affecting the price and availability of coverage. They added that in their view it would be difficult to determine the precise effect, if any, of NRRA because it generally would be difficult to separate the effects of laws and regulations from the normal functioning of the insurance cycle. While surplus lines insurers generally write tailored products, according to a market participant, in some instances the insured (individuals or groups covered by an insurance policy) may move from the surplus lines to the admitted market. Insureds may do so because an admitted insurer changed its underwriting and pricing standards to accept risks that it previously did not accept. For example, an admitted insurer that did not cover commercial apartment risks may decide to do so. According to a study of surplus lines insurance, whether this strategy succeeds depends on the admitted insurer's underwriting and pricing expertise and ability to comply with the state's form and rate filing regulations. The study noted that insureds may also migrate from one market to another because of the insurance cycle, a concept noted by many of the market participants that we interviewed. As we have discussed, in a soft market, admitted insurers may lower their acceptability standards, broaden their coverage, and decrease prices to retain customers and attract new ones. In a hard market, by contrast, admitted insurers may raise their acceptability standards, restrict coverage, and raise prices, so insureds may not find the coverage they want in the admitted market and thus move to the surplus lines market. According to market participants, any changes in coverage would be due to the insurance cycle rather than NRRA, and there has been little noticeable shifting in coverage between the admitted and surplus lines markets as a result of NRRA. We provided a draft of this report to NAIC for review and comment. NAIC provided technical comments, which we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. The objectives of this report were to (1) describe the size and condition of the surplus lines insurance market and (2) examine actions states have taken to implement the Nonadmitted and Reinsurance Reform Act's (NRRA) provisions and the effects of the act, if any, on the price and availability of surplus lines coverage. To describe the size and condition of the surplus lines market, we obtained and analyzed, but did not independently verify, end-of-year financial and corporate data, from 2008 through 2013, on 199 surplus lines insurers from the following sources: SNL Financial, A.M. Best, the National Association of Professional Surplus Lines Offices (NAPSLO), and the National Association of Insurance Commissioners (NAIC). We identified surplus lines insurers as those that NAIC recognized as eligible or approved to sell surplus lines insurance in calendar year 2012 and actually did sell it in that year. Specifically, we examined lists of eligible surplus lines insurers from state department of insurance websites, state surplus lines association websites, and NAIC's website. Several states published lists of insurers that are permitted to write surplus lines coverage in their state. The lists provided various types of information on each insurer but were not consistent in all cases. To address this inconsistency, we cross-checked the lists with insurer profiles from NAIC and SNL Financial and resolved any discrepancies. We did not include risk retention groups, since the nonadmitted insurer provisions of NRRA specifically omitted these groups. Using the corporate and financial data described above, we analyzed data on these surplus lines insurers' financial characteristics, such as their premiums written, expense ratios, and capital. We also examined the general focus of their activities, such as whether they are involved in commercial property, commercial general liability, personal lines, or commercial medical malpractice. We used the types of coverage identified in SNL Financial's "designated focus," which identifies the types of coverage or broad categories of business provided by the insurer. Specifically, if a company writes 60 percent or more of its overall P/C premiums in a specific category, it is given a corresponding SNL-designated focus. SNL uses the following methodology to assign a focus to a company: Companies with less than $1 million in net premium written are not given a line of business designated focus. Companies whose reinsurance business is greater than 64 percent (a figure determined by SNL Financial) of their premiums written are classified as reinsurers. Reinsurance companies with more than $1 billion in assets are designated as large reinsurers. All other insurers are designated as having a personal, commercial, or accident and health focus. A company is designated as having a commercial lines focus if it has very diversified operations within the commercial lines.
Otherwise, SNL designates a company as focusing on one of the following commercial lines: commercial property (auto, multiperil, fire and allied, and inland and ocean marine), commercial medical malpractice, commercial workers' compensation, commercial financial lines (financial lines such as fidelity, surety, and mortgage guaranty), and commercial general liability (other and general liability). (These classification rules are restated in a simplified sketch below.) To provide a comparison of the surplus lines market to the admitted market, we drew upon a recent report we issued on the insurance markets and the impacts of and responses to the 2007-2009 financial crisis. This recent report examined, among other things, admitted insurers' investments, underwriting performance, and premium revenues. To assess issues related to data reliability for the current report, we reviewed related documentation, conducted interviews with knowledgeable officials, and reviewed previous related reliability assessments, and we determined that no material changes had been made in how the data providers collected and tabulated the data. We determined that the data were sufficiently reliable for the purpose of describing the size and condition of the surplus lines insurance market for this review. To examine actions taken by states to implement NRRA and the effect of the act, if any, on the price and availability of coverage, we reviewed and analyzed the insurance laws of the 50 states and the District of Columbia, focusing on each state's implementation of the "home state" provision of NRRA. We focused on this provision because regulators and market participants consider it to be the key provision of NRRA. To supplement our analysis, we consulted a law firm's manual on the surplus lines laws in the United States. In addition, we interviewed staff from NAIC and the Federal Insurance Office within the U.S. Department of the Treasury. We also interviewed officials from two state departments of insurance (Delaware and Illinois) because these states have a large number of surplus lines insurers domiciled in their states. We interviewed officials from two state surplus lines associations (New York and California) for the same reason. Finally, we contacted relevant associations of the insurance industry, including NAPSLO, the Property Casualty Insurers Association of America, the American Insurance Association, the Council of Insurance Agents and Brokers, the American Academy of Actuaries, and the American Association of Managing General Agents. We selected these associations for interviews because of their role in the surplus lines market; in representing the interests of insurers, underwriters, brokers, and others; or for their expertise. We also interviewed four insurer parent companies, which we selected based on their large amounts of surplus lines premiums written and their having subsidiaries that operate in both the admitted and surplus lines markets. To obtain the perspectives of those who serve as intermediaries between insurers and the insured, we interviewed two large brokers. In addition, we reviewed documents from NAIC, including a compendium of state laws on surplus lines, a sample bulletin for states' implementation of NRRA, and records from recent national meetings. We also reviewed related industry reports and analyses of the surplus lines market.
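The sketch below restates SNL Financial's designated-focus rules described above as a simple classification function. It is our own simplified illustration: the function name, field names, and the handling of diversified commercial writers are assumptions made for readability, not SNL's actual methodology or data structures.

    # Simplified restatement of the SNL designated-focus rules described above.
    # Function and field names are illustrative; SNL's actual methodology may differ.

    def snl_designated_focus(net_premium_written, total_assets, reinsurance_share, category_shares):
        """Return an illustrative designated focus for a company.

        net_premium_written and total_assets are in dollars; reinsurance_share and the
        values in category_shares are shares of overall P/C premiums written (0 to 1).
        """
        if net_premium_written < 1_000_000:
            return "no designated focus"       # under $1 million in net premium written
        if reinsurance_share > 0.64:           # SNL's reinsurance threshold
            return "large reinsurer" if total_assets > 1_000_000_000 else "reinsurer"
        # Other insurers receive a personal, commercial, or accident and health focus;
        # a category with 60 percent or more of overall P/C premiums determines the focus.
        category, share = max(category_shares.items(), key=lambda item: item[1])
        if share >= 0.60:
            return category
        return "diversified commercial lines"  # simplified stand-in for SNL's treatment

    # Example: a company writing 65 percent of its premiums in commercial general liability.
    print(snl_designated_focus(250_000_000, 900_000_000, 0.05,
                               {"commercial general liability": 0.65,
                                "commercial property": 0.35}))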
NRRA mandated that we determine and analyze the extent to which there has been a change in the number of individuals who have nonadmitted insurance policies, the type of coverage provided under such policies, and whether such coverage is available in the admitted insurance market. We solicited these data from industry associations, surplus lines insurers, and a large insurance broker, but were told that insurers do not track such information. Representatives of industry associations and a few insurers told us that coverage for individuals represents only a small segment of the market, as the market largely focuses on coverage for businesses. These representatives also said that any change in the number of individuals with coverage before and after NRRA is likely to be small. They added that any change in coverage would be due to the insurance cycle rather than NRRA. We conducted this performance audit from April 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Section 524 of the Nonadmitted and Reinsurance Reform Act of 2010 (NRRA), enacted as part of the Dodd-Frank Wall Street Reform and Consumer Protection Act (the Dodd-Frank Act), says that states may not prohibit a surplus lines broker from placing nonadmitted insurance with, or procuring nonadmitted insurance from, a nonadmitted insurer domiciled outside the United States (also called an alien insurer) that is listed in the National Association of Insurance Commissioners' (NAIC) International Insurers Department (IID) Quarterly Listing of Alien Insurers. This publication includes a list of alien insurers that have filed financial statements, copies of auditors' reports, the names of their U.S. attorneys or other representatives, and details of U.S. trust accounts with the NAIC IID and, based upon these documents and other information, appear to fulfill the criteria set forth in the IID's Plan of Operation for Listing of Alien Nonadmitted Insurers. According to NAIC, the listing provides brokers, exempt commercial purchasers, and insureds with assurance as to the eligibility of non-U.S. insurers with which excess and surplus lines insurance business is being quoted or placed. Alien insurers listed by NAIC must observe applicable state insurance laws and regulations, including those covering trust funds. NAIC cautions brokers that the appearance of an alien insurer's name on this list is not an endorsement by NAIC. In order to be considered for inclusion in the NAIC publication, an insurer must demonstrate that it meets IID's standards on capital and surplus, U.S. trust accounts, and integrity. First, a company must continually maintain enough capital and surplus to meet its obligations. IID set the minimum amount for April 2012 at $30 million and increased it to $45 million as of January 2013. IID also considers the size of the company, the type of business underwritten by the company, and trends in the company's capital and surplus. Second, the insurer must establish a U.S. trust account in a U.S. financial institution, consisting of cash deposited with the trustee, securities, or an acceptable letter of credit on behalf of U.S. policyholders.
IID provides applicants details on the terms and conditions of such accounts. Third, an insurer is to have an established reputation for financial integrity and satisfactory underwriting and claims practices. According to A.M. Best, as of August 2012, 34 states maintained a list of regulated aliens, 11 states used IID's "white list," and 12 states used IID's Quarterly Listing of Alien Insurers to qualify aliens as surplus lines insurers. From January 2010 through April 2013, 72 surplus lines insurers were listed in NAIC's Quarterly Listing of Alien Insurers (see table 3). This count combines the over 80 Lloyd's of London syndicates into one entry. Also over this time, several companies were added to the list while others were dropped, and some companies changed their names. The NAIC-listed alien insurers are domiciled in several countries, but half of them are domiciled in England (see table 4). Another 14 are domiciled in Bermuda. The rest are domiciled in other countries, most of which are Western European countries. Table 5 shows that the largest surplus lines insurers write most of their premiums in the surplus lines market, while smaller surplus lines insurers write most of their premiums in the admitted market. Table 6 provides detail on the surplus lines premiums written from 2008 through 2012 by the top 25 surplus lines carriers in each of those years. The carriers are listed in order of descending surplus lines premiums written for 2012. Surplus lines premiums are concentrated in a few states (see table 7). About half of the surplus lines premiums are written in 4 states, and just under two-thirds of the premiums are written in 10 states. From 2010 through 2012, the 10 states with the most surplus lines premiums written in their states accounted for 62.8 percent, 63.3 percent, and 63.6 percent of the total premiums in 2010, 2011, and 2012, respectively. From 2008 through 2012, the 199 surplus lines insurers that we examined had cash and invested assets that grew from $152.7 billion to $233.1 billion, held in several asset classes and invested mostly in bonds. An increasing percentage was invested in industrial bonds and decreasing percentages in government and other bonds. These insurers also invested somewhat heavily in cash and other short-term investments, but less so than in 2008. They have also tended to increase their investments in common stock, but that amount remains under 7 percent of total cash and investments (see table 8). Appendix IV: States' Premium Tax Rates on Surplus Lines Sales. (Table 9 lists each state's premium tax rate and notes whether the state is a member of an interstate compact, such as SLIMPACT, when it is the insured's home state.) The potential for business to shift between admitted and surplus lines insurance companies owned by a given parent company depends in part on the extent to which that parent's subsidiaries offer the same coverages on both an admitted and surplus lines basis, and in the same markets. While we could not determine if the admitted and surplus lines insurance subsidiaries owned by the parent companies listed in table 10 sold the same coverages or in the same markets, we did examine whether there was a change in the proportion of a parent's subsidiaries' business that was done in the surplus lines market. We also examined the surplus lines companies whose parents did not have other property and casualty (P/C) subsidiaries. As shown in table 10, SNL Financial, Inc.
(SNL) P/C Groups (a proxy for parents, including only P/C insurers) with surplus lines insurer subsidiaries tend to have admitted subsidiaries whose premiums written exceed those of the surplus lines subsidiaries, often by a large percentage. Of the 25 companies with the highest premiums written in 2012, 4 had over half of their premiums provided by their surplus lines business lines, and of those, 1 (Ironshore, Inc.) had a significant portion of its premiums (79.6 percent) provided by these business lines. American International Group, Inc., the largest surplus lines provider as measured by 2012 surplus lines premiums written, wrote $5 billion in surplus lines premiums in 2012, but that was only 21.4 percent of its total premiums written. Table 10 also shows that surplus lines insurers whose parents have no other P/C subsidiaries wrote 51.3 percent of their total premiums in the surplus lines business lines in 2012, more than double the 22.4 percent for insurers whose parents have other P/C subsidiaries. In addition to the contact named above, Patrick Ward (Assistant Director), Emily Chalmers, Pamela Davidson, John Forrester, Ronald Ito, Marc Molino, and Jessica Sandler made important contributions to this report. | Surplus lines insurers are critical to ensuring that businesses and individuals with difficult-to-insure risks can manage those risks. These insurers provide coverage for risks that the traditional, or admitted, insurance market is unwilling or unable to cover. Historically, insurance brokers who sell such coverage have found it time-consuming and often difficult to allocate the state taxes brokers collect and remit on insurance premiums on behalf of policyholders when the risks covered reside in multiple states. To help address this issue, Congress passed NRRA as part of the Dodd-Frank Wall Street Reform and Consumer Protection Act. The act mandates that GAO study changes in the surplus lines insurance market. This report (1) describes the size and condition of the surplus lines insurance market and (2) examines actions states have taken to implement NRRA's provisions and the effects of the act, if any, on the price and availability of coverage. To address these questions, GAO analyzed end-of-year financial data for 2008 through 2012 for insurers that sold surplus lines insurance in 2012 and interviewed insurance regulators from states with a large number of surplus lines insurers, industry associations representing interests in the surplus lines market, and large insurers and brokers. GAO also reviewed related studies and analyses. In commenting on a draft of the report, the National Association of Insurance Commissioners provided technical comments, which GAO incorporated as appropriate. Surplus lines insurers' premiums written have increased modestly, and the companies have generally remained profitable. From 2008 through 2012, premiums written by surplus lines insurers, which sell property/casualty insurance through brokers in states where they are not licensed, grew slightly from $24.8 billion to $25.2 billion and remained stable at around 5 percent of the property casualty market as a whole. Over this time, surplus lines insurers' premiums generally exceeded their claims and underwriting expenses, and the insurers remained profitable. Surplus lines insurers also generally saw capital gains over this period. Almost all states have modified their laws to implement at least portions of the Nonadmitted and Reinsurance Reform Act of 2010 (NRRA).
In 2010, Congress passed the act, which supporters argued would make it easier for surplus lines insurers and brokers to conduct business across states. Under NRRA, only the "home state" of the insured--the state where the insured maintains its principal business or residence or, if the risk is 100 percent outside that state, the state to which the greatest percentage of the insured's taxable premium is allocated--may tax or regulate surplus lines insurance transactions. According to market participants GAO interviewed, the changes in states' laws have simplified compliance for multistate risks. While most home states are collecting and retaining 100 percent of premium taxes, a few states are participating in a tax-sharing agreement, as permitted under NRRA. According to industry associations, including those GAO interviewed, some states are making additional requests of surplus lines insurers beyond the requirements specified in NRRA. The National Association of Insurance Commissioners formed a subgroup to address this issue and in August 2013 issued options for improving compliance. Market participants said that NRRA has had little, if any, effect on the prices or availability of coverage, as this was not an intent of the act. The participants said that insurance business cycles are primarily responsible for any changes in prices and availability. According to surplus lines insurers that GAO contacted, NRRA has caused little noticeable shifting in coverage between the admitted and surplus lines markets.
The loss of lives and property resulting from commercial motor vehicle accidents has been a focus of public concern for several years. In 2006, about 5,300 people died as a result of crashes involving large commercial trucks or buses, and about 126,000 more were injured. A recent study performed by the Department of Transportation (DOT) showed that a significant number of commercial driver crashes were due to a physical impairment of the driver. Specifically, DOT found that about 12 percent of the crashes where the crash cause could be identified were due to drivers falling asleep, being disabled by a heart attack or seizure, or having other physical impairments. The Federal Motor Carrier Safety Administration (FMCSA) within DOT shoulders the primary federal responsibility for reducing crashes, injuries, and fatalities involving large trucks and buses. FMCSA's primary means of preventing these crashes is to develop and enforce regulations to help ensure that drivers and motor carriers are operating in a safe manner. FMCSA's regulations, among other things, require that drivers of commercial motor vehicles be at least 21 years old, be able to read and speak the English language, have a current and valid commercial motor vehicle operator's license, have successfully completed a driver's road test, and be physically qualified to drive. As part of these regulations, FMCSA established standards for the physical qualifications of commercial drivers, including the requirement of a medical certification from a medical examiner stating that the commercial driver is physically qualified to operate a commercial motor vehicle. See appendix II for a description of the federal medical requirements. The National Transportation Safety Board (NTSB), an independent federal agency that investigates transportation accidents, considers the medical fitness of commercial drivers a major concern. Over the past several years, NTSB has reported on serious flaws in the medical certification process of commercial drivers. NTSB stated that these flaws can lead to increased highway fatalities and injuries for commercial vehicle drivers, their passengers, and the motoring public. In 2001, in response to a bus crash that killed 22 people in Louisiana, NTSB recommended eight safety actions to improve the oversight of the medical certification process. According to NTSB, all eight of the recommendations currently remain open. In response to FMCSA's failure to adequately address NTSB's recommendations, NTSB placed the oversight of medical fitness on its "Most Wanted" list in 2003. Table 1 details each of NTSB's recommendations. Several fatal crashes highlight the importance of having an effective medical certification process. For example, in July 2000, a truck collided with a Tennessee Highway Patrol vehicle protecting a highway work zone. The patrol car exploded on impact, killing the state trooper. The driver of the truck had previously been diagnosed with sleep apnea and hypothyroidism, and had a similar crash in 1997, when he struck the rear of a patrol car in Utah. NTSB stated that it believes that had a comprehensive medical oversight program been in place at the time of the accident, this driver, with known and potentially incapacitating medical conditions, would have been less likely to have been operating a commercial vehicle.
This accident, the NTSB said, "demonstrates how easily unfit drivers are able to take advantage of the inadequacies of the current medical system, resulting in potentially fatal consequences." In May 2005, a truck collided with a sport utility vehicle in Kansas, killing a mother and her 10-month-old baby. Prior to the accident, a physician diagnosed the truck driver with a severe form of sleep apnea. The truck driver subsequently went to another physician, who issued the medical certificate because the driver did not disclose this illness. The truck driver was found guilty of two counts of vehicular manslaughter. In August 2005 in New York, a truck collided with a motor vehicle, killing the occupants. The truck driver admitted to forging a medical certificate required to get his commercial driver license (CDL) because he had been diagnosed with a seizure disorder. The truck driver recently pleaded guilty to two counts of manslaughter. Commercial drivers with serious medical conditions can still meet DOT medical fitness requirements to safely operate a commercial vehicle and thus hold CDLs. However, there is general agreement that careful medical evaluations are necessary to ensure that serious medical conditions do not preclude the safe operation of a commercial vehicle. It is impossible to determine from data analysis which commercial drivers receiving disability benefits have a medical condition that precludes them from safely driving a commercial vehicle because medical determinations are largely based on subjective factors that are not captured in databases. As such, our analysis provides a starting point for exploring the effectiveness of the current CDL medical certification process. Our analysis of DOT data and disability data from the four selected federal agencies found that about 563,000 individuals had been issued CDLs and were receiving full medical disability benefits; the four agencies are the Social Security Administration (SSA), the Department of Veterans Affairs (VA), the Office of Personnel Management (OPM), and the Department of Labor (DOL). This represented over 4 percent of all CDLs in the DOT database. However, because DOT's database includes drivers that had suspended, revoked, or lapsed licenses, the actual number of active commercial drivers who receive full federal disability benefits cannot be determined. Also, our analysis does not include drivers with severe medical conditions that are not in the specific disability programs we selected. The majority of the individuals with serious medical conditions from our 12 selected states had an active CDL. Specifically, as shown in figure 1, of the 563,000 CDL holders receiving full disability benefits, about 135,000 were from our 12 selected states. About 114,000 of these 135,000 individuals, or about 85 percent, had an active CDL, according to CDL data provided by the 12 selected states. Further, our analysis of the state CDL data indicates that most of the licenses were issued after the commercial driver was found to be eligible for full disability benefits. Specifically, about 85,000 of the 135,000 individuals, or about 63 percent, had their CDL issued after the federal agency determined that the individual met the federal requirements for full disability benefits, according to data from our four selected federal agencies.
See appendix III for details, by selected state, on the number of (1) commercial drivers with active CDLs, (2) commercial drivers with an active CDL even though they had a medical condition from which they received full federal disability benefits, and (3) commercial drivers that were issued a CDL after the driver was approved for full federal disability benefit payments. Because much of the determination of the medical fitness of commercial drivers relies on subjective factors, and because there are ways to circumvent the process (as shown below), it is impossible to determine the extent to which these commercial drivers have a medical condition that would preclude them from safely driving a commercial vehicle. As such, our analysis provides a starting point for exploring the effectiveness of the current CDL medical certification process. However, because these individuals are receiving full disability benefits, it is likely that their medical conditions are severe. Further, our analysis also showed that over 1,000 of these drivers are diagnosed with vision, hearing, or seizure disorders, which are medical conditions that would routinely result in the denial of a CDL. Our investigations detail examples of 15 cases where careful medical evaluations did not occur for commercial drivers who were receiving full medical disability benefits. The case studies were selected from approximately 30,000 individuals from Florida, Maryland, Minnesota, and Virginia who had their CDL issued after the federal agency determined that the individual met the federal requirements for full medical disability benefits. For all 15 cases, we found that the states renewed the drivers' CDLs after the drivers were found by the federal government to be eligible for full disability benefits. For more detailed information on the criteria for selecting the 15 cases, see appendix I. On the basis of our investigation of these 15 cases, we identified instances where careful medical examinations did not occur. Most states do not require commercial drivers to provide medical certifications to be issued a CDL. Instead, many states only require individuals to self-certify that a medical examiner granted them a medical certification allowing them to operate commercial vehicles, thus meeting the minimum federal requirements. As a result, we found several commercial drivers who made false assertions on their self-certification that they had received a medical certification when in fact no such certification had been made. For more information on state requirements for medical certifications, see appendix IV. In addition, our investigations found that commercial drivers produced fraudulent documentation regarding their medical certification. Specifically, we found instances where commercial drivers forged a medical examiner's signature on a medical certification form. We also found a driver who failed to disclose to the medical examiner that another doctor had prescribed him morphine for his back pain. Finally, our investigations found that certain medical examiners did not follow the federal requirements in determining the medical fitness of commercial drivers. For example, one medical examiner represented to GAO that she did not know that a driver's deafness would disqualify the individual from receiving a medical certification. Table 2 highlights 5 of the 15 drivers we investigated. In all of the cases we investigated, the CDL was issued after the driver's disability benefits started.
Appendix V provides details on the other 10 cases we examined. We are referring all 15 cases to the respective state driver license agencies for further investigation. The following provides illustrative detailed information on three of the cases we examined. Case 1: A bus driver in Maryland has been receiving Social Security disability benefits since March 2006 due to his heart conditions. Specifically, the driver had open heart surgery in 2003 to repair a ruptured aorta, had a stroke in 2005, and shortly thereafter had another surgery to replace a heart valve. In June 2006, approximately 3 months after Social Security determined the driver was fully disabled, the Maryland driver license agency renewed his CDL for 5 years with a "Passenger" endorsement. The bus driver provided our investigator a forged medical certificate. Specifically, we found that the medical certificate did not have the required medical license number, the physician did not have any record that the bus driver underwent a medical examination for a CDL, and the physician denied conducting a CDL medical exam or signing the medical certificate. Surprisingly, the medical practice also had a copy of the forged medical certificate in its files. The medical practice's staff stated, however, that it is not uncommon for a patient to bring documents to the office and ask that they be stored in their medical records. The driver's CDL does not expire until 2011. Case 2: A Virginia truck driver has received SSA disability benefits for over 10 years. The driver's disability records indicate that the driver had multiple medical conditions, including complications due to an amputation, and that the driver is "also essentially illiterate." The truck driver has a prosthetic right leg resulting from a farm accident. Although the driver possesses a current medical certificate, the medical examiner did not specify on the medical certificate that it is only valid when accompanied by a Skills Performance Evaluation (SPE) certificate. The truck driver stated that, to test his prosthetic leg, he was asked to push the medical examiner across the room in a rolling chair. In our investigation, we attempted to contact the medical examiner but discovered that he is no longer employed by that clinic. The state revoked his medical license for illegally distributing controlled substances. In 2006, the truck driver was involved in a single-vehicle accident when the load in his truck shifted as he made a turn and the truck overturned. Prior to October 2007, the truck driver had a CDL with both "Tanker" and "Hazmat" endorsements. In October 2007, the state driver license agency renewed his CDL with a "Tanker" endorsement, which will not expire until 2012. Case 3: A bus driver has been receiving Social Security disability benefits since 1994 for chronic obstructive pulmonary disease (COPD). The bus driver currently uses three daily inhalers to control his breathing and has a breathing test conducted every 6 months. The bus driver stated that he "gets winded" when he walks to his mailbox and he "occasionally blacks out and forgets things." However, the driver stated that he has no problem driving a bus, although he cannot handle luggage or perform any other strenuous duties. Although he does not possess a valid medical certificate, companies continue to hire him as a bus driver on an ad hoc basis. For example, the driver drove a passenger bus as recently as 1 month prior to the time of our interview.
The driver stated that the companies have not asked to see his medical certificate. He further stated that because most companies are "hurting for drivers," they "don't ask a lot of questions" and pay many of their drivers in cash. The driver's CDL expires in 2010. We provided a draft of our report to DOT for review and comment. We received e-mail comments on the draft on June 16, 2008, from FMCSA's Office of Medical Programs. In its response, FMCSA stated that our first objective implies that individuals who are fully disabled have severe medical conditions that may also prevent safe driving. FMCSA stated the following: Disability, even full disability associated with a diagnosis, does not necessarily mean that an individual is medically unfit to operate a commercial vehicle. Disability is not necessarily related to when a medical condition occurred or recurs. The onset of a disease or disabling medical condition is more relevant to medical fitness than when the disability benefits and payments began. As an example, a fully disabled individual may have accommodated to the disability and may improve with treatment while receiving lifelong disability payments. In general, a medical diagnosis alone is not adequate to determine medical fitness to operate a commercial vehicle safely. As an example, multiple sclerosis, while disabling, has several progressive phases, and is not necessarily disqualifying. In addition, FMCSA did not believe that we accurately characterized the 15 cases where careful medical evaluations did not occur. FMCSA stated that this implies these drivers were evaluated by someone for medical fitness for duty, but in 9 cases, the driver was not certified or not evaluated by a medical examiner. We believe our report clearly acknowledges that it is impossible to determine the extent to which these commercial drivers have medical conditions that would preclude them from safely driving a commercial vehicle. In the report, we state that commercial drivers with serious medical conditions can still meet DOT medical fitness requirements to safely operate a commercial vehicle and thus hold CDLs. Further, our report acknowledged that because medical determinations rely in large part on subjective factors that are not captured in databases, it is impossible to determine from data mining and matching the extent to which commercial drivers have a medical condition that precludes them from safely driving a commercial vehicle and therefore whether the certification process is effective. Thus, our analysis provides a starting point for exploring the effectiveness of the current CDL medical certification process. We also believe that we fairly characterized all 15 cases as not having had a careful medical evaluation. For all 15 cases that we reviewed, we found that the medical evaluation was not adequate or did not occur. Thus, we conclude that a careful medical evaluation did not occur for all 15 drivers in our case studies. FMCSA also provided us a technical comment, which we incorporated in the report. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Secretary of Transportation. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-6722 or [email protected] if you have any questions concerning this report.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. To determine, to the extent possible, the number of individuals holding a current commercial driver license (CDL) who have serious medical conditions, we presumed that individuals receiving full federal disability benefits were eligible for these benefits because of the seriousness of their medical conditions. As such, we obtained and analyzed the Department of Transportation's (DOT) Commercial Driver License Information System (CDLIS) database as of May 2007. For the Social Security Administration (SSA) and the Department of Veterans Affairs (VA), we provided the CDLIS commercial driver information to those agencies. SSA and VA then matched the commercial drivers to the individuals receiving benefits under their disability programs and provided us those results. We also obtained and analyzed the recipient files for four additional federal disability programs. These include the Office of Personnel Management's (OPM) civil service retirement program and the three programs administered by the Department of Labor: Black Lung, the Federal Employees' Compensation Act, and the Energy Employees Occupational Illness Compensation Program. We matched the CDL holders from CDLIS to the four federal disability recipient files based on Social Security number, name, and date of birth. We further analyzed the CDL and disability data to ensure that the commercial drivers met the following criteria: (1) the individual must be currently receiving disability benefits and (2) the individual must be identified as 100 percent disabled according to the program's criteria. Because CDLIS is an archival database, the CDLIS data contain information on expired CDLs. To identify the active drivers within CDLIS, we obtained CDL data from a nonrepresentative selection of 12 states. The 12 selected states, representing about 42 percent of all CDLs contained in CDLIS, are California, Florida, Illinois, Kentucky, Maryland, Michigan, Minnesota, Montana, Tennessee, Texas, Virginia, and Wisconsin. The 12 states were selected primarily based on the size of their CDL populations. Because commercial drivers may develop a serious medical condition after the issuance of the CDL, we also determined the number of individuals who received their CDL after the federal agencies determined them to be eligible for full disability benefits. Our estimate does not include drivers with severe medical conditions that are not in the selected programs we analyzed. We matched the 12 state CDL files to the six CDLIS-disability match files based on driver license number, and identified those CDLs that were current based on license status (these matching steps are outlined in the illustrative sketch below). To provide case-study examples of commercial drivers who hold active CDLs while also receiving federal disability payments for a disqualifying medical condition, we focused on four states—Florida, Maryland, Minnesota, and Virginia. From these four states, we selected, in a nonrepresentative fashion, 15 commercial drivers for detailed investigation. We identified these driver cases based on our data analysis and mining. For each case, we interviewed, as appropriate, the commercial driver, the driver's employer, and the driver's physician to determine whether the medical condition should have precluded the driver from holding a valid CDL.
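The matching steps described above can be outlined in a short sketch. This is a hypothetical illustration: the file names, column names, and simplified logic are our own (for example, the actual matches also compared names and dates of birth and were performed in part by SSA and VA), and the sketch is not the program that GAO or the agencies used.

    import pandas as pd

    # Hypothetical file and column names used only to outline the matching steps
    # described above; the actual extracts, fields, and programs differed.
    cdlis = pd.read_csv("cdlis.csv")                      # DOT CDLIS records (includes expired CDLs)
    disability = pd.read_csv("disability.csv",            # combined federal disability recipient files
                             parse_dates=["determination_date"])
    state_cdls = pd.read_csv("state_cdls.csv",            # CDL extracts from the 12 selected states
                             parse_dates=["cdl_issue_date"])

    # Step 1: match CDL holders to disability recipients (simplified here to Social Security number).
    matched = cdlis.merge(disability, on="ssn", how="inner")

    # Step 2: keep recipients currently receiving benefits and identified as 100 percent disabled.
    matched = matched[matched["currently_receiving"] & (matched["percent_disabled"] == 100)]

    # Step 3: match to the state CDL extracts on driver license number and keep active licenses.
    active = matched.merge(state_cdls, on="license_number", how="inner")
    active = active[active["license_status"] == "ACTIVE"]

    # Step 4: count drivers whose CDL was issued after the disability determination.
    issued_after = active[active["cdl_issue_date"] > active["determination_date"]]
    print(len(active), len(issued_after))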
For these 15 cases, we also reviewed state department of motor vehicle reports, police reports, and other public records. To determine the reliability of DOT's CDLIS data, we used SSA's Enumeration and Verification System to verify key data elements in the database that were used to perform our work. For the federal disability databases, we assessed the reliability of the data from SSA and VA, which comprise 99 percent of the CDLIS-disability matches. To verify the reliability of these data, we reviewed the program logic used by the agencies to match the CDLIS data with their federal disability recipients. We also reviewed the current Performance and Accountability Reports for the agencies to verify that their systems had successfully undergone the required stewardship reviews. For the 12 selected states' CDL databases, we performed electronic testing of the specific data elements in the database that were used to perform our work. In addition, for 5 of the 12 states, we verified the query logic used to create the CDL extract files. For the other 7 states, we were unable to obtain the query logic. We performed our investigative work from May 2007 to June 2008 in accordance with standards prescribed by the President's Council on Integrity and Efficiency. Federal regulations require that commercial drivers be examined and certified by a licensed medical examiner, such as a licensed physician, physician's assistant, or nurse practitioner, to ensure they meet minimum physical qualifications prior to driving. It is the responsibility of both drivers and motor carriers employing drivers to ensure that drivers' medical certificates are current. According to federal regulations, the medical examiner must be knowledgeable about the regulatory physical qualifications and guidelines as well as the driver's responsibilities and work environment. In general, the medical certification procedures include the following steps: The driver completes and certifies a medical certification form that includes information about the driver's health history. The form is provided to the medical examiner as part of the examination. The medical examiner discusses the driver's health history and the side effects of prescribed medication and common over-the-counter medications. The medical examiner checks the driver's vision, hearing, blood pressure, and pulse rate and tests a urine specimen (for sugar and protein levels). The medical examiner conducts a physical examination and makes a determination on driver fitness. If the medical examiner determines the driver is fit to drive, he/she signs the medical certificate, which the driver must carry with his/her license. The certificate must be dated. The medical examiner keeps a copy in his/her records, and provides a copy to the driver's employer. When the medical examiner finds medical conditions that prevent certification of the physical condition of the driver and this finding is in conflict with the findings of another medical examiner or the driver's personal physician, the driver can apply to the Federal Motor Carrier Safety Administration (FMCSA) for a determination. Federal regulations and the accompanying medical guidance provide criteria to the medical examiners for determining the physical condition of commercial drivers.
Although the medical examiner makes the determination as to whether the driver is medically fit to operate a commercial vehicle, the following provides a general overview of the nature of the physical qualifications: no loss of physical limbs, including a foot, a leg, a hand, or an arm; no impairment of limbs that would interfere with grasping or their ability to perform normal tasks; no established medical history or clinical diagnosis of diabetes currently requiring insulin for control, respiratory dysfunction, or high blood pressure that would affect their ability to control or drive a commercial motor vehicle; no current diagnosis of a variety of coronary conditions and cardiovascular disease including congestive heart failure; no mental disease or psychiatric disorder that would interfere with their ability to drive a commercial vehicle safely; has distant visual acuity and hearing ability that meet stated requirements; does not use a controlled substance or habit-forming drug; and has no current clinical diagnosis of alcoholism. When operating a commercial motor vehicle, drivers must have a copy of the medical examiner's certificate in their possession. Motor carriers, in turn, are required to maintain a copy of the certificate in their files. When drivers are stopped for a roadside inspection, state inspectors can review the medical examiner's certificate. During compliance reviews of motor carriers, FMCSA investigators may also verify the validity of medical certifications on file with the motor carrier. In the main portion of the report, we state that, in the 12 selected states, about 114,000 commercial drivers had a current commercial driver license (CDL) even though they had a medical condition from which they received full federal disability benefits. Further, approximately 85,000, or about 63 percent of the active commercial drivers, were issued a CDL after the driver was approved for full federal disability benefit payments. Table 3 below provides details, by selected state, on the number of (1) commercial drivers with active CDLs, (2) commercial drivers with an active CDL even though they had a medical condition from which they received full federal disability benefits, and (3) commercial drivers that were issued a CDL after the driver was approved for full federal disability benefit payments. The states have adopted different levels of control to verify that commercial driver license applicants meet the Department of Transportation (DOT) medical certification requirements. As shown in figure 2, 25 states, or 50 percent, allow drivers to self-certify that they meet the requirements. The self-certification is often simply a check-box on the application. Eighteen states, or 36 percent, require that the commercial driver show the DOT medical certificate to the driver licensing agency at the time of application. Further, 6 states, or 12 percent, not only require that the driver show the DOT medical certificate at the time of application but also maintain a copy of the certificate in the driving records of the applicant. Finally, 1 state did not respond to the inquiries. Table 2 in the main portion of the report provides information on five detailed case studies. Table 4 shows the remaining case studies that we investigated. As with the five cases discussed in the body of this report, we found drivers with a valid commercial driver license (CDL) who also had serious medical conditions.
GAO staff who made major contributions to this report include Matthew Valenta, Assistant Director; Sunny Chang; Paul DeSaulniers; Craig Fischer; John V. Kelly; Jeffrey McDermott; Andrew McIntosh; Andrew O’Connell; Philip Reiff; Nathaniel Taylor; and Lindsay Welter. | Millions of drivers hold commercial driver licenses (CDL), allowing them to operate commercial vehicles. The Department of Transportation (DOT) established regulations requiring medical examiners to certify that these drivers are medically fit to operate their vehicles and provides oversight of their implementation. Little is known on the extent to which individuals with serious medical conditions hold CDLs. GAO was asked to (1) examine the extent to which individuals holding a current CDL have serious medical conditions and (2) provide examples of commercial drivers with medical conditions that should disqualify them from receiving a CDL. To examine the extent to which individuals holding CDLs have serious medical conditions, GAO identified those who were in both DOT's CDL database and selected federal disability databases of the Social Security Administration, Office of Personnel Management, and Departments of Veterans Affairs and Labor and have been identified as 100 percent disabled according to the program's criteria. Because DOT's data also include inactive licenses, GAO obtained current CDL data from 12 selected states based primarily on the size of CDL population. To provide case study examples, GAO focused on four states--Florida, Maryland, Minnesota, and Virginia. For 15 drivers identified from data mining, GAO interviewed, as appropriate, the driver, driver's employer, and driver's physician. GAO is not making any recommendations. Commercial drivers with serious medical conditions can still meet DOT medical fitness requirements to safely operate a commercial vehicle and thus hold CDLs. However, there is general agreement that careful medical evaluations are necessary to ensure that serious medical conditions do not preclude the safe operation of a commercial vehicle. Because medical determinations rely in large part on subjective factors that are not captured in databases, it is impossible to determine from data matching and mining alone the extent to which commercial drivers have medical conditions that preclude them from safely driving a commercial vehicle and therefore if the certification process is effective. GAO's analysis provides a starting point for exploring the effectiveness of the current CDL medical certification process. Our analysis of commercial license data from DOT and medical disability data from the Social Security Administration, Office of Personnel Management, and Departments of Veterans Affairs and Labor found that about 563,000 of such individuals had commercial driver licenses and were determined by the federal government to be eligible for full disability benefits. This represented over 4 percent of all commercial driver licenses in the DOT database. Our analysis of 12 selected states indicates that most of these commercial drivers still have active licenses. Specifically, for these 12 selected states, about 85 percent had a current CDL even though they had a medical condition from which they received full federal disability benefits. The majority of these drivers were issued a CDL after the driver was approved for full federal disability benefit. 
Our investigations detail 15 cases in which careful medical evaluations did not occur for commercial drivers who were receiving full disability benefits for serious medical conditions. |
With the agreement between Congress and the administration to balance the federal budget and the widespread demands by the American people for a less costly government, agencies are being challenged as never before to ensure that their operations are as efficient as possible. Efforts by Congress and the administration are leading or have led to a broader focus on results, significant reductions in the size of the federal workforce, simplified administrative and management procedures, and additional mechanisms to improve efficiency. Within this context, interest has grown over the last several years in using contracting out as one of the central tools available to agencies to reduce costs in a balanced budget environment. Some federal agencies have relied from the start on contracting out much of their work rather than performing it directly. Contractors also have almost completely replaced federal employees in some functions, such as cleaning services, travel management, and most recently personnel security investigations. As an indication of the degree to which the federal government uses the private sector, total civilian personnel costs for fiscal year 1997 were about $113 billion, as compared with about $110 billion that federal agencies spent on commercial service contracts. The issue of whether to contract out federal functions has always been challenging. In an effort to help agencies make better decisions in this regard, OMB issued Circular A-76 in 1966 and updated it several times, most recently in 1983. A-76 provides federal policy for the government's performance of commercial activities. OMB issued a supplemental handbook to the circular in 1979 that included detailed procedures for competitively determining whether commercial activities should be performed in-house; by another federal agency, through an interservice support agreement; or by the private sector. OMB updated this handbook in 1983 and again in March 1996. This latest revision was intended to streamline the cost comparison process and reduce the A-76 administrative burden and thereby ease the use of A-76 within the executive branch. According to OMB, the purpose of A-76 is not to convert work to or from in-house, contract, or interservice support agreement performance. Thus, a senior OMB official stressed, OMB does not view its role as requiring agencies to undertake A-76 cost comparisons. Rather, OMB encourages agencies to understand and use A-76 as one of a series of tools federal managers can employ to make sound business decisions and to enhance federal performance through competition and choice. In general, the A-76 cost comparison process involves (1) developing a performance work statement that describes the work to be competed; (2) developing procedures for the most efficient and effective in-house performance of the commercial activity, referred to as the Most Efficient Organization or MEO; and (3) accepting formal bids and conducting a cost comparison between the private sector and the government's Most Efficient Organization in order to make a decision on whether an activity will be performed by the government or the private sector. Agencies' experiences with A-76 suggest that competition is a key to realizing savings, whether functions are eventually performed by private sector sources or remain in-house. We have found that savings achieved through the A-76 competitive process were largely personnel savings, the result of closely examining the work to be done and reengineering the activities in order to perform them with fewer personnel, whether in-house or by contractor.
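To make the mechanics of such a comparison concrete, the sketch below shows, in simplified form, the kind of decision rule an A-76 study applies once the in-house Most Efficient Organization estimate and a private sector offer are in hand. The dollar figures, the contract administration cost, and the 10 percent conversion threshold are illustrative assumptions only; they are not values taken from the circular or its supplemental handbook.

```python
# Simplified, illustrative A-76-style cost comparison between an agency's
# Most Efficient Organization (MEO) estimate and a private sector offer.
# All dollar amounts and the 10 percent conversion threshold are hypothetical.

def a76_decision(meo_cost, contractor_offer, contract_admin_cost,
                 in_house_personnel_cost, differential_rate=0.10):
    """Return the winning performer and the estimated annual difference."""
    # Require savings to exceed a minimum differential before converting work,
    # so that marginal differences do not trigger a disruptive conversion.
    minimum_differential = differential_rate * in_house_personnel_cost
    total_contract_cost = contractor_offer + contract_admin_cost
    if meo_cost - total_contract_cost > minimum_differential:
        return "contract out", meo_cost - total_contract_cost
    return "keep in-house", total_contract_cost - meo_cost

decision, difference = a76_decision(meo_cost=12_000_000,
                                    contractor_offer=10_200_000,
                                    contract_admin_cost=600_000,
                                    in_house_personnel_cost=9_000_000)
print(decision, f"${difference:,.0f}")
```

Either outcome can produce savings relative to the original operation, which is consistent with the point above that savings come largely from reengineering the work, regardless of who ultimately performs it.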
OMB has reported that savings from reviewing an agency's operations and making changes to implement the Most Efficient Organization have averaged 20 percent from original costs. We have noted in past work that such reported savings must be viewed with caution because statements about savings have often been heavily premised on initial estimates that were not later updated to reflect actual amounts. However, there appears to be a clear consensus, which we share, that savings are possible when agencies undertake a disciplined approach, such as that called for under A-76, to review their operations and implement the changes to become more efficient themselves or contract with the private sector for services. In fact, in DOD's case, about half of the competitions were won by federal employees. Although OMB expected that its March 1996 revision to the A-76 Supplemental Handbook would make A-76 a more attractive vehicle for agencies to use, no significant increase in efforts under A-76 among civilian agencies is readily evident. As shown in table 1, for fiscal year 1997, DOD was the only federal agency that reported to OMB that it had completed any A-76 studies of federal positions. For the future, DOD projected that it can save about $6 billion by 2003 and $2.5 billion each year thereafter by subjecting more of its business and support activities to competition using the A-76 process. Currently, DOD plans to subject over 220,000 positions to the A-76 process. DOD has not fully achieved estimated savings in the past, and we question DOD's ability to achieve all estimated savings in the future. However, if DOD is able to complete its ambitious A-76 plans, significant savings are likely. In contrast, A-76 efforts at Commerce and Interior have dwindled along with those at other federal agencies. The Department of Commerce has not done a complete update of its inventory of commercial activities since 1983 and recently completed what had been its only ongoing study. That study covered the operation and support of a National Oceanic and Atmospheric Administration (NOAA) ship. NOAA officials told us that the study was done because of pressure from Congress, OMB, and the Department's Inspector General to explore alternatives to an agency-designed, -owned, and -operated fleet for acquiring marine data. However, the study did not result in any commercial offers in response to NOAA's solicitation. In addition, the study took almost 19 months and required nearly 10 staff years to complete. It would have required even more resources if NOAA had received offers to perform the work. The Department of the Interior has a current inventory, which it updates periodically, and has identified over 5,000 FTEs as devoted to commercial activities. These activities include such functions as administrative support services and automated data processing-related services. The Department reports that although it has not conducted many formal A-76 studies in recent years, it has undertaken a number of A-76 cost comparisons of its aircraft services, including examinations of aircraft maintenance and decisions on whether to lease or purchase aircraft. However, most of these studies did not involve any federal positions and therefore are not reflected in OMB's governmentwide data on FTEs studied. Officials at the Departments of Commerce and the Interior provided similar explanations for the limited effort under A-76.
They said that they perceived that the priorities in management reform initiatives had changed and that greater emphasis was being given to implementation of more fundamental, mission-based initiatives arising from the National Partnership for Reinventing Government, formerly known as the National Performance Review (NPR), and the Government Performance and Results Act (the Results Act), among others. According to the officials, these shifting management priorities, along with the significant time and money needed to do the studies under A-76 and the need for sufficient staff with the necessary technical skills, have all contributed to reduced A-76 efforts. As noted earlier, OMB issued a revised supplement to Circular A-76 in March 1996 that was intended to streamline the cost comparison process and reduce the administrative burden of conducting A-76 studies. This supplement was issued subsequent to the expiration of several legislative provisions that temporarily limited agencies' A-76 efforts, particularly those of DOD. OMB's revision of the supplement had the potential to re-focus attention on A-76. However, since issuing the revision, OMB has not consistently worked with agencies to ensure that the provisions of A-76 are being effectively implemented. For example, OMB made only limited efforts to gather and use the commercial activities inventories that agencies are to develop under A-76. In June 1996, OMB requested that agencies submit, not later than September 13, 1996, a summary of their updated inventory of commercial activities as required by A-76. According to OMB, it did not receive inventories from all agencies, and of those that it did receive, many were based largely on previous inventory efforts. In June 1997, after not receiving responses from several agencies, OMB followed up with another request for the commercial inventory information. Several months later, in April 1998, we found that 6 of the 24 largest agencies still had not complied with OMB's initial and follow-up requests to provide updated commercial activities inventories. OMB also has not systematically reviewed the inventories of commercial activities that it did receive to determine whether agencies are missing opportunities to generate savings. OMB generally has not attempted to determine whether agencies have inappropriately omitted some commercial activities. OMB also does not compare commercial activities among agencies to identify inconsistent application of A-76 guidance across the federal government. As a result, some agencies may not be identifying commercial activities that are similar to those included in other agencies' commercial activities inventories, thereby missing opportunities to use the A-76 process to achieve cost savings. Relevant OMB guidance also instructs agencies that their savings estimates should reflect the probable results generated by cost comparisons or conversions. OMB officials stated that they rely primarily on program examiners in the OMB Resource Management Offices (RMO) to review agencies' A-76 efforts in conjunction with the budget review and approval process. In 1996, the OMB Deputy Director for Management asked the RMOs to examine competition initiatives, such as A-76, as part of their continuing program management and budget reviews. The Deputy Director highlighted agencies' strategic plans and streamlining plans as being especially appropriate vehicles for examining agencies' efforts to compete their support service requirements. However, since then, OMB has not provided its program examiners with more recent written requirements or guidance on the need to review agencies' A-76 efforts.
OMB officials said that, despite the lack of current guidance, some review has been done on an ad hoc basis in conjunction with budget reviews. According to these officials, examiners were given copies of agencies' commercial activities inventories where they existed and were instructed to keep in mind all reinvention efforts, including A-76, as they reviewed agency budget requests. However, given the absence of inventory information for several of the largest federal agencies and the absence of ongoing studies in virtually all agencies other than DOD, the effect of examiners' efforts, if any, is questionable. In a May 12, 1998, memorandum, OMB renewed its request that agencies develop and submit inventories of their commercial activities. Because they identify the positions and functions devoted to commercial activities, these inventories can be valuable not only for A-76 purposes, but also for identifying other reinvention opportunities. This plan for renewed OMB commitment, if effectively implemented, is an important and noteworthy development that could lay the groundwork for a reinvigorated A-76 program. Given OMB's past experience with requesting and using inventories of commercial activities from agencies, it is clear that sustained OMB commitment and follow-through will be vital to the success of the effort. We plan to continue to monitor OMB's and the agencies' efforts in this area. Over the last couple of years, there has been interest in Congress in establishing a statutory basis for A-76 and for making other changes intended to expand the degree to which agencies compete their commercial activities. We have been pleased that Congress has turned to us for assistance as it has considered various legislative proposals. Irrespective of any decisions that Congress may make about the A-76 program, our work suggests that several elements are needed for a successful A-76 effort across federal agencies. Today, I will highlight four elements that I believe merit special attention. The sustained commitment of agency and administration leadership is a necessary element to ensure the success of any management improvement effort, including A-76. As the current level of activity suggests, consistent and forceful leadership from OMB may be needed to create incentives for agencies' managers to subject themselves to the rigors of the A-76 process. By comparison with the rest of the federal government, DOD has maintained much larger levels of activity because it has incentives to generate savings through A-76 to fund its modernization efforts. A second element involves the planning framework of the Results Act. Under the act, each agency is to develop a strategic plan that identifies the long-term goals the agency will pursue and the strategies the agency will use to achieve those goals. The first of these strategic plans were provided to Congress last fall. Each agency is then to develop annual performance plans that identify the agency's annual goals and strategies and the resources that will be used to achieve those yearly goals. The first of these plans, to cover fiscal year 1999, were submitted to Congress this spring. An agency's efforts on its annual performance plans provide the opportunity to consider A-76 within the broader context of what the agency is trying to achieve and how best to achieve it. At the request of congressional leaders and to assist Congress in using annual performance plans for making decisions, we issued a guide in February 1998 for Congress to use in assessing annual performance plans. In that guide, we noted that Congress could examine the plans from the standpoint of whether they show evidence that various approaches, such as establishing partnerships with other organizations and contracting, were considered in determining how best to deliver products and services.
More directly, the annual performance plans can provide a ready-made, annual vehicle for Congress to use to inquire about agencies' efforts to ensure that the most cost-effective strategies are in place to achieve agencies' goals. As part of this inquiry, Congress can ask agencies about the tools the agencies are using to increase effectiveness, including the status of A-76 programs, and the specific choices the agencies have made about whether to keep a commercial activity in-house or contract it out. A third element is sound program cost data, which agencies need in order to make reliable comparisons; however, long-standing financial management deficiencies affect the government's ability to accurately measure the full cost and financial performance of programs and to efficiently manage its operations. For example, in January 1998, we reported that DOD has no reliable means of accumulating actual cost data to account for and manage resources. Moreover, in a February 1998 report, we noted that it will likely be many years before DOD is capable of providing accurate and reliable cost data. Efforts are under way to improve government cost data and supporting systems, but for some agencies it could be several years before significant improvements are made. Continuing efforts to implement the Chief Financial Officers Act are central to ensuring that agencies resolve their long-standing problems in generating vital information for decisionmakers. In that regard, the Federal Accounting Standards Advisory Board (FASAB) has developed a new set of accounting concepts and standards that underpin OMB's guidance to agencies on the form and content of their agencywide financial statements. As part of that effort, FASAB developed managerial cost accounting standards. These managerial cost accounting concepts and standards require that federal agencies provide reliable and timely information on the full cost of federal programs and on their activities and outputs. Specifically identified in the standards is the need for information to help guide decisions involving economic choices, such as whether to do a project in-house or contract it out. Such information would allow agencies to develop appropriate overhead rates for specific operations. These cost accounting standards became effective for fiscal year 1998. Some agencies' Chief Financial Officers have expressed concern about their agencies' ability to comply with the cost accounting standards this year. A fourth element is effective monitoring and oversight of contractors. Our work at the state and local level has shown that as governments rely more heavily on contractors and privatization, the need for aggressive monitoring and oversight grows. Oversight was needed not only to evaluate compliance with the terms of the privatization agreement, but also to evaluate performance in delivering goods and services to help ensure that the government's interests were fully protected. Officials from most state and local governments said that the monitoring of contractor performance was the weakest link in their privatization processes. Oversight and monitoring have been consistent weaknesses in federal efforts as well. In numerous past reports on governmentwide contract management, we have identified major problem areas, such as ineffective contract administration, insufficient oversight of contract auditing, and lack of high-level management attention to and accountability for contract management.
For example, long-standing contractor oversight problems at several agencies, including DOD, the Department of Energy, and the National Aeronautics and Space Administration, have, in our view, put these agencies at high risk for waste, fraud, abuse, and mismanagement. Although each of these agencies has taken actions to improve its contractor oversight and monitoring functions, these remain high-risk areas that we continue to monitor closely. In summary, Mr. Chairman, A-76 has shown itself to be an effective management tool in increasing the efficiency of the federal government and saving scarce funds. However, despite its proven track record, A-76 is seldom used in civilian agencies. OMB has not consistently sent strong messages to the agencies that A-76 is a priority management initiative. While OMB's May 12, 1998, memorandum is an encouraging first step, thorough implementation and follow-through will be needed to get A-76 on track. In addition, agencies will need to continue their efforts to ensure both that they have the sound program cost data needed to make comparisons and that mechanisms are in place to monitor and oversee contracts. Finally, we believe that agencies' development and Congress' use of annual performance plans under the Results Act provide an opportunity to consider A-76 and other competition issues within the context of the most efficient means to achieve agency goals. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. | GAO discussed: (1) the purpose and usefulness of the Office of Management and Budget's (OMB) Circular A-76 in the current federal environment; (2) why A-76 is not being used extensively by civilian agencies; (3) the effectiveness of OMB's efforts to lead the implementation of A-76, which, in GAO's view, could be enhanced; and (4) observations regarding the necessary elements of a more active A-76 program.
GAO noted that: (1) OMB Circular A-76 has shown itself to be an effective management tool in increasing the efficiency of the federal government and saving scarce funds; (2) despite its proven track record, A-76 is seldom used in civilian agencies; (3) OMB has not consistently sent strong messages to the agencies that A-76 is a priority management initiative; (4) while OMB's May 12, 1998, memorandum is an encouraging first step, thorough implementation and follow-through will be needed to get A-76 on track; (5) in addition, agencies will need to continue their efforts to ensure both that they have the sound program cost data needed to make comparisons and that mechanisms are in place to monitor and oversee contracts; and (6) agencies' development and Congress' use of annual performance plans under the Government Performance and Results Act provide an opportunity to consider A-76 and other competition issues within the context of the most efficient means to achieve agency goals. |
The use of IT to electronically collect, store, retrieve, and transfer clinical, administrative, and financial health information has great potential to help improve the quality and efficiency of health care and is critical to improving the performance of the U.S. health care system. Historically, patient health information has been scattered across paper records kept by many different caregivers in many different locations, making it difficult for a clinician to access all of a patient’s health information at the time of care. Lacking access to these critical data, a clinician may be challenged to make the most informed decisions on treatment options, potentially putting the patient’s health at greater risk. The use of electronic health records can help provide this access and improve clinical decisions. Electronic health records are particularly crucial for optimizing the health care provided to military personnel and veterans. While in military status and later as veterans, many VA and DOD patients tend to be highly mobile and may have health records residing at multiple medical facilities within and outside the United States. Making such records electronic can help ensure that complete health care information is available for most military service members and veterans at the time and place of care, no matter where it originates. VA and DOD have been working to exchange patient health data electronically since 1998. As we have previously noted, their efforts have included both short-term initiatives to share information in existing (legacy) systems, as well as a long-term initiative to develop modernized health information systems—replacing their legacy systems—that would be able to share data and, ultimately, use interoperable electronic health records. In their short-term initiatives to share information from existing systems, the departments began from different positions. VA has one integrated medical information system—the Veterans Health Information Systems and Technology Architecture (VistA)—which uses all electronic records and was developed in-house by VA clinicians and IT personnel. All VA medical facilities have access to all VistA information. In contrast, DOD uses multiple legacy medical information systems, all of which are commercial software products that are customized for specific uses. For example, the Composite Health Care System (CHCS) which was formerly DOD’s primary health information system, is still in use to capture pharmacy, radiology, and laboratory information. In addition, the Clinical Information System (CIS), a commercial health information system customized for DOD, is used to support inpatient treatment at military medical facilities. The departments’ short-term initiatives to share information in their existing systems have included several projects. Most notable are two information exchange projects: ● The Federal Health Information Exchange (FHIE), completed in 2004, enables DOD to electronically transfer service members’ electronic health information to VA when the members leave active duty. ● The Bidirectional Health Information Exchange (BHIE), also established in 2004, was aimed at allowing clinicians at both departments viewable access to records on shared patients (that is, those who receive care from both departments—veterans may receive outpatient care from VA clinicians and be hospitalized at a military treatment facility). The interface also allows DOD sites to see previously inaccessible data at other DOD sites. 
As part of the long-term initiative, each of the departments aims to develop a modernized system in the context of a common health information architecture that would allow a two-way exchange of health information. The common architecture is to include standardized, computable data; communications; security; and high- performance health information systems: DOD’s AHLTA and VA’s HealtheVet. The departments’ modernized systems are to store information (in standardized, computable form) in separate data repositories: DOD’s Clinical Data Repository (CDR) and VA’s Health Data Repository (HDR). For the two-way exchange of health information, in September 2006 the departments implemented an interface named CHDR, to link the two repositories. Beyond these initiatives, in January 2007, the departments announced their intention to jointly determine an approach for inpatient health records. On July 31, 2007, they awarded a contract for a feasibility study and exploration of alternatives. In December 2008, the contractor provided the departments with a recommended strategy for jointly developing an inpatient solution. VA and DOD have increased their ability to share and use health information, sharing both computable and viewable data. This achievement has required years of effort by the two departments, involving, among other things, agreeing on standards and setting priorities for the kind of information to be shared and the appropriate level of interoperability to work toward. Interoperability—the ability to share data among health care providers—is key to sharing health care information electronically. Interoperability enables different information systems or components to exchange information and to use the information that has been exchanged. This capability is important because it allows patients’ electronic health information to move with them from provider to provider, regardless of where the information originated. If electronic health records conform to interoperability standards, they can be created, managed, and consulted by authorized clinicians and staff across more than one health care organization, thus providing patients and their caregivers the necessary information required for optimal care. (Paper-based health records—if available—also provide necessary information, but unlike electronic health records, do not provide decision support capabilities, such as automatic alerts about a particular patient’s health, or other advantages of automation.) Interoperability can be achieved at different levels. At the highest level, electronic data are computable (that is, in a format that a computer can understand and act on to, for example, provide alerts to clinicians on drug allergies). At a lower level, electronic data are structured and viewable, but not computable. The value of data at this level is that they are structured so that data of interest to users are easier to find. At still a lower level, electronic data are unstructured and viewable, but not computable. With unstructured electronic data, a user would have to find needed or relevant information by searching uncategorized data. Beyond these, paper records can also be considered interoperable (at the lowest level) because they allow data to be shared, read, and interpreted by human beings. Figure 1 shows the distinction between the various levels of interoperability and examples of the types of data that can be shared at each level. 
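To make the distinction concrete, the short sketch below contrasts a computable, coded allergy entry, which software can automatically compare against a new medication order to raise an alert, with the same fact captured only as viewable free text. The codes and field names are hypothetical placeholders, not VA or DOD record formats.

```python
# Hypothetical illustration of computable versus viewable-only health data.
# The codes and field names below are placeholders, not VA/DOD record formats.

# Computable: structured, coded data that software can act on automatically.
allergy_record = {"patient_id": "A1001", "allergen_code": "PCN-001",
                  "allergen_text": "penicillin"}
new_order = {"patient_id": "A1001", "drug_class_code": "PCN-001",
             "drug_text": "amoxicillin (a penicillin-class antibiotic)"}

def drug_allergy_alert(allergy, order):
    # Because both records carry the same standard code, a computer can compare
    # them and warn the clinician before the order is completed.
    return allergy["allergen_code"] == order["drug_class_code"]

print(drug_allergy_alert(allergy_record, new_order))  # True -> raise an alert

# Viewable only: the same fact as free text (or a scanned note). A clinician can
# read it, but software cannot reliably trigger an alert from it.
viewable_note = "Patient reports a rash after taking penicillin in 2005."
```

Data at the structured, viewable level fall in between: the note would be categorized (for example, as an allergy entry) and therefore easier to find, but it still could not drive an automated alert.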
VA and DOD have adopted a classification framework like the one in the figure to define what level of interoperability they are aiming to achieve in various information areas. For example, in their initial efforts to implement computable data, VA and DOD focused on outpatient pharmacy and drug allergy data because clinicians gave priority to the need for automated alerts to help medical personnel avoid administering inappropriate drugs to patients. As of January 31, 2009, the departments were exchanging computable outpatient pharmacy and drug allergy data through the CHDR interface on over 27,000 shared patients—an increase of about 9,000 patients since June 2008. However, according to VA and DOD officials, not all data require the same level of interoperability, nor is interoperability at the highest level achievable in all cases. For example, unstructured, viewable data may be sufficient for such narrative information as clinical notes. According to the departments, much of the information being shared today is currently at the structured, viewable level. For example, through BHIE, the departments exchange surgical pathology reports, microbiology results, cytology reports, chemistry and hematology reports, laboratory orders, vital signs, and other data in structured, viewable form. Some of this information is from scanned documents that are viewable but unstructured. With this format, a clinician would have to find needed or relevant information by scanning uncategorized information. The value of viewable data is increased if the data are structured so that information is categorized and easier to find. Nonetheless, achieving even a minimal level of electronic interoperability is valuable for potentially making all relevant information available to clinicians. However, the departments have more to do: not all electronic health information is yet shared. In addition, although VA’s health data are all captured electronically, information is still captured on paper at many DOD medical facilities. Any level of interoperability depends on the use of agreed-upon standards to ensure that information can be shared and used. In the health IT field, standards may govern areas ranging from technical issues, such as file types and interchange systems, to content issues, such as medical terminology. ● For example, vocabulary standards provide common definitions and codes for medical terms and determine how information will be documented for diagnoses and procedures. These standards are intended to lead to consistent descriptions of a patient’s medical condition by all practitioners. Without such standards, the terms used to describe the same diagnoses and procedures may vary (the condition known as hepatitis, for example, may be described as a liver inflammation). The use of different terms to indicate the same condition or treatment complicates retrieval and reduces the reliability and consistency of data. ● Another example is messaging standards, which establish the order and sequence of data during transmission and provide for the uniform and predictable electronic exchange of data. For example, they might require the first segment to include the patient’s name, hospital number, and birth date. A series of subsequent segments might transmit the results of a complete blood count, dictating one result (e.g., iron content) per segment. Messaging standards can be adopted to enable intelligible communication between organizations via the Internet or some other communications pathway. 
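As an illustration of the segment ordering just described, the sketch below builds a simplified, HL7 version 2-style laboratory message: a leading patient-identification segment followed by one observation segment per result. The segment layout and field contents are pared down for illustration only and do not represent the departments' actual message formats.

```python
# Simplified, HL7 v2-style message: one patient-identification segment followed by
# one observation segment per laboratory result. Fields and layout are illustrative
# only and are not the departments' actual formats.

def build_lab_message(name, hospital_number, birth_date, results):
    segments = [f"PID|{hospital_number}|{name}|{birth_date}"]
    for sequence, (test, value, units) in enumerate(results, start=1):
        # One result (e.g., iron content) per observation segment.
        segments.append(f"OBX|{sequence}|{test}|{value}|{units}")
    return "\r".join(segments)  # segments are transmitted in an agreed order

message = build_lab_message(
    name="DOE^JOHN", hospital_number="123456", birth_date="19600101",
    results=[("HGB", "14.2", "g/dL"), ("WBC", "6.8", "10*3/uL"), ("FE", "95", "ug/dL")])
print(message.replace("\r", "\n"))
```

Because the sender and receiver agree in advance on the order, delimiters, and meaning of each field, the receiving system can parse every result predictably; without such agreed-upon messaging standards, it could not.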
Without them, the interoperability of health IT systems may be limited, reducing the data that can be shared. VA and DOD have agreed upon numerous common standards that allow them to share health data. These are listed in a jointly published common set of interoperability standards called the Target DOD/VA Health Standards Profile, updated annually. The profile includes federal standards (such as data standards established by the Food and Drug Administration and security standards established by the National Institute of Standards and Technology); industry standards (such as wireless communications standards established by the Institute of Electrical and Electronics Engineers and Web file sharing standards established by the American National Standards Institute); and international standards (such as the Systematized Nomenclature of Medicine Clinical Terms, or SNOMED CT, and security standards established by the International Organization for Standardization). For the two kinds of data now being exchanged in computable form through CHDR (pharmacy and drug allergy data), VA and DOD adopted the National Library of Medicine data standards for medications and drug allergies, as well as the SNOMED CT codes for allergy reactions. This standardization was a prerequisite for exchanging computable medical information—an accomplishment that, according to the Department of Health and Human Services' National Coordinator for Health IT, has not been widely achieved. Further, VA and DOD are continuing their historical involvement in efforts to agree upon standards for the electronic exchange of clinical health information by participating in ongoing initiatives led by the Office of the National Coordinator under the direction of HHS. These initiatives have included the designation of standards-setting organizations tasked to reach consensus on the definition and use of standards. For example, these organizations have been responsible for, among other things, ● developing use cases, which provide the context in which standards would be applicable; ● identifying competing standards for the use cases and harmonizing the standards; ● developing interoperability specifications that are needed for implementing the standards; and ● creating certification criteria to determine whether health IT systems meet standards accepted or recognized by the Secretary of HHS, and then certifying systems that meet those criteria. The involvement of the two departments in these initiatives is important both because of the experience that the departments can offer the national effort, and also because their involvement helps ensure that the standards they adopt are consistent with the emerging federal standards. DOD and VA have made progress toward adopting health data interoperability standards that are newly recognized and accepted by the Secretary of HHS. The departments have identified these new standards, which relate to three HHS-recognized use cases, in their most recent Target Standards Profile. Nonetheless, the need to be consistent with the emerging federal standards adds complexity to the task faced by the two departments of extending their standards efforts to additional types of health information.
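The value of the vocabulary standards described above, such as the SNOMED CT terms the departments adopted for allergy reactions, can be illustrated with a small mapping sketch: locally worded diagnoses are normalized to a single shared code so both systems record the same concept consistently. The code values shown are placeholders, not actual SNOMED CT identifiers.

```python
# Why vocabulary standards matter: different local wordings for the same condition
# are mapped to one shared code. The identifiers are placeholders, not real
# SNOMED CT codes.

local_to_standard = {
    "hepatitis": "STD-0001",
    "liver inflammation": "STD-0001",        # different wording, same concept
    "inflammation of the liver": "STD-0001",
}

def normalize_diagnosis(local_term):
    """Map a locally entered term to the shared code, or flag it for review."""
    return local_to_standard.get(local_term.lower().strip(), "UNMAPPED")

print(normalize_diagnosis("Liver inflammation"))  # STD-0001
print(normalize_diagnosis("Hepatitis"))           # STD-0001
print(normalize_diagnosis("jaundice"))            # UNMAPPED -> needs human review
```

Agreeing on which shared codes to use, and keeping those choices aligned with the emerging federal standards, is precisely the standards work described above.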
The National Coordinator recognized the importance of their participation and stated it would not be advisable for VA and DOD to move significantly ahead of the national standards initiative; if they did, the departments might have to change the way their systems share information by adjusting them to the national standards later, as the standards continue to evolve. Using interoperable health IT to help improve the efficiency and quality of health care is a complex goal that requires the involvement of multiple stakeholders in both departments, as well as numerous activities taking place over an expanse of time. In view of this complexity, it is important to develop comprehensive plans that cover the full scope of the activities needed to reach the goal of interoperable health capabilities or systems. To be effective, these plans should be grounded in results-oriented goals and performance measures that allow the results of the activities to be monitored and assessed, so that the departments can take corrective action if needed. In the course of their health IT efforts, VA and DOD have faced considerable challenges in project planning and management. As far back as 2001 and 2002, we reported management weaknesses, such as inadequate accountability and poor planning and oversight, and recommended that the departments apply principles of sound project management. The departments' efforts to meet the recent requirements of the National Defense Authorization Act for Fiscal Year 2008 provide additional examples of such challenges, raising concerns regarding their ability to most effectively meet the September 2009 deadline for developing and implementing interoperable electronic health record systems or capabilities. The departments have identified key documents as defining their planned efforts to meet this deadline: the November 2007 VA/DOD Joint Executive Council Strategic Plan for Fiscal Years 2008–2010 (known as the VA/DOD Joint Strategic Plan) and the September 2008 DOD/VA Information Interoperability Plan (Version 1.0). These plans identify various objectives and activities that, according to the departments, are aimed at increasing health information sharing and achieving full interoperability. However, of the 45 objectives and activities identified in their plans, we previously reported that only 4 were documented with results-oriented (i.e., objective, quantifiable, and measurable) performance goals and measures that are characteristic of effective planning. ● An example of an objective, quantifiable, and measurable performance goal is DOD's objective of increasing the percentage of inpatient discharge summaries that it shares with VA from 51 percent, as of March 2009, to 70 percent by September 30, 2009. ● However, other goals in the plans are not measurable. For example, one objective is the development of a plan for interagency sharing of essential health images. Another objective is to review national health IT standards. In neither case are tangible deliverables described that would permit the departments to determine progress in achieving these goals. In view of the complexity and scale of the tasks required for the two departments to develop interoperable electronic health records, the lack of documented results-oriented performance goals and measures hinders their ability to measure and report their progress toward delivering new capabilities.
Both departments agreed with our January 2009 recommendation that they develop results- oriented goals and associated performance measures to help them manage this effort. Until they develop these goals and measures, the departments will be challenged to effectively assess their progress. In addition, we previously reported that the departments had not fully set up the interagency program office that was established in the National Defense Authorization Act for Fiscal Year 2008. According to department officials, this office will play a crucial role in coordinating the departments’ efforts to accelerate their interoperability efforts. These officials stated that having a centralized office to take on this role will be a primary benefit. Further, defining results-oriented performance goals and ensuring that these are met would be an important part of the task of the program office. However, the effort to set up the program office was still in its early stages. The departments had taken steps to set up the program office, such as developing descriptions for key positions and beginning to hire personnel, but they had not completed all necessary activities to meet their December 2008 deadline for the office to be fully operational. Both departments agreed with our July 2008 recommendation that the departments give priority to fully establishing the interagency program office. Since we last reported, the departments have continued their efforts to hire staff for the office with 18 of 30 positions filled as of March 5, 2009, but the positions of Director and Deputy Director are not yet filled with permanent hires. Until the departments complete key activities to set up the program office, it will not be positioned to be fully functional, or accountable for fulfilling the departments’ interoperability plans. Coupled with the lack of results-oriented plans that establish program commitments in measurable terms, the absence of a fully operational interagency program office leaves VA and DOD without a clearly established approach for ensuring that their actions will achieve the desired purpose of the act. In closing, Mr. Chairman, VA and DOD have made important progress in achieving electronic health records that are interoperable, but the departments continue to face challenges in managing the activities required to achieve this inherently complex goal. These include the need to continue to agree on standards for their own systems while ensuring that they maintain compliance with federal standards, which are still emerging as part of the effort to promote the nationwide adoption of health IT. In addition, the departments’ efforts face managerial challenges in defining goals and measures and setting up the interagency program office. Until these challenges are addressed, the risk is increased that the departments will not achieve the ability to share interoperable electronic health information to the extent and in the manner that most effectively serves military service members and veterans. This concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. If you have any questions on matters discussed in this testimony, please contact Valerie C. Melvin, Director, Information Management and Human Capital Issues, at (202) 512-6304 or [email protected]. Other individuals who made key contributions to this testimony are Mark Bird, Assistant Director; Barbara Collier; Neil Doherty; Rebecca LaPaze; J. 
Michael Resser; Kelly Shaw; and Eric Trout. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | For over a decade, the Department of Veterans Affairs (VA) and the Department of Defense (DOD) have been engaged in efforts to improve their ability to share electronic health information. These efforts are vital for making patient information readily available to health care providers in both departments, reducing medical errors, and streamlining administrative functions. In addition, Congress has mandated that VA and DOD jointly develop and implement, by September 30, 2009, electronic health record systems or capabilities that are fully interoperable and compliant with applicable federal interoperability standards. (Interoperability is the ability of two or more systems or components to exchange information and to use the information that has been exchanged.) The experience of VA and DOD in this area is also relevant to broader efforts to advance the nationwide use of health information technology (IT) in both the public and private health care sectors--a goal of both current and past administrations. In this statement, GAO describes VA's and DOD's achievements and challenges in developing interoperable electronic health records, including brief comments on how these apply to the broader national health IT effort. Through their long-running electronic health information sharing initiatives, VA and DOD have succeeded in increasing their ability to share and use health information. In particular, they are sharing certain clinical information (pharmacy and drug allergy data) in computable form--that is, in a format that a computer can understand and act on. This permits health information systems to provide alerts to clinicians on drug allergies, an important feature that was given priority by the departments' clinicians. The departments are now exchanging this type of data on over 27,000 shared patients--an increase of about 9,000 patients between June 2008 and January 2009. Sharing computable data is considered the highest level of interoperability, but other levels also have value. That is, data that are only viewable still provide important information to clinicians, and much of the departments' shared information is of this type. However, the departments have more to do: not all electronic health information is yet shared, and although VA's health data are all captured electronically, information is still captured on paper at many DOD medical facilities. To share and use health data has required, among other things, that VA and DOD agree on standards. At the same time, they are participating in federal standards-related initiatives, which is important both because of the experience that the departments bring to the national effort, and also because their involvement helps ensure that their adopted standards are compliant with federal standards. However, these federal standards are still emerging, which could complicate the departments' efforts to maintain compliance. Finally, the departments' efforts face management challenges. 
Specifically, the effectiveness of the departments' planning for meeting the deadline for fully interoperable electronic health records is reduced because their plans did not consistently identify results-oriented performance goals (i.e., goals that are objective, quantifiable, and measurable) or measures that would permit progress toward the goals to be assessed. Further constraining VA's and DOD's planning effectiveness is their inability to complete all necessary activities to set up the interagency program office, which is intended to be accountable for fulfilling the departments' interoperability plans. Defining goals and ensuring that these are met would be an important part of the task of the program office. Without a fully established office that can manage the effort to meet these goals, the departments increase the risk that they will not be able to share interoperable electronic health information to the extent and in the manner that most effectively serves military service members and veterans. Accordingly, GAO has recommended that the departments give priority to fully establishing the interagency program office and develop results-oriented performance goals and measures to be used as the basis for reporting interoperability progress. The departments concurred with these recommendations. |
The Comanche helicopter program began in 1983 to provide a family of high technology, low-cost aircraft that would replace the Army’s light helicopter fleet, which includes the AH-1 Cobra, OH-58 Kiowa, OH-6 Cayuse, and the UH-1 Iroquois (Huey). The Army subsequently decided to develop only a single Comanche aircraft capable of conducting either armed reconnaissance or attack missions. The Army intends for the Comanche to be part of its future or “objective” force. The Comanche is designed to have improved speed, agility, aircrew visibility, reliability, availability, and maintainability over current reconnaissance and attack helicopters. The helicopter is also designed for low observability (stealth) and is expected to be capable of deploying over long ranges without refueling. Lastly, the Comanche is being designed to provide enemy information to force commanders at all levels. Critical to achieving the Comanche’s desired capabilities is the successful development and integration of advanced technologies, especially for the mission equipment package. The mission equipment package includes an integrated communication system, piloting system, target acquisition system, navigation system, helmet-mounted display, survivability and early warning equipment, mission computer, and weapon management system. The Comanche program started in 1983 and is currently projected to continue through fiscal year 2028. A timeline of the Comanche’s acquisition history and schedule is provided below. Since our August 1999 review, the Comanche program’s estimated cost has increased significantly—from $43.3 billion to $48.1 billion—and costs are expected to increase further. In addition, the Comanche continues to experience scheduling delays and performance risks. These problems are due to a range of factors, such as understated acquisition program cost estimates; ambitious flight test schedules with substantial concurrency in test events; delays in another DOD program which had been counted on to develop a critical component of the aircraft; inadequate facilities to fully test and integrate system hardware and software; and considerable growth in aircraft weight. The Army has not updated the Comanche’s cost or schedule estimates since April 2000 and does not plan such an update until its in-progress program review in January 2003. The Comanche program’s latest cost estimate, in April 2000, shows estimated costs have increased by almost $4.8 billion—from $43.3 billion to $48.1 billion—since our last report. Table 2 identifies where the cost estimate has changed. The $75.3 million increase in research, development, testing, and evaluation resulted from added testing for the Comanche program. During the Milestone II decision process, the Defense Acquisition Executive directed that the Comanche testing program be expanded by adding more testing to fully demonstrate the aircraft’s reliability before completion of its engineering and manufacturing development phase. The $4.777 billion increase in estimated production cost was to address DOD concerns about the long-term affordability and stability of the Comanche program. Specifically, DOD directed the Army to add 10 percent to Comanche’s production unit cost estimate in order to ensure that annual planned procurement funding would be sufficient to cover planned procurement quantities. 
To reduce the annual funding increase resulting from this directive, the Army reduced Comanche’s peak annual production rate from 72 aircraft per year to 62 per year, which extended the planned delivery schedule by 3 years. The $67.5 million reduction in estimated military construction costs reflects changes in anticipated needs for operating and maintenance facilities. In January 2001, DOD added about $504 million in funding to the Comanche program over the next few years. About $84 million of the additional funds are earmarked for research, development, test, and evaluation, and the remaining $420 million for production. These additional funds have not yet been reflected in the program’s official cost estimates. The program office plans to use the additional development funding to at least partially address what had been unfunded requirements in three areas considered to be high risk: (1) developing and integrating the mission equipment package; (2) developing the technology to detect and isolate equipment problems (automatic fault isolation); and (3) developing and integrating satellite communication capabilities. The section on performance discusses these areas in more detail. The Comanche’s most recent cost estimate was made in April 2000, when DOD approved the program for entry into the engineering and manufacturing development phase. At that time, DOD’s Cost Analysis Improvement Group estimated that the Comanche program would need an additional $180 million for its engineering and manufacturing development phase. However, the higher costs estimated by the Cost Analysis Group were not included in the cost estimate when the program office established a new baseline for the Comanche program in April 2000. The Comanche program is scheduled for an in-progress program review in January 2003 to review, among other things, its cost estimate. DOD believes that this January 2003 review, along with other major program reviews and oversight processes will permit successful management of program risks. The Deputy Program Manager acknowledged that the Army’s cost estimate for the Comanche may need to be revised at this point. The Comanche program office also maintains a list of unfunded requirements. The additional funds recently added to the program have reduced these funding requirements, but the revised list still has unfunded requirements in the amount of $68 million. The program office acknowledges that, unless additional funds are obtained, some yet-to-be- determined program performance requirements could be impacted. We have reported that when development work and low-rate initial production are done concurrently, significant schedule delays that cause cost increases and other problems are not uncommon in early production. Also, production processes are often not able to consistently yield output of high quality when full-rate production begins. DOD’s guidance also states that programs in which development work and low-rate initial production are done concurrently typically have a higher risk of production items having to be retrofitted to make them work properly and of system design not being thoroughly tested. We have also reported that the discovery of problems in testing conducted late in development is a fairly common occurrence on DOD programs, as is the attendant “late cycle churn”, that is, the unanticipated effort that must be invested to overcome such problems. 
Further, these problems could be exacerbated if the program plans to produce a significant number of systems during the low-rate initial production period, before design and testing are completed. In August 1999, we reported that the Army would experience a 19-month delay in testing because the first pre-production aircraft for testing were expected to be delivered 19 months later than planned. We noted that, by retaining the December 2006 initial operating capability date, the delay in acquiring test aircraft would compress the majority of Comanche's flight-test schedule into the last 3 years of development. The compressed flight-test schedule would, in turn, shorten the available time for completing all test events and taking necessary corrective actions before the full-rate production decision. Since our last report, the first pre-production aircraft to be used for development testing is now scheduled for delivery in January 2004, adding an additional 3-month delay to the 19-month delay we reported in August 1999. As shown in figure 1 below, the delivery of pre-production Comanche aircraft has been delayed and, because the Army has retained the December 2006 full-rate production decision, the time available for testing, assessing, and correcting problems has been reduced. Many critical test events are now scheduled late in the development stages—during the low-rate initial production phase of the program—and, as shown in figure 2, many developmental and operational test events are scheduled to be conducted concurrently. The combination of compressing the development schedule and undertaking developmental and operational testing activities concurrently leaves the Army with little room to accommodate any delays that may result from assessing, correcting, and retesting problems found during testing. In Comanche's case, several critical subsystems—to be included in the mission equipment package—may not be available until the development flight-testing is well underway. These subsystems are very complex, state-of-the-art systems that have not been demonstrated on a helicopter platform like Comanche. As testing proceeds, any problems found will need to be analyzed, fixed, and retested. However, with the ambitious test schedule, there may not be time available between test events to correct problems and prepare properly for the next event. Further, the Army's schedule for developing and testing software for the Comanche may not be completed prior to the full-rate production decision. The contractor is experiencing a shortage of software engineers available to work on the Comanche contract. In addition, only about 1.4 million of the projected 1.9 million lines of computer code for the Comanche's mission equipment package will be completed by the time the package is to be tested on the initial pre-production aircraft. Additional segments of computer code for the mission equipment package will be introduced as developmental testing is underway. At this point, it is uncertain if all of the computer code for the full mission equipment package will be completed by the time the Army is scheduled to make a full-rate production decision for Comanche in late 2006. Finally, the Army plans to use pre-production aircraft that it considers production-representative for operational flight-testing. Before this testing is complete, the Army plans to begin producing a total of 84 low-rate initial production aircraft. These aircraft are to be used to equip Army helicopter units and to ramp up production.
To produce that many aircraft during low-rate initial production, the Army will have to ramp up its production capabilities rapidly and at a time when the aircraft design is still evolving as new subsystems are introduced and test results are evaluated. Specifically, the Army does not plan to freeze Comanche’s design configuration until January 2006, or 6 months after the low-rate initial production decision point. Making design changes and retrofits to a large number of aircraft already produced could be costly. In our last report, we noted that the Army was making modifications to the Comanche that would adversely affect some of the Comanche’s planned performance capabilities; for example, some modifications have added weight and drag to the aircraft. While their exact impacts are still unknown, these changes increase the risk that the Comanche’s planned performance goals may not be achieved. The Comanche continues to have several areas of high technical risk that jeopardize the achievement of several critical performance requirements. The Comanche’s ability to climb at a rate of 500 feet per minute is a key performance requirement for the aircraft. Since we last reported on the Comanche program, the aircraft’s projected empty weight has increased by 653 pounds—from 8,822 pounds to 9,475 pounds. At the current projected design weight of 9,475 pounds, the Comanche program office has acknowledged that the helicopter cannot achieve the required vertical rate of climb of 500 feet per minute without increasing the horsepower of the current engine. Consequently, the program office has assessed its achievement of the weight requirement as high risk. The Army offered its prime contractor for Comanche’s development, Boeing-Sikorsky, an award fee of $1.4 million to reduce the aircraft’s projected weight to 9,250 pounds. However, the contractor did not achieve the first iteration of weight reduction in December 2000. The program office is considering increasing the incentive fee to $5 million for the contractor to reduce the projected weight to 9,300 pounds in December 2001. The program office believes that it can achieve the required vertical rate of climb, even with the increase in Comanche’s weight, by increasing the horsepower of Comanche’s T-801 engine from its current rating of 1,131 to 1,201. The program office estimates that the increase in the engine’s power can be obtained at a cost of about $13 million, and this approach will be less costly than other weight reduction efforts. However, an increase in engine performance could adversely affect the expected life of the engine since it will have to perform about 47 degrees hotter than is normally required. According to the program office, this increased performance may not have an appreciable impact on the engine’s life. As noted earlier in this report, the successful development and integration of the mission equipment package is critical to meeting Comanche’s performance requirements. This package includes an integrated communications system, piloting system, target acquisition system, navigation system, helmet-mounted display, survivability and early warning equipment, mission computer, and weapons management system. The program office has assessed the achievement of this portion of its development effort as high risk.
In order to reduce this risk, the Army had planned to develop a mobile integration laboratory, called a hotbench, which simulates Comanche’s hardware, to integrate and test mission equipment package software before installing the software on the flight test aircraft. However, due to a shortage of development funds, the Army had listed the hotbench as an unfunded requirement. DOD recently provided additional funding to the Comanche program, which the program office plans to use to fully fund the hotbench. Despite the additional funding for the hotbench, the program office continues to identify the integration of Comanche’s mission equipment package as an area of high technical risk. A critical Comanche requirement is an on-board fault detection system that can rapidly and accurately provide information about equipment problems. With an on-board fault isolation system, the Army would be able to promptly identify and correct potential problems, according to the Comanche’s operational requirements document. Additionally, without the system, the time and cost of maintaining the aircraft will likely increase. According to the Army, this system needs to be 75 to 95 percent accurate—75 percent for mechanical and electrical equipment and 95 percent for avionics and electronics equipment. The Comanche program office has concluded that this requirement will be difficult to achieve within the current cost, weight, and packaging constraints, and does not expect to achieve a mature fault detection and fault isolation capability until 2 years after initial fielding. According to the program office, this system depends, in part, on a database built on flight data and equipment failure experience; therefore, the system improves with additional flight hours. The program office anticipates that after 2 years of flight testing, the system should meet the full level of predictability required. Although some of the recently provided development funding will be used by the Army in this area, the Comanche program has identified an additional $20 million unfunded requirement for the fault isolation capability. In some battle situations, the Army plans to use Comanche as a deep reconnaissance aircraft to provide critical information and situational awareness to joint forces. Satellite communication technology is necessary for the helicopter to be able to achieve the “beyond-line-of-sight” capability needed to carry out this function, according to the Comanche operational requirements document. To meet this need, the Army was planning to rely on satellite communication technology being developed and miniaturized as part of the Joint Strike Fighter program, which is being developed jointly by the Air Force, Navy, and Marines. However, in May 2000, Congress provided that the Joint Strike Fighter program could not enter into the engineering and manufacturing development phase until the Secretary of Defense certified the technological maturity of its critical technologies. This has delayed the Joint Strike Fighter program’s schedule for beginning its engineering and manufacturing development phase. When assessing the risk of its dependency on the Joint Strike Fighter program, the Comanche program office concluded that the helicopters in low-rate initial production would not have the beyond-line-of-sight communication capability if the Joint Strike Fighter program was delayed. The program office now believes that it must develop its own satellite communication capability.
However, the timely inclusion of this capability on the initially fielded Comanche helicopters remains a high-risk element of the development schedule. The Army has estimated that it will require about $58 million to develop this capability and plans to fully fund this effort with additional funds recently provided by DOD. Our work on best practices has found that product development in successful commercial firms is a clearly defined undertaking for which firms insist on having in hand the technology that meets customers’ needs before starting. The firms demand—and receive—specific knowledge about a new product before production begins. And they do not go forward unless a strong business case on which the program was originally justified continues to hold true. Such a knowledge-based process is essential to commercial firms getting better cost, schedule, and performance outcomes. It enables decision-makers to be reasonably certain about critical facets of the product under development when they need it. At the point of going into production, successful firms will already know that (1) technologies match customer requirements, that is, they can fit onto a product and function as expected, (2) the product’s design meets performance requirements, and (3) the product can be produced within cost, schedule, and quality targets. The Comanche program does not yet have this knowledge and is not likely to have it when it plans to begin low-rate initial production in June 2005. First, the Army does not yet know, and will not know until well after its low-rate initial production decision, whether certain technologies being developed will fit on the helicopter and function as expected. Our report on incorporating new technologies into programs indicated that demonstrating a high level of maturity before new technologies are incorporated into product development programs puts those programs in a better position to succeed. Further, technologies that were included in a product development before they were mature later contributed to cost increases and schedule delays for those products. While the Comanche program has made progress in the technology readiness level of its critical components, integration of those components into subsystems, such as the mission equipment package, and the helicopter as a whole remains high risk. In addition, the integration, development, and configuration of key satellite communication technology for inclusion in the integrated communication, navigation, and identification avionics have also been assessed as high risk. Finally, some of the technologies have not been developed to meet Comanche’s specific configuration requirements. For instance, the Comanche’s second generation forward-looking infrared sensor has been tested and proven on the Black Hawk helicopter by the Army’s night vision laboratory but not on the Comanche itself. Such testing needs to be done to ensure that the system can work together with other unique systems being developed for the Comanche, including the piloting, target acquisition, and navigation systems, which work as one unit. Comanche’s contractor has maintained that its mission equipment package technology is challenging because some key components have not been developed and configured in the required manner for the helicopter’s intended mission.
Second, as discussed earlier, the Army does not yet know, and may not know until well after the start of low-rate initial production, whether performance requirements can be met—including vertical rate of climb, on-board fault isolation, and beyond-line-of-sight communication requirements. The Army plans to conduct a limited user test before it begins low-rate initial production, but it is a rudimentary test and not a complete operational test that fully demonstrates the aircraft’s capabilities. By compressing many key events late in the development schedule and conducting developmental and operational testing activities concurrently, the Army is running the risk of not fully demonstrating many of its critical capabilities before its planned full-rate production decision. Under current plans, for example, the Army will not complete a full demonstration of its integrated mission equipment package until December 2006—over a full year after its low-rate initial production decision and within the same month that the Army plans to make its decision on Comanche full-rate production. Third, as noted earlier, it is still uncertain whether the Comanche can be developed within cost and schedule estimates. Although additional costs have been identified for the Comanche since it was last restructured, the full development cost will not be known until critical technology is fully developed, integrated, and tested. This will not occur until well after a low-rate initial production decision has been made in June 2005. The program office believes that it will know the cost of the initial production aircraft, which will have been negotiated prior to the low-rate initial production decision. However, at that time, the program office and the contractor will have limited experience and data relative to producing the fully developed Comanche helicopter. Until more experience and data are available, there is not a high level of confidence in the Army’s production cost estimate. Further, the Director of Operational Test and Evaluation, in assessing the results of the Comanche milestone II test data, indicated that it is highly unlikely that the Army can deliver the expected system performance within the current budget and schedule. The Director’s assessment revealed that, without an operational assessment of an integrated system, it is difficult to predict with any degree of confidence whether (1) the individual subsystems can be successfully integrated, (2) the subsystems will function properly in an operational environment, or (3) the subsystems, in concert, will provide the anticipated benefits in operational performance. In 1999, we reported that the Army started the Comanche program’s development too early in terms of technology readiness, which is contrary to best commercial practices. Further, in approving the program for engineering and manufacturing development, the Army accelerated the development of some components, reduced the number of test aircraft, and compressed the test schedule. Two years later, the program is confronted with rising development costs, a compressed development schedule, and several major areas of high technical risk. The Army plans to proceed to low-rate initial production in June 2005 and full-rate production in December 2006, both of which could be well in advance of attaining sufficient knowledge of the helicopter’s technical maturity, demonstrated performance capabilities, and production costs.
With such a scenario, the potential for adverse program outcomes is high—higher than expected costs, longer than expected schedules, and uncertain performance. DOD and Army officials acknowledge that the current program cost and schedule objectives are not achievable and should be changed to reflect more realistic objectives, but they believe that the planned January 2003 review for the Comanche program is the appropriate time to address such changes. Such a delay in revising the program’s cost and schedule estimate limits the visibility and knowledge that Army and DOD management as well as the Congress needs to (1) provide program oversight and direction; (2) make effective cost, schedule, and performance trade-off decisions; and (3) assess affordability and annual funding requirements. To improve management oversight and direction and achieve more favorable program outcomes, this report recommends that the Secretary of the Army reassess the program’s cost, schedule, and performance objectives, and revise those objectives to more achievable levels prior to submitting its next fiscal year budget. In commenting on a draft of this report, DOD partially concurred with our recommendation. DOD noted that it agrees with some of our concerns and recognizes there are risks in the currently planned Comanche engineering and manufacturing development program. DOD noted that these risks were understood during the Comanche milestone II review. At that time, the Defense Acquisition Executive directed that the program proceed as planned, but that interim decision reviews be conducted in January 2003 and June 2005 to review program status. DOD stated that these reviews, along with other major program review and oversight processes, will permit successful management of program risks. Nevertheless, DOD stated that it is currently examining whether any of Comanche's requirements should be deferred, in order to reduce the risk of not meeting cost and schedule objectives. DOD's examination of Comanche's requirements is consistent with our recommendation. We continue to believe that DOD should report on the results of this examination and any revisions to the program’s objectives to the defense committees of the Congress with its next budget request. DOD disagreed with a reference to our previous Comanche report stating that current program risks are caused by, among other things, the program being allowed to enter engineering and manufacturing development prior to maturation of key technologies. DOD maintains that the Comanche program successfully demonstrated its exit criteria prior to entering engineering and manufacturing development. However, the Comanche program’s demonstration of its exit criteria was not sufficient as a basis to move forward in the acquisition process. For example, the exit criteria did not require that the technologies used in Comanche be at or above specific levels of demonstrated readiness. As we previously reported, the Army's own assessments clearly indicated that several key areas of technology were not at those levels called for in commercial best practices guidelines. DOD's comments are reprinted in appendix I. Other comments provided by DOD were incorporated in the report as appropriate. 
To evaluate changes in the Comanche’s status with regard to cost, schedule, and performance and assess whether the Army has the certainty it needs to proceed with beginning production, we examined and compared program schedules, pertinent cost documents, and acquisition strategies, and discussed potential changes and causative factors with cognizant Comanche program officials. We analyzed flight-test plans, schedules, and reports and discussed significant issues with program officials. We reviewed program documents related to risk and analyzed program risks and development problems by comparing them with various test schedules and plans. To assess performance capabilities before beginning with production, we analyzed required and projected performance and compared it with the Comanche’s operational requirements. We relied on previous GAO best practices work to examine Comanche’s technological readiness levels for key program technologies. Our analyses focused on the impact of Comanche’s cost, schedule, and performance on the Army’s ability to field a Comanche helicopter that would meet its requirements and incorporate technological upgrades in its helicopter fleet. In performing our work, we obtained pertinent program documents and interviewed officials from the offices of the Secretary of Defense and the Army, Washington, D.C.; the Program Executive Office-Aviation and Comanche Program Office, Redstone Arsenal, Alabama; the U.S. Army Training and Doctrine Command, Fort Rucker, Alabama; the Comanche Joint Project Office, Huntsville, Alabama; and the Aviation Test and Evaluation Command, Alexandria, Virginia. We conducted our review from September 2000 through March 2001 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute this report until 5 days from its date. At that time, we will send copies of this report to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Thomas White, Secretary of the Army; Director, Office of Management and Budget; and other interested congressional committees and parties. We will also make copies available to others upon request. If you have any questions regarding this report, please contact me on (202) 512-4530. GAO contacts and major contributors to this report are listed in appendix II. In addition to those named above, Leon S. Gill, Wendy Smythe, Gary Middleton, and Cristina Chaplain made key contributions to this report. | As of August 1, 1999, the Army's Comanche helicopter program faced significant risks related to cost overruns, scheduling delays, and degraded performance. GAO concluded that proceeding to the next development phase with high levels of uncertainty was not in accordance with best practices followed by successful commercial firms. This report evaluates changes since 1999 in the Comanche's cost, schedule, and performance, and assesses whether the Army will have the knowledge it needs to proceed with its current production plans. GAO found that the Comanche program's total development and production cost estimate has increased by almost $4.8 billion. However, areas of high technical risks and unfunded requirements could further increase the program's costs. The program office does not plan to update its April 2000 current estimate to reflect these increases until January 2003. 
The Comanche's December 2006 full rate production decision date has not changed even though the risks of not meeting this date have increased. The Army continues to face the risk that critical performance requirements may not be met--at least for the helicopters it will initially produce. The Department of Defense (DOD) recently provided $84 million in additional development funding to help reduce some of these high-risk areas. Additionally, the Army is not likely to have the knowledge it needs to begin production when scheduled. It is also not likely to know whether some technologies being developed, such as those used for the mission equipment package, will work on the helicopter and function as expected. DOD is also unlikely to know whether the helicopter can be produced within current cost estimates. |
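As a rough illustration of the weight and engine power figures discussed in the report above, the sketch below computes the percentage changes involved. The weights and horsepower ratings are taken from the report; treating the power needed for the 500-feet-per-minute vertical climb as roughly proportional to weight is a simplifying assumption made only for this illustration, not an analysis from the report.

```python
# Back-of-the-envelope check on the Comanche weight and engine power figures
# cited in the report above. The weights and horsepower ratings come from
# the report text; treating required climb power as roughly proportional to
# weight is a simplifying assumption made only for this illustration.

original_empty_weight_lb = 8_822   # projected empty weight at the last report
current_empty_weight_lb = 9_475    # current projected empty weight (+653 lb)

current_hp = 1_131                 # T-801 rating cited in the report
proposed_hp = 1_201                # proposed uprated T-801 rating

weight_growth = (current_empty_weight_lb - original_empty_weight_lb) / original_empty_weight_lb
power_growth = (proposed_hp - current_hp) / current_hp

print(f"Empty weight growth:   {weight_growth:.1%}")   # about 7.4%
print(f"Engine power increase: {power_growth:.1%}")    # about 6.2%

# Under the proportional-power assumption, the roughly 6 percent power
# increase is of the same order as the roughly 7 percent weight growth;
# whether it is sufficient to restore the required climb rate depends on
# detailed performance modeling outside this sketch. The report notes the
# trade-off: the uprated engine must run about 47 degrees hotter, which
# could affect engine life.
```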
In an effort to promote and achieve various U.S. foreign policy objectives, trade preference programs have expanded in number and scope over the past 3 decades. The purpose of these programs is to foster economic development through increased trade with qualified beneficiary countries while not harming U.S. domestic producers. Trade preference programs extend unilateral tariff reductions to over 130 developing countries. Currently, the United States offers the Generalized System of Preferences (GSP) and three regional programs: the Caribbean Basin Initiative (CBI), the Andean Trade Preference Act (ATPA), and the African Growth and Opportunity Act (AGOA). Special preferences for Haiti became part of CBI with enactment of the Haitian Hemispheric Opportunity through Partnership Encouragement (HOPE) Act in December 2006. The regional programs cover additional products but have more extensive criteria for participation than the GSP program. Eight agencies have key roles in administering U.S. trade preference programs. Led by USTR, they include the Departments of Agriculture, Commerce, Homeland Security, Labor, State, and Treasury, as well as the U.S. International Trade Commission (ITC). GSP—the longest-standing U.S. preference program—expires December 31, 2008, as do ATPA benefits. At the same time, legislative proposals to provide additional, targeted benefits for the poorest countries are pending. U.S. trade preference programs are widely used, but some economists and others have raised questions about them. Their concerns include the potential for diversion of trade from other countries that these programs can cause; the complexity, scope of coverage, duration, and conditionality of these programs; and the potential opposition to multilateral and bilateral import liberalization that preferences can create. U.S. imports from countries benefiting from U.S. preference programs have increased significantly over the past decade. Total U.S. preference imports grew from $20 billion in 1992 to $92 billion in 2006. Most of this growth in U.S. imports from preference countries has taken place since 2000. Whereas total U.S. preference imports grew at an annual rate of 0.5 percent from 1992 to 1996, the growth quickened to an annual rate of 8 percent from 1996 to 2000, and 19 percent since 2000. This accelerated growth suggests an expansionary effect of increased product coverage and liberalized rules of origin for least-developed countries (LDC) under GSP in 1996 and for African countries under AGOA in 2000. There is also some evidence that leading suppliers under U.S. preference programs have “arrived” as global exporters. For example, the three leading non-fuel suppliers of U.S. preference imports—India, Thailand, and Brazil—were among the top 20 world exporters and U.S. import suppliers in 2007, and their exports in 2007 grew faster than world exports, according to the World Trade Organization (WTO). Preference programs entail three critical policy trade-offs. First, the programs are designed to offer duty-free access to the U.S. market to increase beneficiary trade, but only to the extent it does not harm U.S. industries. U.S. preference programs provide duty-free treatment for over half of the 10,500 U.S. tariff lines, in addition to those that are already duty-free on a most favored nation basis. But they also exclude many other products from duty-free status, including some that developing countries are capable of producing and exporting.
GAO’s analysis showed that notable gaps remain, particularly in agricultural and apparel products. For 48 GSP-eligible countries, more than three-fourths of the value of U.S. imports that are subject to duties (i.e., are dutiable) is left out of the programs. For example, just 1 percent of Bangladesh’s dutiable exports to the United States and 4 percent of Pakistan’s are eligible for GSP. Although regional preference programs tend to have more generous coverage, they sometimes feature “caps” on the amount of imports that can enter duty-free, which may significantly limit market access. Imports subject to caps under AGOA include certain meat products, a large number of dairy products, many sugar products, chocolate, a range of prepared food products, certain tobacco products, and groundnuts (peanuts), the latter being of particular importance to some African countries. The second trade-off is related and involves deciding which developing countries can enjoy particular preferential benefits. A few LDCs in Asia are not included in the U.S. regional preference programs, although they are eligible for GSP-LDC benefits. Two of these countries—Bangladesh and Cambodia—have become major exporters of apparel to the United States and have complained about the lack of duty-free access for their goods. African private-sector spokesmen have raised concerns that giving preferential access to Bangladesh and Cambodia for apparel might endanger the nascent African apparel export industry that has grown up under AGOA. Certain U.S. industries have joined African nations in opposing the idea of extending duty-free access for apparel from these countries, arguing that these nations are already so competitive in exporting to the United States that in combination they surpass U.S. FTA partners Mexico and CAFTA, as well as the Andean/AGOA regions, which are the major export market for U.S. producers of textiles. This same trade-off involves decisions regarding the graduation of countries or products from the programs. It relates to the original intention that preference programs would confer temporary trade advantages on particular developing countries, advantages that would eventually become unnecessary as countries became more competitive. Specifically, the GSP program has mechanisms to limit duty-free benefits by “graduating” countries that are no longer considered to need preferential treatment, based on income and competitiveness criteria. Since 1989, 28 countries have been graduated from GSP, mainly as a result of “mandatory” graduation criteria such as high income status or joining the European Union. Five countries in the Central American and Caribbean region were recently removed from GSP and CBI/CBTPA when they entered free trade agreements with the United States. In the GSP program, the United States also pursues an approach of ending duty-free access for individual products from a given country by means of import ceilings—Competitive Needs Limitations (CNL). Over one-third of the trade from GSP beneficiaries—$13 billion in imports in 2006—is no longer eligible for preferences because countries have exceeded CNL ceilings for those products. Although the intent of country and product graduation is to focus benefits on those countries most in need of the competitive margin that preferences provide, some U.S. and beneficiary country officials observe that remaining GSP beneficiaries will not necessarily profit from another country’s loss of preference benefits.
We repeatedly heard concerns that China would be most likely to gain U.S. imports as a result of a beneficiary’s loss of preferences. In 2007, the President revoked eight CNL waivers as a result of legislation passed in December 2006. Consequently, over $3.7 billion of trade in 2006 from six GSP beneficiaries—notably Brazil, India, and Thailand—lost duty-free treatment. Members of the business community raised concerns that revocation of these waivers would harm U.S. business interests while failing to provide more opportunities for poorer beneficiaries. GAO’s analysis showed that China and Hong Kong were the largest suppliers of the precious metal jewelry formerly eligible under GSP for duty-free import from India and Thailand; Canada, Mexico, Japan, and China were the leading competitors for Brazil’s motor parts. Policymakers face a third trade-off in setting the duration of preferential benefits in authorizing legislation. Preference beneficiaries and U.S. businesses that import from them agree that longer and more predictable renewal periods for program benefits are desirable. Private-sector and foreign government representatives have complained that short program renewal periods discourage longer-term productive investments that might be made to take advantage of preferences, such as factories or agribusiness ventures. Members of Congress have recognized this argument with respect to Africa and, in December 2006, Congress renewed AGOA’s third-country fabric provisions until 2012 and AGOA’s general provisions until 2015. However, some U.S. officials believe that periodic program expirations can be useful as leverage to encourage countries to act in accordance with U.S. interests such as global and bilateral trade liberalization. Furthermore, making preferences permanent may deepen resistance to U.S. calls for developing country recipients to lower barriers to trade in their own markets. Global and bilateral trade liberalization is a primary U.S. trade policy objective, based on the premise that increased trade flows will support economic growth for the United States and other countries. Spokesmen for countries that benefit from trade preferences have told us that any agreement reached under the Doha round of global trade talks at the WTO must, at a minimum, provide a significant transition period to allow beneficiary countries to adjust to the loss of preferences. Preference programs have proliferated over time. In response to differing statutory requirements, agencies pursue different approaches to monitoring the various criteria set for programs. The result is a lack of systematic review and little to no reporting on impact. U.S. trade preferences have evolved into an increasingly complex array of programs. Congress generally considers these programs separately, partly because they have disparate termination dates. Many countries participate in more than one of these programs. Of the 137 countries and territories eligible for preference programs, as of January 1, 2007, 78 benefited from more than one program, and 34 were eligible for more than two programs. While there is overlap in various aspects of trade preference programs, each program is currently considered separately by Congress based on its distinct timetable and expiration date. Typically, the focus has been on issues relevant to specific programs, such as counternarcotics cooperation efforts in the case of ATPA, or phasing out benefits for advanced developing countries in the case of GSP.
As a result, until last year’s hearing before this committee, congressional deliberations had not provided for cross-programmatic consideration or oversight. The oversight difficulties associated with this array of preference programs and distinct timetables are compounded by different statutory review and reporting requirements for agencies. Reflecting the relevant statutory requirements, two different approaches—a petition process and periodic reviews—have evolved to monitor compliance with criteria set for various programs. We observed advantages to each approach, but individual program reviews appear disconnected and result in gaps. The petition-driven GSP reviews of country practices and product coverage have the advantage of adapting the programs to changing market conditions and the concerns of businesses, foreign governments, and others. However, the petition process can result in gaps in reviews of country compliance with the criteria for participation: From 2001 to 2006, three-quarters of the countries eligible only for GSP were not examined at all for their conformity with eligibility criteria. Long periods passed between overall reviews of GSP. USTR completed an overall review of the GSP program in fall 2006; the previous general review of the program had been completed approximately 20 years earlier, in January 1987. The petition-driven review process also fails to systematically incorporate other ongoing monitoring efforts. For example, the lack of review under GSP provisions of any of the 26 preference beneficiary countries cited by USTR in 2006 for having problems related to the adequate and effective protection of U.S. intellectual property rights (IPR) makes it appear that no linkage exists between GSP and ongoing monitoring of IPR protection abroad. The periodic reviews under the regional programs offer more timely and consistent evaluations of country performance against the criteria for participation, but may still miss important concerns. For example, 11 countries that are in regional programs were later the subject of GSP complaints in the 2001 to 2006 period: Although AGOA has the most intensive evaluation of country performance against the criteria for participation, the GSP process later validated concerns with AGOA beneficiaries Swaziland and Uganda on labor issues and resulted in further progress in resolving them. The African country of Equatorial Guinea has been reviewed for AGOA eligibility and found to be ineligible. Yet Equatorial Guinea has not been subject to a GSP country practice petition or reviewed under GSP. As a result, Equatorial Guinea remains eligible for GSP, and more than 90 percent of its $1.7 billion in exports to the United States entered duty free under that program in 2006. Many developing countries have expressed concern about their inability to take advantage of trade preferences because they lack the capacity to participate in international trade. Sub-Saharan Africa has been the primary focus of U.S. trade capacity-building efforts linked to the preference programs, with the United States allocating $394 million in fiscal year 2006 to that continent. Although AGOA authorizing legislation refers to trade capacity assistance, USTR officials noted that Congress has not appropriated funds specifically for that purpose. However, USTR has used the legislative language as leverage with U.S. agencies that have development assistance funding to target greater resources to trade capacity building. In other regions of the world, U.S.
trade capacity building assistance has less linkage to preference programs. Separate reporting for the various preference programs makes it difficult to measure progress toward achieving the fundamental and shared goal of promoting economic development. Only one program (CBI) requires agencies to directly report on impact on the beneficiaries. Nevertheless, in response to statutory requirements, several government agencies report on certain economic aspects of the regional trade preference programs. However, different approaches are used, resulting in disparate analyses that are not readily comparable. Agencies do not regularly report on the economic development impact of GSP. Moreover, there is no evaluation of how trade preferences, as a whole, affect economic development in beneficiary countries. To address the concerns I have summarized today, in our March 2008 report, GAO recommended that USTR periodically review beneficiary countries that have not been considered under the GSP or regional programs. Additionally, we recommended that USTR should periodically convene relevant agencies to discuss the programs jointly. In response, USTR is undertaking two actions. First, USTR will conduct a review of the operation and administration of U.S. preference programs to explore practical steps that might improve existing communication and coordination across programs. Second, beginning with the Annual Report of the President of the United States on the Trade Agreements Program to be issued on March 1, 2009, the discussion of the operation of all U.S. trade preference programs will be consolidated into its own section. We also suggested that Congress should consider whether trade preference programs’ review and reporting requirements may be better integrated to facilitate evaluating progress in meeting shared economic development goals. We believe that the hearings held by the committee last year and again today are responsive to the need to consider these programs in an integrated fashion and are pleased to be able to contribute to this discussion. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or other members of the committee may have. For further information on this testimony, please contact Loren Yager at (202) 512-4347, or by e-mail at [email protected]. Juan Gobel, Assistant Director; Kim Frankena, Assistant Director; R. Gifford Howland; Karen Deans; Ernie Jackson; and Ken Bombara made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | U.S. trade preference programs promote economic development in poorer nations by providing duty-free export opportunities in the United States. The Generalized System of Preferences, Caribbean Basin Initiative, Andean Trade Preference Act, and African Growth and Opportunity Act unilaterally reduce U.S. tariffs for many products from over 130 countries. However, two of these programs expire partially or in full this year, and Congress is exploring options as it considers renewal. 
This testimony describes the growth in preference program imports since 1992, identifies policy trade-offs concerning these programs, and evaluates the overall U.S. approach to preference programs. The testimony is based on two recent studies on trade preference programs, issued in September 2007 and March 2008. For those studies, GAO analyzed trade data, reviewed trade literature and program documents, interviewed U.S. officials, and conducted fieldwork in six trade preference beneficiary countries. Total U.S. preference imports grew from $20 billion in 1992 to $92 billion in 2006, with most of this growth taking place since 2000. The increases from preference program countries reflect legislation passed by Congress in 1996 and 2000 that enhanced preference programs and added new eligible products. Preference programs give rise to three critical policy trade-offs. First, preferences entail a trade-off to the extent that opportunities for beneficiary countries to export products duty free must be balanced against U.S. industry interests. Some products of importance to developing countries, notably agriculture and apparel, are ineligible by statute as a result. Second, certain developing countries have been given additional preferential benefits for such import-sensitive products under regional programs. But some of the poorest countries, outside targeted regions, do not qualify. Third, Congress faces a trade-off between longer program renewals, which may encourage investment but may also undermine support for the likely greater economic benefits of broader trade liberalization, a key U.S. goal, and shorter renewals, which may provide opportunities to leverage the programs to meet evolving priorities. Trade preference programs have proliferated over time, becoming more complex, but neither Congress nor the administration formally considers them as a whole. Responsive to their legal mandates, the Office of the U.S. Trade Representative (USTR) and other agencies use different approaches to monitor compliance with program criteria, resulting in disconnected review processes and gaps in addressing some countries and issues. Disparate reporting makes it difficult to determine progress on programs’ contribution to economic development in beneficiary countries.
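As a rough check on the growth arithmetic cited in this testimony, the sketch below computes the average annual growth rate implied by the $20 billion (1992) and $92 billion (2006) preference-import figures. Only those two endpoint values come from the testimony; the function itself is just the standard compound-growth formula, included for illustration.

```python
# Average annual (compound) growth rate implied by the preference-import
# figures cited in the testimony: $20 billion in 1992 and $92 billion in
# 2006. The dollar figures come from the testimony; the formula is the
# standard compound-growth calculation, shown only for illustration.

def implied_annual_growth(start_value: float, end_value: float, years: int) -> float:
    """Constant annual growth rate that takes start_value to end_value over `years`."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

overall = implied_annual_growth(20e9, 92e9, 2006 - 1992)
print(f"Implied average annual growth, 1992-2006: {overall:.1%}")  # roughly 11.5%

# The testimony reports much slower growth early in the period (about
# 0.5 percent per year from 1992 to 1996) and much faster growth later
# (about 19 percent per year after 2000), so this single average masks
# the acceleration that followed the 1996 GSP and 2000 AGOA changes.
```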
In reaction to allegations of widespread misconduct and abusive practices involving mutual funds, regulators have responded with various proposals. In early September 2003, the Attorney General of the State of New York filed charges against a hedge fund manager for arranging with several mutual fund companies to improperly trade in fund shares and profit at the expense of other fund shareholders. Since then, widening federal and state investigations of illegal late trading and improper timing of fund trades have involved a growing number of prominent mutual fund companies and brokerage firms. One of the abuses that has come to light recently is late trading. Under current rules, funds accept orders to purchase and redeem fund shares at a price based on the current net asset value, which most funds calculate once a day at 4:00 p.m. Eastern Time. Many investors, however, purchase mutual fund shares through intermediaries such as broker-dealers, banks, and retirement savings plans. Instead of submitting hundreds or even thousands of individual purchase and redemption orders each day, these intermediaries typically aggregate orders received from investors and submit a single purchase or redemption order that nets all the individual shares their customers are seeking to buy or sell. Because this processing takes time, SEC rules permit these intermediaries to forward the order information to funds after 4:00 p.m. However, late trading occurs when some investors are able to illegally purchase or sell mutual fund shares after the 4:00 p.m. Eastern Time close of U.S. securities markets, the time at which funds typically price their shares. An investor permitted to engage in late trading could be buying or selling shares at the current day’s 4:00 p.m. price with knowledge of developments in the financial markets that occurred after 4:00 p.m. Such investors thus have unfair access to opportunities for profit that are not provided to other fund shareholders. The extent to which some investors were allowed to submit late trading orders may have been significant. In September 2003, SEC sought information from fund advisers and broker-dealers about their pricing of mutual fund orders and late trading policies. SEC’s preliminary analysis of this information showed that more than 25 percent of the 34 major broker-dealers that responded had customers that still received that day’s price for orders they had placed or confirmed after 4:00 p.m. As of March 1, 2004, SEC had formally announced seven enforcement cases involving broker-dealers and other firms that were allegedly involved in late trading schemes; other cases may be forthcoming. We will be initiating a review of the adequacy of SEC’s enforcement efforts and the sanctions that it can apply and has applied in these cases and will be reporting separately on these issues later this year. In addition, legislation is under consideration in the House of Representatives that would expand SEC’s enforcement capabilities by raising the civil penalties for securities law violations, enhance the investigative procedures available to SEC, and streamline the process by which fines are disbursed among injured parties. Another abuse that has come to light is known as market timing. Market timing occurs when certain fund investors place orders to take advantage of temporary disparities between the share value of a fund and the values of the underlying assets in the fund’s portfolio. For example, U.S.
mutual funds that use the last traded price for foreign securities (whose markets close hours before the U.S. markets) to value their portfolio when the U.S. markets close could create opportunities for market timing if events that subsequently occurred were likely to cause significant movements in the prices of those foreign securities when their home markets reopen. Market timing, although not currently illegal, can be unfair to long-term fund investors because it provides the opportunity for selected fund investors to profit from fund assets at the expense of long-term investors. The following example illustrates how market timing transactions can reduce the return to long-term shareholders of a fund. As shown in the figure, the loss to long-term holders of the fund in this case is only $.01 per share. Although the amount by which a single market timing transaction reduces a fund’s overall return can be small, repeated and large transactions over long periods of time can have a greater cumulative effect. For example, one fund company whose staff were accommodating market timing transactions by 10 different investors estimated that these investors earned $22.8 million through their trading and that these activities cost its funds $2.7 million over a period of several years. In addition, the redemption fees that these investors should have paid, but did not, amounted to another $5 million. Market timing may also have been widespread. According to testimony by SEC’s Director of Enforcement, although most mutual funds have policies that discourage market timing, this strategy was popular among some individuals and institutional traders who attempted to conceal their identities from fund companies. He also stated that 30 percent of the broker-dealers responding to an SEC information request reported assisting customers in attempting to conduct market timing trades by using methods such as breaking their orders into smaller sizes to avoid detection by the fund companies. Of the 12 cases SEC formally opened that involved market timing activities, including five cases that also involved late trading, two have been settled. In the settlement for one case that involved both late trading and market timing, SEC ordered the firm to pay fines and disgorgements of $225 million. In the other case, SEC ordered the firm to pay $250 million in fines and disgorgements. NASD also has brought various enforcement cases against broker-dealers involving late trading and market timing, including one in which a broker-dealer was fined $1 million and ordered to provide restitution of more than $500,000 for failing to prevent market timing of an affiliated firm’s mutual funds. Additional abusive practices associated with mutual funds have also come to light. To facilitate late trading and market timing arrangements, some fund advisers selectively disclosed information about their funds’ portfolio holdings to outsiders. They also allowed these parties to engage in late trading or market timing in their funds. For example, in one SEC case a fund manager allowed a hedge fund to engage in market timing in a fund that he managed. The fund manager also disclosed portfolio information to a broker that enabled brokerage customers to conduct market timing transactions in the funds. In another state-administered case, a hedge fund executive obtained special trading privileges from several mutual fund companies that allowed him to engage in late trading and market timing in those funds.
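To make the dilution mechanism described above concrete, the sketch below walks through a simplified, hypothetical market timing trade against a stale net asset value. All of the numbers in it are illustrative assumptions; they are not taken from the report, the figure it cites, or any actual fund.

```python
# Simplified, hypothetical illustration of how a market timing trade against
# a stale net asset value (NAV) can dilute long-term shareholders. Every
# number below is an illustrative assumption, not a figure from the report
# or from any actual fund.

fund_assets = 100_000_000.0          # true portfolio value after post-close news
shares_outstanding = 10_000_000.0
stale_nav = 9.90                     # NAV computed from stale foreign closing prices
true_nav = fund_assets / shares_outstanding  # 10.00 per share if prices were current

# The timer buys at the stale NAV even though the underlying assets are worth more.
timer_purchase = 5_000_000.0
timer_shares = timer_purchase / stale_nav

# After the purchase, the fund holds its portfolio plus the timer's cash.
new_assets = fund_assets + timer_purchase
new_shares = shares_outstanding + timer_shares

# When the foreign markets reopen and prices update, the NAV reflects true
# values, but the gain is now spread across more shares than it should be.
updated_nav = new_assets / new_shares
timer_profit = timer_shares * updated_nav - timer_purchase
dilution_per_share = true_nav - updated_nav

print(f"Timer buys {timer_shares:,.0f} shares at the stale ${stale_nav:.2f} NAV")
print(f"Updated NAV: ${updated_nav:.4f} (vs. ${true_nav:.2f} with no timer)")
print(f"Timer's one-day gain: ${timer_profit:,.0f}")
print(f"Dilution borne by long-term holders: ${dilution_per_share:.4f} per share")
```

Repeated over many days and with larger amounts, this same arithmetic can produce the kind of cumulative losses the report attributes to accommodated market timers.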
In addition to enforcement actions, SEC has also proposed amending regulatory rules to address late trading, market timing, and selective disclosure abuses. In December 2003, SEC proposed amending the rule that governs how mutual funds price their shares and receive orders for share purchases or sales. Since many of the cases of late trading involved orders submitted through intermediaries, including banks and pension plans not regulated by SEC, the proposed amendments to its rules would require that orders to purchase or redeem mutual fund shares be received by a fund, its transfer agent, or a registered clearing agency before the time of pricing (that is, 4:00 p.m. Eastern Time). Many organizations that purchase mutual fund shares, particularly those that administer retirement savings plans, have expressed concerns that such a “hard close” would unfairly prohibit some of their participants from receiving the same day’s price on share purchases. Because intermediaries generally combine individual investor orders and submit single orders to funds to buy or sell, many officials at such firms are concerned that the time required to complete this processing will not allow them to meet the 4:00 p.m. deadline. In such cases, investors purchasing shares in Western states or through intermediaries would either have to submit their trades earlier than other investors in order to receive the current day’s price or receive the next day’s price. A letter from two investor advocacy groups commenting on SEC’s proposal indicated that implementing the hard close would relegate some retail investors to the status of “second-class shareholders.” Some plan sponsor organizations and plan record keepers have also raised concerns about the potentially significant administrative costs associated with adopting systems to accommodate the 4:00 p.m. hard close and other proposed rules. Because the hard close could affect some investors’ ability to trade at the current day’s price, some groups have called on SEC to allow industry participants to develop systems of internal controls that would serve to ensure that intermediaries receive individual orders before 4:00 p.m. With such controls in place, these orders could continue to be processed after this time. However, SEC officials told us that they were skeptical that any system that relies on internal controls could provide certainty that late trading was not occurring, because many of the late trading abuses happened at firms that purportedly had such controls in place. Nevertheless, SEC remains open to the possibility of the development of systems that could reasonably detect and deter late trading. In its proposals, SEC requests comments on various approaches designed to prevent late trading. Such protections could include a system that provides an electronic or physical time-stamp on orders. Other possible controls could include certifications that the intermediary had policies and procedures in place designed to prevent late trades, or audits by independent public accountants. Because multiple regulators oversee the operations of these financial intermediaries, any assessment of the reasonableness of recommended systems or controls would likely require effective coordination. SEC is also proposing to take actions to address market timing. On December 11, 2003, SEC released a rule proposal to provide greater transparency to funds’ market timing policies.
Specifically, SEC would require mutual funds to disclose in their prospectuses the risks to shareholders of the frequent purchase and redemption of investment company shares, and fund policies and procedures pertaining to frequent purchases and redemptions. The proposal also would require funds to explain both the circumstances under which they would use fair value pricing and the effects of using fair value pricing. Another rule would require funds to adopt fair value pricing policies that require them, among other things, to monitor for circumstances that may necessitate the use of fair value pricing, establish criteria for determining when market quotations are no longer reliable for a particular portfolio security, and provide a methodology or methodologies by which the funds determine the current fair value of portfolio securities. Also, SEC is seeking comment in one of its proposals on additional ways to improve the implementation of fair value pricing. In addition, the proposal would require funds to disclose policies and procedures pertaining to the disclosure of information on the funds’ portfolio holdings, and any ongoing arrangements to make available information about their portfolio securities. These additional disclosures would enable investors to better assess risks, policies, and procedures, and determine whether a fund’s policies and procedures were in line with their expectations. Disclosure of a fund’s procedures in these areas would also allow SEC to better examine a fund’s compliance with its stated procedures and hold fund managers accountable for their actions. To further stem market timing, on March 3, 2004, SEC issued a proposed new rule to require mutual funds to impose a 2-percent redemption fee on the proceeds of shares redeemed within 5 business days of purchase. According to the proposal, the proceeds from the redemption fees would be retained by the fund, becoming a part of fund assets. In addition, the proposal addresses the pass-through of information from omnibus accounts maintained by intermediaries. Specifically, the proposal identifies three alternatives for funds to ensure that redemption fees are imposed on the appropriate market timers through the use of Taxpayer Identification Numbers. On at least a weekly basis, intermediaries would be required to provide the fund with purchase and redemption information for each shareholder within an omnibus account to enable the fund to detect market timers and properly assess redemption fees. The rule is designed to require short-term shareholders to reimburse funds for costs incurred as a result of investors using short-term trading strategies, such as market timing. The proposal would also include an emergency exception that would allow an investor not to pay a redemption fee in the event of an unanticipated financial emergency. Unlawful late trading and certain market timing activities, which are not currently illegal, can be unfair to long-term investors because these activities provide the opportunity for selected fund investors to profit from fund assets at the expense of long-term fund investors. SEC’s proposal to address late trading with a hard 4:00 p.m. close appears, in the short term, to be the solution that provides the most certainty that all orders being submitted to the funds legitimately deserve that day’s price.
However, we also recognize that this action could have a significant impact on many investors, particularly those in employer-based retirement savings plans, who own fund shares through financial intermediaries. As a result, we urge the Commission, as a supplement to its planned action, to explore alternatives to the hard 4:00 p.m. close more fully and to revisit formally the question of how best to prevent late trading. Since some of the financial intermediaries involved are either overseen by other regulators or, in the case of third-party pension plan administrators, not overseen by any regulator, any such assessment should include the development of a strategy for overseeing the intermediary processing of mutual fund trades. Having a sound strategy for oversight of the varied participants in the mutual fund industry would ensure that all relevant entities are held equally accountable for compliance with all appropriate laws. We also commend SEC for proposing to require that mutual funds more fully disclose their market timing and portfolio disclosure policies. Increasing the transparency of these policies will give industry participants an incentive to ensure that their policies are sound and will provide investors with information that they can use to distinguish between funds on the basis of these policies. The disclosures will also provide regulators and others with information to hold these firms accountable for their actions. However, such disclosures would likely also require improving related investor education programs to better ensure that investors understand the importance of these new disclosures. We also support SEC’s redemption fee proposal as a means of discouraging market timing. Placing the proceeds of the fee back in the fund itself helps to ensure that the actions of short-term traders do not financially harm long-term investors, including pension plan participants who hold such funds. Mutual fund boards of directors have a responsibility to protect shareholder interests, and SEC has issued various proposals to increase the effectiveness of these bodies. In particular, independent directors, who are not affiliated with the investment adviser, play a critical role in protecting mutual fund investors. To improve the independence of fund boards, SEC has issued various proposals to alter the structure of these boards and task them with additional duties. Because the organizational structure of a mutual fund can create conflicts of interest between the fund’s investment adviser and its shareholders, the law governing U.S. mutual funds requires funds to have a board of directors to protect the interests of the fund’s shareholders. A fund is usually organized by an investment management company or adviser, which is responsible for providing portfolio management, administrative, distribution, and other operational services. In addition, the fund’s officers are usually provided, employed, and compensated by the investment adviser. The adviser charges a management fee, which is paid with fund assets, to cover the costs of these services. With the management fee representing its revenue from the fund, the adviser’s desire to maximize its revenues could conflict with shareholder goals of reducing fees. As one safeguard against this potential conflict, the Investment Company Act of 1940 (the Investment Company Act) requires mutual funds to have boards of directors to oversee shareholder interests.
These boards must also include independent directors who are not employed by or affiliated with the investment adviser. As a group, the directors of a mutual fund have various responsibilities and in some cases, the independent directors have additional duties. In particular, the independent directors also have specific duties to approve the investment advisory contract between the fund and the investment adviser and the fees that will be charged. Specifically, section 15 of the Investment Company Act requires that the terms of any advisory contracts and renewals of advisory contracts be approved by a vote of the majority of the independent directors. Under section 36(b) of the Investment Company Act, investment advisers have a fiduciary duty to the fund with respect to the fees they receive, which under state common law typically means that the adviser must act with the same degree of care and skill that a reasonably prudent person would use in connection with his or her own affairs. Section 36(b) also authorizes actions by shareholders and SEC against an adviser for breach of this duty. Courts have developed a framework for determining whether an adviser has breached its duty under section 36(b), and directors typically use this framework in evaluating advisory fees. This framework finds its origin in a Second Circuit Court of Appeals decision, in which the court set forth the factors relevant to determining whether an adviser’s fee is excessive. The court in this case stated that to be guilty of a breach under section 36 (b), the fee must be “so disproportionately large that it bears no reasonable relationship to the services rendered and could not have been the product of arms-length bargaining.” The standards developed in this case, and in cases that followed, served to establish current expectations for fund directors with respect to fees. In addition to potentially considering how a fund’s fee compared to those of other funds, this court indicated that directors might find other factors more important, including the nature and quality of the adviser’s services, the adviser’s costs to provide those services, the extent to which the adviser realizes and shares with the fund economies of scale as the fund grows, the volume of orders that the manager must process, indirect benefits to the adviser as the result of operating the fund, and the independence and conscientiousness of the directors. Some industry experts have criticized independent directors for not exercising their authority to reduce fees. For example, in a speech to shareholders, one industry expert stated that mutual fund directors have failed in negotiating management fees. The criticism arises in part from the annual contract renewal process, in which boards compare fees of similar funds. However, the directors compare fees with the industry averages, which the experts claim provides no incentive for directors to seek to lower fees. Another industry expert complained that fund directors are not required to ensure that fund fees are reasonable, much less as low as possible, but instead are only expected to ensure that fees fall within a certain range of reasonableness. In contrast, an academic study we reviewed criticized the court cases that have shaped directors’ roles in overseeing mutual fund fees. 
The authors noted that these cases generally found that comparing a fund's fees to those of other similar investment management services, such as pension plans, was inappropriate because fund advisers do not compete with each other to manage a particular fund. Without being able to compare fund fees to these other products, the study's authors said that investors bringing these cases lacked sufficient data to show that a fund's fees were excessive.

In light of concerns over director roles and effectiveness, including concerns arising from the recently alleged abusive practices, SEC has taken various actions to improve board governance and strengthen the compliance programs of fund advisers. To strengthen the hand of independent directors when dealing with fund management, SEC issued a proposal in January 2004 to amend rules under the Investment Company Act to alter the composition and duties of many fund boards. These reforms include requiring an independent chairman for fund boards of directors; increasing the percentage of independent directors from a majority to at least 75 percent of a fund's board; requiring fund independent directors to meet at least quarterly in a separate session; and providing the independent directors with authority to hire employees and others to help them fulfill their fiduciary duties. Under the Investment Company Act, only individuals who are not "interested" can serve as independent directors. Section 2(a)(19) of the Investment Company Act defines the term "interested person" to include the fund's investment adviser, principal underwriter, and certain other persons (including their employees, officers, or directors) who have a significant relationship with the fund, its investment adviser, or principal underwriter. Broker-dealers that distribute the fund's shares or persons who have served as counsel to the fund would also be considered interested. However, SEC has suggested that Congress give it authority to fill gaps in the statute that have permitted persons to serve as independent directors who do not appear to be sufficiently independent of fund management. For example, the statute permits a former executive of the fund's adviser to serve as an independent director two years after the person has retired from his position. This permits an adviser to use board positions as a retirement benefit for its employees. The statute also permits relatives of fund managers to serve as independent directors as long as they are not members of the "immediate family" or affiliated persons of the fund. In one case, SEC found that an uncle of the fund's portfolio manager served as an independent director of the fund. Giving SEC additional rulemaking authority to define the term "interested person" clearly seems appropriate.

As part of its proposal to alter the structure of fund boards, SEC is also proposing that fund directors perform, at least annually, an evaluation of the effectiveness of the board and its committees. This evaluation is to consider the effectiveness of the board's committee structure and whether the directors have taken on the responsibility for overseeing too many funds. The proposal also seeks to amend the fund recordkeeping rule (rule 31a-2) to require that funds retain copies of the written materials that directors consider in approving an advisory contract under section 15 of the Investment Company Act.
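To make the board-composition elements of this proposal concrete, the following sketch checks a hypothetical board roster against the proposed thresholds. It is an illustrative example only, not part of SEC's proposal or GAO's analysis; the data structure, names, and the way the chairman requirement is tested are assumptions based on the proposal as described above.

# Illustrative check of a hypothetical fund board against the proposed
# governance standards: an independent chairman and at least 75 percent
# independent directors. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Director:
    name: str
    independent: bool   # not an "interested person" under section 2(a)(19)
    is_chair: bool = False

def meets_proposed_standards(board, minimum_independent_share=0.75):
    share_independent = sum(1 for d in board if d.independent) / len(board)
    chair_is_independent = any(d.is_chair and d.independent for d in board)
    return share_independent >= minimum_independent_share and chair_is_independent

# A seven-member board with an independent chair and five independent
# directors overall falls short, because 5 of 7 is only about 71 percent.
board = ([Director("Chair", independent=True, is_chair=True)]
         + [Director(f"Independent {i}", independent=True) for i in range(4)]
         + [Director(f"Interested {i}", independent=False) for i in range(2)])
print(meets_proposed_standards(board))  # False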
According to the SEC proposal, the changes to board structure and authority are designed to enhance the independence and effectiveness of fund boards and to improve their ability to protect the interests of the funds and fund shareholders they serve. Specifically, SEC noted that commenters on a 2001 amendment believed that a supermajority of independent directors would help to strengthen the hand of independent directors when dealing with fund management and help assure that independent directors maintain control of the board in the event of illness or absence of other independent directors. Also, SEC concluded that (1) a boardroom culture favoring the long-term interests of fund shareholders might be more likely to prevail if the board chairman does not have the conflicts of interest inherent in his role as an executive of the fund adviser, and (2) a fund board may be more effective when negotiating with the fund adviser over matters such as the advisory fee if it were not led by an executive of the adviser with whom it was negotiating. SEC also noted that separate meetings of the independent directors would afford independent directors the opportunity for frank and candid discussion among themselves regarding the management of the fund. In addition, it saw the use of staff and experts as important to help independent directors deal with matters beyond their level of expertise and give them an understanding of better practices among mutual funds. According to SEC's proposal, having fund directors perform self-evaluations of the board's effectiveness could improve fund performance by strengthening the directors' understanding of their role and fostering better communication and greater cohesiveness. This would focus the board's attention on the need to create, consolidate, or revise various board committees, such as the audit, nominating, or pricing committees. Finally, according to SEC staff, the proposed additional recordkeeping rule would allow compliance examiners to review the quality of the materials that boards considered in approving advisory contracts.

In response to concerns regarding the adequacy of fund board review of advisory contracts and management fees, on February 11, 2004, SEC also released proposed rule amendments to require that funds disclose in shareholder reports how boards of directors evaluate and approve, and recommend shareholder approval of, investment advisory contracts. The proposed amendments would require a fund to disclose in its reports to shareholders the material factors, and the conclusions with respect to those factors, that formed the basis for the board's approval of advisory contracts during the reporting period. The proposals also are designed to encourage improved disclosure in the registration statement of the basis for the board's approval of existing advisory contracts, and in proxy statements of the basis for the board's recommendation that shareholders approve an advisory contract. In addition, to facilitate better board governance and oversight, SEC adopted requirements to ensure that mutual funds and advisers have internal programs to enhance compliance with federal securities laws and regulations.
On December 17, 2003, SEC adopted a new rule that requires each investment company and investment adviser registered with the Commission to adopt and implement written policies and procedures reasonably designed to prevent violation of the federal securities laws, review those policies and procedures annually for their adequacy and the effectiveness of their implementation, and designate a chief compliance officer to be responsible for administering the policies and procedures. In the case of an investment company, the chief compliance officer would report directly to the fund board. These rules are designed to protect investors by ensuring that all funds and advisers have internal programs to enhance compliance with federal securities laws. To ensure that fund investment adviser officials and employees are aware of and held accountable for their fiduciary responsibilities to their fund shareholders, SEC also released a rule proposal in January 2004 that would require registered investment adviser firms to adopt codes of ethics. According to the proposal, the rule was designed to prevent fraud by reinforcing fiduciary principles that must govern the conduct of advisory firms and their personnel. The proposal states that codes of ethics remind employees that they are in a position of trust and must act with integrity at all times. The codes would also direct investment advisers to establish procedures for employees, so that the adviser would be able to determine whether the employee was complying with the firm's principles.

In addition to these actions, SEC had previously adopted rules that became effective in April 2003 that require funds to disclose on a quarterly basis how they voted their proxies for the portfolio securities they hold. SEC also required advisers that vote client proxies to adopt policies and procedures reasonably designed to ensure that the adviser votes proxies in the best interests of clients, to disclose to clients information about those policies and procedures, to disclose to clients how they may obtain information on how the adviser voted their proxies, and to maintain certain records relating to proxy voting. In adopting these requirements, SEC noted that this increased transparency would enable fund shareholders to monitor their funds' involvement in the governance activities of portfolio companies, which may have a dramatic impact on shareholder value. We are currently reviewing whether pension plans have similar requirements to disclose their proxy voting activities to their participants and will be reporting separately on these issues later this year.

In our view, these SEC proposals should help ensure that mutual fund boards of directors are independent and take an active role in ensuring that their funds are managed in the interests of their shareholders. Many fund boards already meet some of these requirements, but SEC's proposal will better ensure that such practices are the norm across the industry. Although such practices do not guarantee that funds will be well managed and will avoid illegal or abusive behavior, greater board independence could promote board decision making that is aligned with shareholders' interests and thereby enhance board accountability. While board independence does not require eliminating all nonindependent directors, we have taken the position in previous work that it should call for a supermajority of independent directors.
Our prior work also recognized that independent leadership of the board is preferable to ensure some degree of control over the flow of information from management to the board, scheduling of meetings, setting of board agendas, and holding top management accountable. To further ensure that board members are truly independent, we would support the Congress giving SEC rulemaking authority to specify the types of persons who qualify as "interested persons." Having compliance officers report to fund boards and having advisers implement codes of ethics should also provide additional tools to hold fund advisers and boards accountable for ensuring that all fund activities are conducted in compliance with legal requirements and with integrity.

In addition to addressing alleged abusive practices, securities regulators are also introducing proposals that respond to concerns over how broker-dealers are compensated for selling mutual funds. Specifically, SEC is seeking comments on how to revise a rule that allows mutual funds to deduct fees to pay for the marketing and sale of fund shares. In addition, to address a practice that raises potential conflicts of interest between broker-dealers and their customers, SEC and NASD have also proposed rules that would require broker-dealers to disclose revenue sharing payments that fund advisers make to broker-dealers to compensate them for selling fund shares. SEC has also recently proposed banning a practice called directed brokerage that, if adopted, would prohibit funds from using trading commissions as an additional means of compensating broker-dealers for selling their funds.

Approximately 80 percent of mutual fund purchases are made through broker-dealers or other financial professionals, such as financial planners and pension plan administrators. Prior to 1980, the compensation that these financial professionals received for assisting investors with mutual fund purchases was paid either by charging investors a sales charge or load or by paying for such expenses out of the investment adviser's own profits. However, in 1980, SEC adopted rule 12b-1 under the Investment Company Act to help funds counter a period of net redemptions by allowing them to use fund assets to pay the expenses associated with the distribution of fund shares. Under NASD rules, 12b-1 fees are limited to a maximum of 1 percent of a fund's average net assets per year. Although originally envisioned as a temporary measure to be used during periods when fund assets were declining, the use of 12b-1 fees has evolved to provide investors with flexibility in paying for investment advice and purchases of fund shares. Instead of being offered only funds that charge a front-end load, investors using broker-dealers to assist them with their purchases can now choose from different classes of fund shares that vary by how the broker-dealer is compensated. In addition to shares that involve front-end loads with low or no 12b-1 fee—typically called Class A shares—investors can also invest in Class B shares that have no front-end load but instead charge an annual 1 percent 12b-1 fee for a certain number of years, such as 7 or 8 years, after which the Class B shares would convert to Class A shares. Other share classes may have lower 12b-1 fees but charge investors a redemption fee—called a back-end load—if shares are not held for a certain minimum period.
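The trade-off among these share classes is easier to see with a simple cost calculation. The sketch below compares a hypothetical Class A purchase with a hypothetical Class B purchase of $10,000; the load, fee, and return figures are illustrative assumptions, not figures drawn from any actual fund or required by SEC or NASD rules.

# Compare hypothetical Class A and Class B share costs over a holding period.
# Assumptions: 8 percent gross annual return, 1 percent base expense ratio,
# Class A charges a 5 percent front-end load plus a 0.25 percent 12b-1 fee,
# Class B charges no load but a 1 percent 12b-1 fee for the full period.
def ending_value(investment, front_load, annual_expenses, gross_return, years):
    value = investment * (1 - front_load)            # deduct the front-end load once
    for _ in range(years):
        value *= 1 + gross_return - annual_expenses  # compound net of annual expenses
    return value

years = 8
class_a = ending_value(10_000, 0.05, 0.0100 + 0.0025, 0.08, years)
class_b = ending_value(10_000, 0.00, 0.0100 + 0.0100, 0.08, years)
print(f"Class A after {years} years: ${class_a:,.0f}")
print(f"Class B after {years} years: ${class_b:,.0f}")

Under these assumed figures, the front-end load class ends slightly ahead after 8 years, while the no-load class comes out ahead over shorter holding periods.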
Having classes of shares allows investors to choose the share class that is most advantageous depending on how long they plan to hold the investment. Because 12b-1 fees are used in ways different from those originally envisioned, SEC is seeking public comment on whether changes to rule 12b-1 are necessary. In a proposal issued on February 24, 2004, SEC staff noted that modifications might be needed to reflect changes in the manner in which funds are marketed and distributed. For example, SEC staff told us that rule 12b-1 requires fund boards, when annually re-approving a fund's 12b-1 plan, to consider a set of factors that likely are not relevant in today's environment. In the proposal, SEC also seeks comments on whether alternatives to 12b-1 fees would be beneficial. One such alternative would have distribution-related costs deducted directly from individual customer accounts rather than having fund advisers deduct fees from the entire fund's assets for eventual payment to selling broker-dealers. The amount due the broker-dealer could be deducted over time—say, once a quarter—until the total amount is collected. According to the SEC proposal, this alternative would be beneficial because the amounts charged and their effect on shareholder value would be completely transparent to the shareholder, as the amounts would appear on the shareholder's account statements. According to a fund official and an industry analyst, having fund shareholders see the amount of compensation that their broker is receiving would increase investor awareness of such costs and could spur greater competition among firms over such costs.

We commend SEC for seeking comments on potentially revising rule 12b-1. Such fees are now being used in ways SEC did not intend when it adopted the rule in 1980. We believe providing alternative means for investors to compensate broker-dealers, like the one SEC's proposal describes, would preserve the beneficial flexibility that investors currently enjoy while also increasing the transparency of these fees. An approach like the one SEC describes would also likely increase competition among broker-dealers over these charges, which could lower the costs of investing in fund shares further.

Regulators have also acted to address concerns arising from another common mutual fund distribution practice called revenue sharing. Revenue sharing occurs when mutual fund advisers make payments out of their own revenue to broker-dealers to compensate them for selling that adviser's fund shares. Broker-dealers that have extensive distribution networks and large staffs of financial professionals who work directly with and make investment recommendations to investors increasingly demand that fund advisers make these payments in addition to the sales loads and 12b-1 fees that they earn when their customers purchase fund shares. For example, some broker-dealers have narrowed their offerings of funds or created preferred lists that include the funds of just six or seven fund companies, which then become the funds that receive the most marketing by these broker-dealers. In order to be selected as one of the preferred fund families on these lists, the mutual fund adviser often is required to compensate the broker-dealer firms with revenue sharing payments. According to an article in one trade journal, revenue sharing payments made by major fund companies to broker-dealers may total as much as $2 billion per year.
According to officials of a mutual fund research organization, about 80 percent of fund companies that partner with major broker-dealers make cash revenue sharing payments. However, revenue sharing payments may create conflicts of interest between broker-dealers and their customers. By receiving compensation to emphasize the marketing of particular funds, broker-dealers and their sales representatives may have incentives to offer funds for reasons other than the needs of the investor. For example, revenue sharing arrangements might unduly focus the attention of broker-dealers on particular mutual funds, reducing the number of funds considered as part of an investment decision—potentially leading to inferior investment choices and potentially reducing fee competition among funds. Finally, concerns have been raised that revenue sharing arrangements might conflict with securities self-regulatory organization rules requiring that brokers recommend purchasing a security only after ensuring that the investment is suitable for the investor's financial situation and risk profile.

Our June 2003 report recommended that SEC consider requiring that more information be provided to investors to evaluate these conflicts of interest; SEC and NASD have recently issued proposals to require such disclosure. Although broker-dealers are currently required to inform their customers about the third-party compensation the firm is receiving, they have generally been complying with this requirement by providing their customers with the mutual fund's prospectus, which discloses such compensation in general terms. On January 14, 2004, SEC proposed rule changes that would require broker-dealers to disclose to investors, before they purchase a mutual fund, whether the broker-dealer receives revenue sharing payments or portfolio commissions from that fund's adviser, as well as other cost-related information. Similarly, NASD has proposed a change to its rules that would require broker-dealers to provide written disclosures to a customer, when an account is first opened or when mutual fund shares are purchased, that describe any compensation that they receive from fund advisers for providing their funds "shelf space" or preference over other funds. SEC is also proposing that broker-dealers be required to provide additional specific information about the revenue sharing payments they receive in the confirmation documents they provide to their customers to acknowledge a purchase. This additional information would include the total dollar amount earned from a fund's adviser and the percentage that this amount represented of the total sales by the broker-dealer of that adviser's fund shares over the 4 most recent quarters.

We commend SEC and NASD for taking these actions. The disclosures being proposed by SEC and NASD are intended to ensure that investors have information that they can use to evaluate the potential conflicts their broker-dealer may have when recommending particular fund shares to investors. However, such disclosures would likely also require improving related investor education programs to better ensure that investors understand the importance of these new disclosures. SEC has also taken another action to address a practice that creates conflicts of interest between fund shareholders and broker-dealers or fund advisers. On February 11, 2004, SEC proposed prohibiting fund advisers from using trading commissions as compensation to broker-dealers that sell their funds.
Such arrangements are called “directed brokerage,” in which fund advisers choose broker-dealers to conduct trades in their funds’ portfolio securities as an additional way of compensating those brokers for selling fund shares. These arrangements represent a hidden expense to fund shareholders because brokerage commissions are paid out of fund assets, unlike revenue sharing, which is paid out of advisers’ revenues. We support this action as a means of better ensuring that fund advisers choose broker-dealers based on their ability to effectively execute trades and not for other reasons. SEC is considering actions to address conflicts of interests created by “soft-dollar arrangements” and has taken actions to enhance disclosures related to the costs of owning mutual funds, including considering making more transparent costs included in brokerage transactions. Although SEC has taken some actions, we believe that additional steps could be taken to provide further benefits to investors by increasing the transparency of certain mutual fund practices and enhancing competition among funds on the basis of the fees that are charged to shareholders. Soft dollar arrangements allow fund investment advisers to obtain research and brokerage services that could potentially benefit fund investors but also increase investor costs. When investment advisers buy or sell securities for a fund, they may have to pay the broker-dealers that execute these trades a commission using fund assets. In return for these brokerage commissions, many broker-dealers provide advisers with a bundle of services, including trade execution, access to analysts and traders, and research products. Soft dollar arrangements are the result of regulatory changes in the 1970s. Until the mid-1970s, the commissions charged by all brokers were fixed at one equal price. To compete for commissions, broker-dealers differentiated themselves by offering research-related products and services to advisers. In 1975, to increase competition, SEC abolished fixed brokerage commission rates. However, investment advisers were concerned that they could be held in breach of their fiduciary duty to their clients to obtain best execution on trades if they paid anything but the lowest commission rate available to obtain research and brokerage services. In response, Congress created a “safe harbor” under Section 28(e) of the Securities Exchange Act of 1934 that allowed advisers to pay more than the lowest available commission rate for security transactions in return for research and brokerage services. Although legislation provides a safe harbor for investment advisers to use soft-dollars, SEC is responsible for defining what types of products and services are considered lawful under the safe harbor. Since 1986, the SEC has interpreted Section 28(e) as applying to a broad range of products and services, as long as they provide ‘lawful and appropriate assistance to the money manager in carrying out investment decision-making responsibilities.’ Some industry participants argue that the use of soft dollars benefits investors in various ways. The research that the fund adviser obtains can directly benefit fund investors if the adviser uses it to select securities for purchase or sale by the fund. The prevalence of soft dollar arrangements also allows specialized, independent research to flourish, thereby providing money managers a wider choice of investment ideas. As a result, this research could contribute to better fund performance. 
The proliferation of research available as a result of soft dollars might also have other benefits. For example, an investment adviser official told us that the research on smaller companies helps create a more efficient market for securities of those companies, resulting in greater market liquidity and lower spreads, which would benefit all investors including those in mutual funds. Although the research and brokerage services that fund advisers obtain through the use of soft dollars could benefit a mutual fund investor, this practice also could increase investors’ costs and create potential conflicts of interest that could harm fund investors. For example, soft dollars could cause investors to pay higher brokerage commissions than they otherwise would, because advisers might choose broker-dealers on the basis of soft dollar products and services, not trade execution quality. Soft dollar arrangements could also encourage advisers to trade more in order to pay for more soft dollar products and services. Overtrading would cause investors to pay more in brokerage commissions than they otherwise would. These arrangements might also tempt advisers to “over-consume” research because they would not be paying for it directly. In turn, advisers might have less incentive to negotiate lower commissions, resulting in investors paying more for trades. Regulators also have raised concerns over soft dollar practices. In 1996 and 1997, SEC examiners conducted an examination sweep into the soft dollar practices of broker-dealers, investment advisers, and mutual funds. In the resulting 1998 inspection report, SEC staff documented instances of soft dollars being used for products and services outside the safe harbor, as well as inadequate disclosure and bookkeeping of soft dollar arrangements. SEC staff told us that their review found that mutual fund advisers engaged in far fewer soft dollar abuses than other types of advisers. To address the concerns identified, the SEC staff report proposed recommending that investment advisers keep better records and make greater disclosure about their use of soft dollars. A working group formed in 1997 by the Department of Labor (DOL) to study the need for regulatory changes and additional disclosures to pension plan sponsors and fiduciaries on soft dollar arrangements recommended that SEC act to narrow the definition of products and services that are considered research and allowable under the safe harbor. The working group also recommended that SEC prepare a specific list of acceptable purchases with soft dollars that included brokerage and research services. Although SEC has acknowledged the concerns involved with soft-dollar arrangements, it has taken limited actions to date. SEC staff told us that the press of other business prevented them from addressing the issues raised by other regulators and their own 1998 staff report. However, in a December 2003 concept release on portfolio transaction costs staff requested comments on what types of information investment advisers should be required to provide to mutual fund boards regarding the allocation of brokerage commissions for execution purposes and soft dollar benefits. In addition, SEC staff told us that they have formed a study group with representatives of the relevant SEC divisions, including Investment Management, Market Regulation, and the Office of Compliance Inspections and Examinations, to review soft dollar issues. This group also is collecting information from industry and foreign regulators. 
Regulators in other countries and other industries have acted to address the conflicts created by soft dollars. In the United Kingdom, the Financial Services Authority (FSA), which regulates the financial services industry in that country, has issued a consultation paper arguing that these arrangements create incentives for advisers to route trades to broker-dealers on the basis of soft dollar arrangements and that these practices represent an unacceptable market distortion. As a result of recommendations from a government-commissioned review of institutional investment, FSA has proposed banning soft dollars for market pricing and information services, as well as various other products. FSA notes that its proposal would limit the ability of fund managers to pass management costs through their customers' funds in the form of commissions and would provide more incentive to consider what services are necessary for efficient funds management, both of which could lower investor costs. However, FSA staff has acknowledged that restricting soft dollar arrangements in the United Kingdom could hurt the international competitiveness of that country's fund industry because fund advisers outside the country would not have to comply with these restrictions.

In addition, DOL has placed more restrictions on pension plan administrators' use of soft dollars than apply to mutual fund advisers. SEC requires mutual fund boards of directors to review fund trading activities to ensure that the adviser is obtaining best execution and to monitor any conflicts of interest involving soft dollars. However, section 28(e) allows fund advisers to use soft dollars generated by trading in one fund's portfolio to obtain research that does not benefit that particular fund but instead benefits other funds managed by that adviser. In contrast, DOL requires plan fiduciaries to monitor the plan's investment managers to ensure that the soft dollar research obtained from trading commissions paid out of plan assets benefits the plan and that the benefits to the plan are reasonable in relation to the value of the brokerage and research services provided to the plan.

Some industry participants have also called on SEC to restrict soft dollar usage. For example, the board of the Investment Company Institute (ICI), which is the industry association for mutual funds, recently recommended that SEC consider narrowing the definition of allowable research under Section 28(e) and eliminating the purchase of third-party research with soft dollars. According to statements released by ICI, SEC's definition of permitted research is overly expansive and has been susceptible to abuse. ICI recommends that SEC prohibit advisers from using soft dollars to obtain any products and services that are otherwise publicly available in the marketplace, such as periodical subscriptions or electronic news services. In a letter to the SEC Chairman, ICI wrote that its proposal would reduce incentives for investment advisers to engage in unnecessary trading and would more closely reflect the original purpose of Section 28(e), which was to allow investment advisers to take into account a broker-dealer's research capabilities in addition to its ability to provide best execution. Beyond these proposals, some industry participants have called for a complete ban of soft dollars.
If soft dollars were banned—which would require repeal of Section 28(e)—and bundled commission rates were required to be separately itemized, fund advisers would not be allowed to pay higher commissions in exchange for research. Advocates of banning soft dollars believe that this would spur broker-dealers to compete on the price of executing trades, which averages between $.05 and $.06 per share at large broker-dealers, whereas trades conducted through other venues can be done for $.01 or less. Critics fear that this ban would reduce the amount of independent research that advisers obtain, which would hurt investors and threaten the viability of some existing independent research firms. To address concerns over soft dollars, our June 2003 report recommends that SEC evaluate ways to provide additional information to fund directors and investors on their fund advisers’ use of soft dollars. Because SEC has not acted to more fully address soft dollar-related concerns, investors and mutual fund directors have less complete and transparent information with which to evaluate the benefits and potential disadvantages of fund advisers’ use of soft dollars. However, such disclosures could potentially increase the complexity of the information that investors are provided and require them to interpret and understand such information. As such, an enhanced investor education campaign would also likely be warranted. Although disclosure can improve transparency, it may not be sufficient for creating proper incentives and accountability. In our view, the time for SEC to take bolder actions regarding soft dollars is now. Allowing the advisers of mutual funds to use customer assets to obtain services that would otherwise have to be paid for using advisers’ revenues appears to create inappropriate incentives, and inadequate transparency and accountability. We commend SEC for initiating an internal study of soft dollar issues. As part of this evaluation, we believe that SEC should consider at a minimum the merits of narrowing the services that are considered acceptable under the safe harbor. Concerns that SEC’s current definition of permitted research is overly expansive and susceptible to abuse have been recognized for years. Acting to narrow the safe harbor could reduce opportunities for abusive practices. It could also lower investor costs by reducing adviser incentives to overtrade portfolio assets to obtain soft dollar research and services. We also believe that SEC’s study should consider the relative merits of eliminating soft dollar arrangements altogether. The elimination of soft dollars, which would require legislative action, could create greater incentives for broker-dealers to compete on the basis of execution cost and greater incentives for fund advisers to weigh the necessity of some of the research they now receive since they would have to pay for such items from their own revenues. SEC recently adopted rules and rule amendments aimed at increasing investor awareness by improving the disclosures of the fees and expenses paid for investing in mutual funds. In February 2004, SEC adopted rule amendments that require mutual funds to make additional disclosures about their expenses. This information will be presented to investors in the annual and semiannual reports prepared by mutual funds. 
Among other things, mutual funds will now be required to disclose the cost in dollars associated with an investment of $1,000 that earned the fund's actual return and incurred the fund's actual expenses paid during the period. In addition to allowing existing investors to compare fees across funds, SEC staff indicated that placing these disclosures in funds' annual and semiannual reports will help prospective investors to compare funds' expenses before making a purchase decision. In addition to this action, SEC amended fund advertising rules in September 2003 to require funds to state in advertisements that investors should consider a fund's fees before investing and to direct investors to consult the fund prospectus for more information. Additionally, in November 2003, NASD proposed amending its rules to require that mutual funds advertising their performance present specific information about the fund's expenses and performance in a more prominent format. These new requirements are aimed at improving investor awareness of the costs of buying and owning a mutual fund, facilitating comparison of fees among funds, and making the presentation of standardized performance information more prominent. Specifically, NASD's proposal would require that all performance advertising contain a text box that sets forth the fund's standardized performance information, maximum sales charge, and annual expense ratio. In doing so, NASD's proposal would go beyond SEC requirements by requiring funds to include specific performance and expense information within advertising materials.

Another cost-related rulemaking initiative by SEC staff seeks to improve the disclosure of breakpoint discounts for front-end sales loads. In March 2003, SEC, NASD, and the New York Stock Exchange issued a report describing the failure of some broker-dealers to issue discounts on front-end charges paid to them by mutual fund investors. Mutual funds with front-end sales loads often offer investors discounts or "breakpoints" in these sales loads as the dollar value of the shares purchased by investors or members of their family increases, such as for purchases of $50,000 or more. To better ensure that investors receive these discounts when deserved, SEC is proposing to require funds to disclose in their prospectuses when shareholders are eligible for breakpoint discounts. According to the SEC proposal, such amendments are intended to provide greater prominence to breakpoint disclosure by requiring its inclusion in the prospectus rather than in the Statement of Additional Information, which is a document delivered to investors only upon request.

However, these actions would not require mutual funds to disclose to each investor the specific amount of fees in dollars that are paid on the shares they own. As a result, investors will not receive information on the costs of mutual fund investing in the same way they see the costs of many other financial products and services that they may use. In addition, these actions do not require that mutual funds provide information relating to fees in the document that is most relevant to investors—the quarterly account statement. In a 1997 survey of how investors obtain information about their funds, ICI indicated that, to shareholders, the account statement is probably the most important communication that they receive from a mutual fund company and that nearly all shareholders use such statements to monitor their mutual funds.
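To illustrate the kind of dollar-cost figure involved in the disclosure described above, the sketch below approximates the expenses paid on a $1,000 investment over a reporting period. The expense ratio and return are hypothetical, and the average-balance arithmetic is a simplified approximation rather than the exact calculation method prescribed by SEC's rule.

# Approximate the "cost in dollars of a $1,000 investment" disclosure.
# The period return and expense ratio below are hypothetical values.
def expense_example(period_expense_ratio, period_return, investment=1_000.0):
    ending_value = investment * (1 + period_return)
    average_balance = (investment + ending_value) / 2   # simple average over the period
    expenses_paid = average_balance * period_expense_ratio
    return ending_value, expenses_paid

ending, cost = expense_example(period_expense_ratio=0.0065, period_return=0.042)
print(f"Ending value of $1,000: ${ending:,.2f}; expenses paid: ${cost:,.2f}")

A similar calculation applied to an investor's actual account balance each quarter would yield the kind of individualized dollar figure discussed below.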
Our June 2003 report recommends that SEC consider requiring mutual funds to make additional disclosures to investors, including considering requiring funds to specifically disclose fees in dollars to each investor in quarterly account statements. SEC has agreed to consider requiring such disclosures but was unsure whether the benefits of implementing specific dollar disclosures would outweigh the costs of producing such disclosures. However, we estimate that spreading these implementation costs across all investor accounts might not represent a large outlay on a per-investor basis. Our report also discusses less costly alternatives that could also prove beneficial to investors and spur increased competition among mutual funds on the basis of fees. For example, one less costly alternative would require quarterly statements to present the same information—the dollar amount of a fund's fees based on a set investment amount—recently required for mutual fund semiannual reports. Doing so would place this additional fee disclosure in the document generally considered to be of the most interest to investors. An even less costly alternative would be to require that quarterly statements also include a notice that reminds investors that they pay fees and directs them to check their prospectus and ask their financial adviser for more information. Disclosures such as these could be the incentive that some investors need to take action to compare their fund's expenses to those of other funds and thus make more informed investment decisions. Such disclosures may also increasingly motivate fund companies to respond competitively by lowering fees.

This concludes my prepared statement and I would be happy to respond to any questions at the appropriate time. For further information regarding this testimony, please contact Richard J. Hillman or Cody J. Goebel at (202) 512-8678. Individuals making key contributions to this testimony include Toayoa Aldridge, Barbara Roesmann, George Scott, and David Tarosky.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since September 2003, widespread allegations of abusive practices involving mutual funds have come to light. An abuse called late trading allowed some investors, at times in collusion with pension plan intermediary, broker-dealer, or fund adviser staff, to profit at other investors' expense by submitting orders for fund shares after the legal cutoff while still receiving that day's price. Other investors were allowed to conduct market timing trades—taking advantage of the stale prices that funds use to calculate their net asset values—at funds with stated policies against such trading. SEC and other regulators have responded with numerous proposals for new or revised practices.
Based on a body of work that GAO has conducted involving mutual funds, GAO analyzed and provides views on proposed and final rules involving (1) fund pricing and compliance practices intended to address various mutual fund trading abuses that have come to light recently, (2) fund boards' independence and effectiveness, (3) fund adviser compensation of broker-dealers that sell fund shares, and (4) additional actions regulators could take to further improve transparency and investor understanding of the fees they pay. GAO commends SEC and other regulators for their swift regulatory response to recently revealed abusive mutual fund practices. However, some proposed actions need to be thoroughly assessed to ensure equitable treatment for all investors and others will need to be reinforced with enhanced compliance, enforcement, and investor education programs to be truly effective. In particular, to prevent further late trading, SEC has proposed that all mutual fund orders be received by funds or designated processors by 4:00 p.m. Eastern Time, but this action may unfairly impact some retail investors that place orders through financial intermediaries. Although GAO supports in the short run the proposed hard 4:00 p.m. close as a way of increasing the certainty that all orders have been legitimately received, GAO believes that SEC should continue to work with industry participants, including pension plan intermediaries, to address concerns that the hard close would adversely affect investors that use such intermediaries. To address market timing, SEC is proposing that funds make greater disclosure of market timing, securities pricing, and portfolio disclosure policies. GAO supports these steps and encourages regulators to educate investors about the importance of such disclosures. To improve mutual fund corporate governance and oversight, SEC has also proposed increasing the proportion of independent directors to 75 percent and to require independent chairs. SEC is also proposing that fund advisers appoint compliance officers that report to fund boards. GAO sees these actions as giving increased prominence to independent members on fund boards of directors and providing them with additional tools to effectively oversee fund practices. However, additional actions may be needed to ensure that independent directors have no relationships with the fund adviser or its personnel that could impair their independence. SEC and other regulators have also proposed that the broker-dealers that sell fund shares make more extensive disclosures about payments they receive from fund advisers. SEC is also seeking comments on how to revise the fees they charge investors that also compensate broker-dealers for selling fund shares. GAO supports these actions as increasing the transparency of these costs to investors but recognizes that the effectiveness of these proposals could be enhanced by expanded compliance and investor education programs. SEC is also seeking information on how fund advisers use investor dollars to obtain research under a practice called soft dollars. Given the increased spotlight that Congress and regulators are placing on the mutual fund industry, GAO believes the time is right to more effectively address the conflicts of interest created by soft-dollar arrangements. In addition, GAO identifies further actions that could be taken to improve disclosure of mutual fund fees to enhance competition among funds on the basis of the fees that are charged to shareholders. |
SSA provides benefits to individuals with disabilities through two main programs: DI and SSI. Individuals are generally considered disabled if they are unable to do their previous work and, considering age, education, and work experience, are unable to engage in any other kind of substantial gainful work and their disability has lasted or is expected to last at least 1 year or is expected to result in death. See table 1 for additional key features and requirements of the DI and SSI programs. SSA’s process for determining eligibility for disability benefits—which is generally the same for DI and SSI claimants—starts when an individual submits a claim to SSA. First, an SSA field office employee reviews the claim to determine whether the claimant meets SSA’s nonmedical eligibility requirements and, if so, forwards the claim to a state Disability Determination Services (DDS) office. A DDS examiner then reviews the claim to make a determination of medical eligibility. If the claim is denied, the claimant has several opportunities for appeal within SSA, starting with a reconsideration at the DDS level and then a hearing before an administrative law judge. Claims at all levels for which the claimant is determined to be eligible for DI or SSI benefits, also called favorable claims, are forwarded to other SSA offices for payment. See figure 1 for an overview of SSA’s eligibility determination process for disability programs. SSA has policies and guidance on how SSA and DDS employees should document and report suspected fraud. According to SSA’s policy, fraud generally occurs when an individual makes or causes to be made false statements of material facts or conceals such facts for use in determining rights to benefits, with intent to defraud. SSA guidance includes a list of factors that may indicate potential fraud for field office staff and DDS examiners to consider when processing disability claims. According to SSA’s policy, employees should gather enough information to either remove suspicion or determine that there is potential fraud. If they determine that potential fraud exists, they should refer the case to the SSA OIG. They are also expected to develop the referral with as much information as possible, including who allegedly committed fraud, how, and when. See figure 2 for an overview of SSA’s fraud referral process. To help investigate some potential disability fraud cases, the OIG and SSA collaborate through Cooperative Disability Investigation (CDI) and fraud examination units: CDI units are staffed with a mix of SSA, DDS, and OIG staff, as well as state or local law-enforcement officers, to obtain evidence when potential fraud is suspected. CDI investigators often work undercover. As of January 2017, there were 39 CDI units covering 33 states, the District of Columbia, and Puerto Rico. Fraud examination units are staffed by SSA employees with disability determination expertise. Prompted by OIG requests and at the direction of SSA’s Operations component, they review claims to help detect patterns of disability fraud perpetrated by third parties such as physicians, attorneys, or claimant representatives. As of January 2017, fraud examination units were located in New York City, San Francisco, and Kansas City, but each of the three units works on cases from anywhere in the nation. SSA’s OAFP provides centralized oversight and accountability of the agency’s antifraud initiatives. The OAFP’s efforts include managing fraud risks in all of SSA’s programs including its disability programs. 
SSA’s National Anti-Fraud Committee (NAFC) provides a forum for agency executives to collaborate on antifraud strategies and serves as an advisory board for the OAFP in determining what initiatives to monitor, among other things. The NAFC is cochaired by SSA’s Deputy Commissioner for Budget, Finance, Quality, and Management and the agency’s Inspector General. All SSA deputy commissioners are voting members of the NAFC. The Associate Commissioner for the OAFP, who reports to the Deputy Commissioner for Budget, Finance, Quality, and Management, serves as the agency’s chief fraud prevention officer and is a nonvoting member of the NAFC. The NAFC is at a higher level than the OAFP and has cross-agency authority. See figure 3 for an overview of SSA’s antifraud management structure. According to federal standards and guidance, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks. Federal internal control standards call for agency managers to assess fraud risks as part of their internal control activities. The Fraud Risk Framework provides a comprehensive set of leading practices that serves as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. The Fraud Risk Framework describes leading practices in four components: 1. Commit—Commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management; 2. Assess—Plan regular fraud risk assessments and assess risks to determine a fraud risk profile; 3. Design and Implement—Design and implement a strategy with specific control activities to mitigate assessed fraud risks and collaborate to help ensure effective implementation; and 4. Evaluate and Adapt—Evaluate outcomes using a risk-based approach and adapt activities to improve fraud risk management. In addition, the Fraud Risk Framework reflects activities related to monitoring and feedback mechanisms, which include ongoing practices that apply to all four components, as depicted in figure 4. In July 2016, the Office of Management and Budget published guidance about enterprise risk management and internal controls in federal executive departments and agencies. Among other things, this guidance affirms that managers should adhere to the leading practices identified in the Fraud Risk Framework. In addition, the Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, requires the Office of Management and Budget to establish guidelines for federal agencies to establish controls to identify and assess fraud risks and design and implement antifraud control activities. The act requires the Office of Management and Budget to incorporate the leading practices from the Fraud Risk Framework in the guidelines. Further, the act requires federal agencies to submit to Congress a progress report each year for 3 consecutive years on the implementation of the controls established under the Office of Management and Budget guidelines, among other things. SSA has taken steps to establish an organizational culture and structure that are conducive to managing fraud risks in its disability programs. The agency has demonstrated a senior-level commitment to combating fraud in its disability programs and has worked to involve all levels of the agency in setting an antifraud tone. For example, in April 2014, SSA reestablished the NAFC to provide support for national and regional antifraud activities. 
The NAFC is composed of deputy commissioners from across the agency and other SSA executives who meet at least quarterly, which helps to demonstrate a senior-level commitment to combating fraud—one of the Fraud Risk Framework’s leading practices. The NAFC invites regional staff to its regular meetings and to an annual conference to report on the progress of SSA’s antifraud initiatives, which helps involve multiple levels of the agency in setting an antifraud tone— another leading practice in the Fraud Risk Framework. SSA also demonstrated a commitment to combating fraud at all levels of the agency when it implemented the first annual mandatory antifraud training in 2014 for all SSA and DDS staff. As discussed further below, the training describes indicators of potential fraud and instructs staff on how to report it. According to SSA officials, 97 percent of SSA employees and all DDS employees except for those on extended leave completed the most recent annual antifraud training. SSA further demonstrated a commitment to antifraud efforts by conducting a study to evaluate its fraud risk management approach and shortly thereafter establishing a dedicated antifraud office within the agency. SSA contracted with a consulting firm in April 2014 to identify leading practices for managing fraud risks and, according to SSA officials, sought input from other federal agencies about their efforts. The consulting firm’s study identified the importance of establishing a dedicated antifraud office. SSA subsequently established the OAFP in November 2014. The OAFP is responsible for coordinating antifraud efforts, developing antifraud policies, and creating and implementing fraud mitigation plans across SSA, among other things. These responsibilities are consistent with leading practices. According to the Fraud Risk Framework, agency managers can show commitment to combating fraud by creating a structure with a dedicated entity to lead fraud risk management activities and coordinate antifraud initiatives across the agency. In addition, leading practices call for the designated antifraud entity to, among other things, serve as the repository of knowledge on fraud risks and controls and lead or assist with trainings and other fraud-awareness activities. Since the OAFP was established, the office has performed several of these activities. For example, the OAFP has taken steps to coordinate antifraud initiatives across SSA by gathering information about progress on the initiatives, and has helped create antifraud training materials for the agency. In addition, the OAFP formed a committee in 2015, made up of associate commissioners from across the agency, to work on antifraud initiatives. SSA’s Acting Commissioner has reinforced SSA’s commitment to combating fraud by communicating the importance of antifraud efforts in multiple ways. For example, the Acting Commissioner highlighted SSA’s antifraud efforts in a March 2015 virtual town hall meeting, which more than 27,000 SSA staff participated in by video or in person, and issued a “Commissioner’s broadcast” to all employees, encouraging staff to provide suggestions for combating fraud via an e-mail box launched in March 2014. From March 2014 through the end of December 2016, SSA received 399 suggestions and implemented 8 of them by, for example, adding examples and a cross-reference in SSA’s operations manual to help employees identify potential beneficiary fraud. 
In addition, the Acting Commissioner included preventing waste, fraud, and abuse as a guiding principle in SSA’s strategic plan and supported the launch in early 2015 of SSA’s antifraud website, which includes a link for the public to report suspected fraud to the OIG. Although these steps are generally consistent with leading practices in fraud risk management, the OAFP faced challenges during its first 2 years in fully establishing itself within the agency. Specifically, the OAFP faced challenges related to a lack of consistent leadership and established institutional relationships. However, recent actions may help to address these challenges:

Lack of consistent leadership: Until recently, the OAFP had not had a permanent leader who provided accountability for the agency’s antifraud initiatives. When the OAFP was established, SSA designated the OAFP associate commissioner as the agency’s chief fraud prevention officer. According to SSA officials, from the summer of 2015 until September 2016, two Senior Executive Service (SES) candidates served successive 6-month periods as the OAFP’s acting associate commissioner. In September 2016, a third SES candidate was appointed as the acting associate commissioner of the OAFP. Upon confirmation as a member of the SES, he became the OAFP’s permanent associate commissioner and assumed the role of SSA’s chief fraud prevention officer in October 2016, according to SSA officials.

Lack of established institutional relationships: The OAFP is a relatively new, small office that is still building relationships and establishing its role across the agency whose fraud risk management efforts it is charged with overseeing. According to SSA officials, building these relationships across the agency will likely require additional time. The OAFP is relatively small compared with the size and complexity of SSA’s 11 components. In fiscal year 2016, the OAFP had approximately 60 full-time equivalent staff, who were in charge of coordinating antifraud initiatives, among other tasks, across SSA, which employs over 60,000 full-time equivalent staff, excluding DDS staff. The OAFP is also in the process of overcoming perceptions of mission overlap. For example, according to SSA officials, there were initial concerns about the OAFP’s role in identifying potential fraud overlapping with the OIG’s role in investigating potential fraud. In August 2015, the Acting Commissioner of SSA approved a memo to components across the agency, including the OIG, that clarified the function and responsibilities of the OAFP.

SSA has undertaken efforts over the last year to identify fraud risks in its disability programs but has not comprehensively assessed the fraud risks identified through those efforts. Leading practices in fraud risk management call for the agency’s designated antifraud entity to lead fraud risk assessments and plan to conduct updated assessments on a regular basis. In planning the fraud risk assessment, leading practices call for managers to tailor the fraud risk assessment to the program by, among other things, identifying appropriate tools, methods, and sources for gathering information about fraud risks and involving relevant stakeholders in the assessment process.
Fraud risk assessments that align with leading practices involve (1) identifying inherent fraud risks affecting the program, (2) assessing the likelihood and impact of those fraud risks, (3) determining fraud risk tolerance, (4) examining the suitability of existing fraud controls and prioritizing residual fraud risks, and (5) documenting the results. There is no universally accepted approach for conducting fraud risk assessments because circumstances among programs vary. In spring 2016, SSA took steps to identify fraud risks in its disability programs by contracting with a third party, but this effort was not intended to be a comprehensive fraud risk assessment. According to the contractor’s report, the initial goals of the project were to develop a fraud risk assessment methodology that could be refined and updated over time and to conduct a pilot study of fraud risks in SSA’s disability programs by applying the risk assessment method. The contractor developed risk profiles for four fraud risks in SSA’s disability programs, including a qualitative assessment of the likelihood, impact, and significance of each risk, as well as potential responses to each risk. The contractor also included several other examples of fraud risks in SSA’s disability programs in its final report but did not assess these other risks. Both the contractor and SSA cited the limited time frame for the effort as a reason for not completing the initial project goal of applying the fraud risk assessment method more comprehensively. For example, because of the limited time frame, the contractor was not able to include disability-program experts on the risk assessment team or review risk information beyond information included in documents provided by SSA stakeholders. Although it was not able to conduct a comprehensive assessment of fraud risks in SSA’s disability programs, the contractor stated in its final report that its assessment of four fraud risks provided a successful proof-of-concept of its risk assessment method and that SSA could apply and update that method over time. SSA also identifies fraud risks in other ways. Specifically, according to SSA officials, the OAFP hired a contractor to conduct interviews of SSA and DDS staff in the spring and summer of 2016 to gather information on patterns and characteristics of fraud in its disability programs to help develop an antifraud data-analytics model. However, they noted that the final report will not be available until mid-2017. In addition, according to SSA officials, SSA may become aware of fraud risks through information provided by the OIG’s Office of Investigations and from SSA field offices and hearing offices. These ongoing activities may provide SSA information on fraud risks in its disability programs but do not provide an overall assessment of fraud risks. SSA plans to assess fraud risks, but it is unclear when or how an assessment of its disability programs will occur and whether it will follow leading practices. SSA’s antifraud plans for 2016 to 2018 include an objective to conduct regular fraud risk assessments but do not specify which programs will be included. In addition, it is uncertain when or how SSA will conduct these assessments because the plans do not describe interim steps or specific time frames. SSA’s plans include reassessing fraud risks in its programs on a regular basis, but the plans do not provide timelines for updating its risk assessments.
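To make the five assessment steps described above concrete, the following is a minimal illustrative sketch of how a documented fraud risk profile might be structured and prioritized. It is not SSA’s or its contractor’s method; the risk names, rating scales, and tolerance threshold are hypothetical assumptions.

```python
# Illustrative sketch of a fraud risk profile built from the five assessment
# steps described above. Risk names, scores, and the tolerance threshold are
# hypothetical; an agency's actual method may differ.
from dataclasses import dataclass

@dataclass
class FraudRisk:
    name: str               # step 1: inherent fraud risk affecting the program
    likelihood: int         # step 2: 1 (rare) to 5 (almost certain)
    impact: int             # step 2: 1 (minimal) to 5 (severe)
    control_strength: int   # step 4: 1 (weak) to 5 (strong) existing controls

    @property
    def inherent_score(self) -> int:
        return self.likelihood * self.impact

    @property
    def residual_score(self) -> float:
        # Residual risk falls as existing controls get stronger (step 4).
        return self.inherent_score / self.control_strength

RISK_TOLERANCE = 4.0  # step 3: hypothetical tolerance for residual risk

# Steps 1-2: hypothetical risks with qualitative ratings.
profile = [
    FraudRisk("Third-party facilitator schemes", likelihood=3, impact=5, control_strength=3),
    FraudRisk("Concealed work activity", likelihood=4, impact=3, control_strength=4),
    FraudRisk("Falsified medical evidence", likelihood=2, impact=4, control_strength=2),
]

# Steps 4-5: prioritize residual risks above tolerance and document the result.
for risk in sorted(profile, key=lambda r: r.residual_score, reverse=True):
    flag = "EXCEEDS TOLERANCE" if risk.residual_score > RISK_TOLERANCE else "within tolerance"
    print(f"{risk.name}: inherent={risk.inherent_score}, residual={risk.residual_score:.1f} ({flag})")
```

A documented profile of this kind is what step 5 produces; regularly rerunning the assessment would simply update the ratings as controls and schemes change.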
Further, it is not clear which agency stakeholders will be involved in the process or what specific tools, methods, and sources SSA will use to gather information about fraud risks. SSA’s plans state that it will use information produced by its data-analytics system to assist with its risk assessments but, as discussed below, this data-analytics system is in the early stages of development and it may be years before the system produces information on disability fraud schemes and trends that could be incorporated into a fraud risk assessment. According to a senior OAFP official, the risk-assessment effort is on hold because the OAFP is focusing its staff resources on developing its data-analytics system. However, information on the relative likelihood and impact of fraud risks identified through a fraud risk assessment can help ensure that the data-analytics system is appropriately targeted and that the OAFP’s resources are focused on SSA’s most significant fraud risks. Conducting fraud risk assessments that involve relevant stakeholders and align with other leading practices, and updating such assessments on a regular basis, can help identify fraud vulnerabilities before any actual fraud occurs and better position management to take steps to strengthen antifraud controls. Without planning and conducting regular, comprehensive assessments of fraud risks in its disability programs and documenting the results, SSA does not have reasonable assurance that it has implemented antifraud controls to mitigate risks in its disability programs and may be vulnerable to new or emerging fraud risks. SSA has several control activities that seek to prevent, detect, and respond to fraud in its disability programs. However, SSA has not developed and documented an overall antifraud strategy to guide its design and implementation of antifraud activities and help ensure it has sufficient and appropriate controls in place to mitigate its most significant fraud risks. Leading practices in fraud risk management call for managers to document an antifraud strategy that links antifraud activities to the program’s most significant fraud risks. Although SSA has not developed and documented an overall antifraud strategy, SSA has implemented several initiatives, some of which it created after identifying specific fraud risks. For example, after large-scale fraud schemes in SSA’s New York region highlighted the possibility that third parties could facilitate fraud against SSA, the agency established fraud examination units in 2014 to help detect these types of fraud schemes in the future. In addition, after identifying an issue in 2011 with the case management system used by administrative law judges that could potentially allow judges to improperly reassign cases, SSA officials said that they took steps such as changing system access rules to reduce the likelihood that this issue could occur. SSA performs several control activities that are designed to prevent, detect, and respond to disability program fraud. SSA considers risk when implementing some of these activities, for example by considering the likelihood an individual transaction may be fraudulent or the overall amount of funds involved. SSA’s antifraud activities include the following, among others: Cooperative Disability Investigation (CDI) units: SSA and the OIG established CDI units in fiscal year 1997 to help prevent and detect potential disability fraud and have increased the number and coverage of the units in recent years.
A typical CDI unit is staffed with an OIG agent who serves as the unit leader, SSA program specialists and DDS examiners who serve as experts on SSA disability policies and procedures, and state or local law-enforcement officers who gather evidence through covert surveillance operations or interviews. As of January 2017, there were 39 CDI units covering 33 states, the District of Columbia, and Puerto Rico, which is up from 26 units covering 22 states and Puerto Rico in fiscal year 2014. Most referrals come to CDI units from DDS examiners who identify suspicious information during an initial eligibility determination or reconsideration, but referrals can also come from administrative law judges during their review of appealed decisions, as well as from other sources such as SSA field offices and the general public. Unit leaders screen referrals and, if the case is accepted, law-enforcement officers conduct an investigation. Staff from one CDI unit we interviewed said they spend more investigative resources on cases that could have more of a financial effect, such as younger individuals who could potentially receive more benefits over their lifetimes. After an investigation, CDI units prepare a summary report of their findings that DDS or SSA staff are required to consider in making a disability determination, which could help prevent or stop fraudulent benefits from being paid.

Categories of Fraud Control Activities

Control activities for managing fraud risks fall into three general categories—prevention, detection, and response. These categories are interdependent, and activities can often serve more than one purpose. Fraud prevention activities include efforts to mitigate the risk of fraud occurring or to identify potential fraud before making payments. Fraud detection activities help discover fraud that has already occurred. Response activities include actions to address instances of potential fraud, such as referring potential fraud cases for investigation and prosecution, as well as actions to remedy the harm caused by proven fraud. Preventive activities generally offer the most cost efficient use of resources, since they enable managers to avoid a costly and inefficient “pay-and-chase” model in which fraudulent transactions are detected and then attempts are made to recover funds after payments have occurred.

Fraud examination units: SSA established fraud examination units in 2014. These units help detect potential third-party disability fraud and help respond to known fraud cases perpetrated by doctors, representatives, and other third parties. The units are staffed with SSA disability examiners from regional offices who assist the OIG with disability fraud investigations involving third parties and help reevaluate disability claims that could have been affected by known fraudsters. For example, unit staff can analyze groups of potentially fraudulent cases to identify trends or suspicious information, such as inconsistencies, altered medical records, or similar or identical medical evidence used in multiple claims. After their analysis, examiners share their findings with the OIG to consider in its investigation.
In addition, fraud examination unit staff reevaluate disability claims that have evidence from third parties involved in fraud schemes, such as a case that was uncovered in Puerto Rico where third-party facilitators conspired with claimants to submit evidence for fabricated or exaggerated disabilities, resulting in 27 defendants sentenced as of September 2014, according to a report from SSA’s OIG. During a reevaluation, unit examiners help evaluate whether there is sufficient evidence of a claimant’s disability after evidence from a physician or other third party involved in a fraud scheme is excluded, or whether other evidence and further evaluation are needed to support the claim. Antifraud training and awareness-raising for SSA employees: As mentioned earlier in this report, SSA has annual mandatory antifraud training for all staff and multiple activities designed to raise awareness among employees about fraud, which can help deter and detect potential disability fraud by educating staff on how to identify and report suspicious activity. Since 2014, SSA has developed three 30-minute antifraud training videos that explain fraud and fraud-related terms, highlight OIG success stories and agency antifraud activities, and instruct employees on how to report potential fraud. Employees view the training through SSA’s online learning management system—which tracks completion of the course—or in other ways, such as in group settings. SSA requires managers to certify that employees completed the training. In addition, SSA has multiple sources of fraud-awareness information, such as newsletters, an internal website page, memos, and OIG presentations that highlight agency antifraud activities and instruct staff on how to identify and report potential fraud. Antifraud communications to the public: SSA promotes fraud awareness externally, which can help prevent fraud by deterring potential fraudsters and detect potential fraud by encouraging the public to report suspicious activity. For instance, SSA provides fact sheets, posters, articles, web graphics, and social media posts on its website for groups, organizations, and individuals to use to help promote SSA’s antifraud messages. SSA’s website also contains information on how to report suspected fraud, a message from the Acting Commissioner warning the public that SSA will find and prosecute fraudsters, and examples of what SSA is doing to combat fraud. In addition, SSA includes antifraud language on notices to beneficiaries, which includes the OIG’s hotline number and website. Antifraud data analytics: SSA is in the beginning stages of developing a data-analytics system to help prevent and detect potential fraud. According to agency officials, this system will include multiple models, and SSA plans to contract for a disability-related model after mid-fiscal year 2017 that will likely use data from several SSA databases to help identify pending disability claims that may be fraudulent on the basis of conditions that are indicative of disability fraud. As previously discussed, the OAFP worked with a contractor in fiscal year 2016 to conduct interviews with SSA staff who are familiar with the disability programs to help understand how fraud might occur in those programs. The OAFP plans to use that information to help guide its development of the model and what data may be required.
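Although the disability model has not yet been built, the general approach SSA describes—scoring pending claims on indicators associated with fraud and reviewing the highest-scoring claims first—can be sketched as follows. This is an illustrative assumption only; the indicator names, weights, and example claims are hypothetical and do not reflect SSA’s design or data.

```python
# Hypothetical sketch of a claim-ranking model of the kind described above:
# each pending claim is scored on indicators associated with fraud, and
# analysts review the highest-ranked claims first, before payment.
# Indicator names and weights are illustrative assumptions only.

WEIGHTS = {
    "shared_medical_source_with_known_scheme": 0.6,
    "evidence_nearly_identical_to_other_claims": 0.5,
    "third_party_facilitator_on_multiple_claims": 0.4,
    "inconsistent_reported_onset_dates": 0.2,
}

def fraud_score(claim: dict) -> float:
    """Sum the weights of the fraud indicators present on a claim."""
    return sum(w for key, w in WEIGHTS.items() if claim.get(key))

pending_claims = [
    {"claim_id": "A-001", "shared_medical_source_with_known_scheme": True},
    {"claim_id": "A-002", "inconsistent_reported_onset_dates": True},
    {"claim_id": "A-003",
     "evidence_nearly_identical_to_other_claims": True,
     "third_party_facilitator_on_multiple_claims": True},
]

# Rank claims so the most suspicious are reviewed first.
for claim in sorted(pending_claims, key=fraud_score, reverse=True):
    print(claim["claim_id"], round(fraud_score(claim), 2))
```

In practice, a model of this kind would need to be informed by confirmed fraud cases and the fraud risk assessment discussed earlier so that the indicators and weights target the program’s most significant risks.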
According to OAFP officials, the agency plans to develop models that will rank claims by the likelihood of potential fraud so that analysts can focus on reviewing highly ranked claims more closely. SSA plans for the system to run in real time, which would allow SSA to examine questionable claims before they are fully processed and prevent potentially fraudulent benefits from being paid. Prosecution and sanctions: SSA has additional activities to respond to fraud and potential fraud and has taken steps to improve its processes. For example, SSA has provided attorneys to serve as fraud prosecutors to supplement the prosecutorial resources of the Department of Justice, which has responsibility for criminally prosecuting SSA fraud cases. Specifically, SSA doubled the number of fraud attorneys to 24 in fiscal year 2015 compared with early fiscal year 2014. In fiscal year 2015, the SSA attorneys secured 171 convictions resulting in over $16 million in restitution, according to an SSA report. Additionally, SSA can impose administrative sanctions—a temporary suspension of benefit payments for 6 to 24 months—on individuals who knowingly provide false or misleading information to SSA or fail to disclose information that is material to determining a right to benefits. In 2013, SSA implemented a new process to increase the consistency of imposing administrative sanctions and implemented a tool for improved tracking and monitoring of administrative sanctions in 2016. SSA has other program integrity activities that can help detect potential fraud in its disability programs, although these activities were not designed for this specific purpose. For example, SSA performs analytics to prevent and detect suspicious online transactions, such as unusual direct deposit requests. According to SSA officials, in fiscal year 2016, SSA reviewed over 29,000 suspicious online transactions and referred 1,460 of those transactions to the OIG. Some of these transactions may involve disability benefits. According to SSA officials, the agency’s system to prevent and detect suspicious online transactions ranks such transactions by the likelihood of potential fraud, which helps focus reviews on the cases that are most likely to be fraudulent. SSA also uses data-matching techniques and participates in external data-sharing agreements to help determine benefit eligibility and detect improper payments, of which some may be fraudulent disability benefit payments. For example, SSA uses the Department of Health and Human Services’ National Directory of New Hires—a database containing quarterly wage data—to help determine an individual’s initial and continued eligibility for DI and SSI benefits. Another data match enables the agency to verify SSI applicants’ and recipients’ bank account balances to determine whether they failed to report monetary resources that could affect their benefit amount or render them ineligible. Additionally, DDS examiners conduct periodic continuing disability reviews of disability beneficiaries to determine whether they remain eligible for benefits. If examiners identify suspicious evidence, such as conflicting medical information, during these reviews and perceive potential fraud, they are instructed to refer the case to the OIG for investigation. SSA has reported on the status of its antifraud initiatives and has a plan that includes high-level goals and objectives for managing fraud risks.
However, it is unclear whether SSA’s antifraud initiatives are targeting the most significant fraud risks in SSA’s disability programs because SSA has not developed or documented an antifraud strategy that aligns antifraud activities to the most significant fraud risks identified through a comprehensive fraud risk assessment. SSA reported to the Subcommittee on Social Security, Committee on Ways and Means, in the House of Representatives in December 2014 and November 2015 on its antifraud initiatives, but these reports essentially provide summaries of antifraud efforts rather than a proactive strategy to guide SSA’s efforts moving forward. In particular, the reports generally do not clearly link antifraud activities to specific fraud risks by, for example, describing how the activities can address identified fraud risks. In September 2016, SSA completed a high-level antifraud plan that lists the agency’s antifraud-related goals for 2016 to 2018. The plan outlines specific initiatives related to each goal but generally does not describe how these initiatives can help SSA address identified fraud risks. For example, the plan includes an initiative to increase the number of attorneys that serve as fraud prosecutors for SSA. The plan states that officials responsible for the initiative will determine strategic locations for the prosecutors but does not indicate how SSA will consider fraud risk in different areas when identifying locations for the prosecutors. Without developing and documenting an antifraud strategy that considers the likelihood and impact of fraud risks along with its tolerance for these risks and its existing activities to mitigate them, as called for in leading practices, SSA cannot ensure that it has a coordinated approach to address the range of fraud risks in its disability programs and appropriately target the most significant risks. Developing an antifraud strategy can help SSA ensure that it has an appropriate balance of fraud prevention and detection activities to address its risks and can help SSA target its program integrity resources to the antifraud activities that would most cost effectively mitigate its fraud risks. SSA monitors its antifraud activities—including those to protect its disability programs—through the OAFP and NAFC, but the metrics SSA uses do not enable effective monitoring and evaluation. According to SSA documents, the OAFP is responsible for monitoring SSA’s antifraud activities and establishing performance and outcome-oriented goals for them. The OAFP receives updates from the components that are responsible for each antifraud initiative and has shared these updates with the NAFC through periodic meetings and with Congress through reports about SSA’s antifraud initiatives. The OAFP provides flexibility to the SSA components in developing and identifying the metrics that are used to assess and report on antifraud initiatives, and, according to an OAFP official, the office does not revise the metrics submitted by components. Leading practices identified in the Fraud Risk Framework call for managers to develop outcome-oriented measures to monitor and evaluate fraud risk management activities and to adapt activities based on the results. In addition, according to federal internal control standards, managers should establish and operate activities to monitor the internal control system and evaluate the results, which may be compared against an established baseline.
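To illustrate what an outcome-oriented measure with a baseline and target could look like for an antifraud initiative, consider the following minimal sketch; the measures and values are hypothetical and are not drawn from SSA’s plans or reports.

```python
# Illustrative sketch of outcome-oriented metrics with baselines and targets,
# of the kind the monitoring standards above call for. Values are hypothetical.
from dataclasses import dataclass

@dataclass
class Metric:
    initiative: str
    measure: str      # outcome-oriented measure, not just an output count
    baseline: float   # value before the initiative began
    target: float     # desired value by a stated date
    actual: float     # most recent observed value

    def on_track(self) -> bool:
        # Progress is judged relative to the gap between baseline and target.
        return (self.actual - self.baseline) >= 0.5 * (self.target - self.baseline)

metrics = [
    Metric("Annual antifraud training",
           "fraud referrals to OIG per 1,000 trained staff", 2.0, 4.0, 3.2),
    Metric("Fraud examination units",
           "potential fraud patterns identified per quarter", 1.0, 5.0, 2.0),
]

for m in metrics:
    status = "on track" if m.on_track() else "needs corrective action"
    print(f"{m.initiative}: {m.actual} vs. target {m.target} ({status})")
```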
Furthermore, agency-wide strategic planning requirements established under the Government Performance and Results Act of 1993 (GPRA) and enhanced by the GPRA Modernization Act of 2010 required agencies to establish outcome-oriented goals and a balanced set of performance indicators, including output and outcome indicators as appropriate, in measuring or assessing progress toward goals. Although these requirements apply at the agency level, in our prior work we have reported that these requirements can serve as leading practices for strategic planning at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives. However, we found that SSA does not track most of its antifraud initiatives via outcome-oriented metrics to help the agency regularly measure progress in achieving targets. Of the 17 ongoing initiatives listed in SSA’s 2015 antifraud initiatives report, we found that 10 had metrics that were not outcome-oriented, and 4 did not have any metrics. For example, the percentage of staff trained in fraud detection and prevention methods (an output) is listed as a metric of the antifraud training initiative, but SSA does not evaluate the outcomes associated with those trainings such as the change in particular behaviors following the trainings (e.g., the number of referrals to OIG about schemes covered during the trainings). In addition, the 2015 report lists the fraud examination units and fraud case reviews as initiatives but does not include metrics for either. Further, the majority of antifraud initiatives do not provide targets against which to measure performance and track progress relative to a baseline. For example, the initiative to add antifraud language to SSA notices states the “number of notices issued annually with antifraud language is over 186 million notices” as a metric but does not include a target. As previously mentioned, the NAFC receives updates on SSA’s antifraud initiatives from the OAFP, but it plays a limited role in monitoring and making decisions about the initiatives. SSA antifraud initiatives reports state that the NAFC monitors certain antifraud initiatives. For example, the 2015 report notes that the NAFC identified 10 major antifraud initiatives to monitor that year. Although not all NAFC members attend its meetings, the NAFC meets at least quarterly to sustain attention on antifraud initiatives. The NAFC meetings provide an opportunity to present updates and ensure committee members are aware of ongoing initiatives. However, our review of summaries from the seven NAFC meetings held during fiscal year 2016 found that the NAFC generally did not take an active role in monitoring the initiatives. For example, six of the seven NAFC meeting minutes did not indicate instances in which the NAFC requested further information to determine whether antifraud initiatives were meeting their intended goals or recommend changes to improve antifraud initiatives. According to SSA officials with responsibility for implementing antifraud initiatives, the NAFC serves as a forum for updates about antifraud initiatives, including efforts being implemented in SSA’s regions, but makes few decisions about the initiatives and performs limited monitoring of them. SSA recognizes the importance of monitoring, but it is unclear how it plans to evaluate its antifraud activities and adapt them if necessary. We have previously reported that agencies may face challenges measuring outcomes of fraud risk management activities in a reliable way. 
These challenges include the difficulty of measuring the extent of deterred fraud, isolating potential fraud from legitimate activity or other forms of improper payments, and determining the amount of undetected fraud. However, as described in the Fraud Risk Framework, it is possible for agencies to gather additional information on the short-term or intermediate outcomes of some antifraud initiatives, which may be more readily measured than ultimate benefits. For example, although SSA does not have a metric to monitor the fraud examination units or to evaluate their effect on fraud, it is possible for SSA to identify more immediate outcomes such as the number of potential fraud patterns that the units uncover while doing nonreevaluation work. SSA’s antifraud strategic plan for 2016 to 2018 includes an objective to “adapt fraud risk management activities and communicate the results of monitoring and evaluations to management.” Although the plan highlights the importance of monitoring to help strengthen fraud risk management activities, it does not include specific steps for monitoring its antifraud initiatives. Identifying performance metrics, including baselines and targets as appropriate, and requiring additional information from the responsible components on progress made would help the OAFP and NAFC better monitor whether SSA is achieving its antifraud goals. Without this information, the OAFP and NAFC may not be able to determine whether SSA’s antifraud activities are operating effectively or determine whether changes are necessary. Proactively managing fraud risks can help facilitate SSA’s mission by ensuring that government services function as intended, and SSA has taken some steps to manage these risks. Many of SSA’s actions, such as establishing an entity dedicated to coordinating fraud risk management activities and instituting annual antifraud training throughout the agency, are consistent with leading practices and demonstrate a commitment to managing fraud risks. However, gaps exist in the agency’s fraud risk assessment, corresponding strategy design, and monitoring of antifraud activities. Despite some foundational efforts, such as piloting a method for conducting future assessments by profiling four particular risks, until it conducts a thorough, systematic assessment of its fraud risks, SSA will lack robust information on the risks that may most affect the integrity of its disability programs. As a result, SSA may be using its resources to combat fraud schemes that are unlikely to materialize or that have a relatively minimal effect on SSA’s finances or reputation. Although SSA plans to assess fraud risks, it is unclear when an assessment of its disability programs will occur and whether it will reflect leading practices. Absent a comprehensive fraud risk assessment that aligns with leading practices and is regularly updated, SSA will not be equipped to address the most significant fraud schemes against its disability programs before they occur. Similarly, without developing, documenting, and implementing a comprehensive antifraud strategy that builds on a comprehensive risk assessment, as called for by leading practices, SSA cannot ensure that its antifraud control activities are targeted to its fraud risks, and therefore may be using its resources for program integrity efforts inefficiently. Further, by creating the OAFP and reestablishing the NAFC, SSA has laid some important groundwork for monitoring and evaluating SSA’s antifraud activities.
However, without establishing outcome-oriented metrics and then regularly reviewing progress toward meeting these goals, the OAFP will not be able to determine whether the agency’s antifraud control activities are working as intended. The OAFP is charged with coordinating among SSA’s components that lead particular antifraud efforts and establishing outcome-oriented goals; the NAFC—with its higher, cross-agency authority—is well-positioned to consider, act on, and help enforce the OAFP’s recommendations. If the OAFP does not review progress toward meeting antifraud goals and then recommend changes to any antifraud control activities that are not meeting goals, SSA will be hard-pressed to address known fraud risks and to respond to emerging risks that could undermine the agency’s antifraud efforts. We recommend that the Commissioner (or Acting Commissioner) of SSA direct the OAFP to take the following four actions for its disability programs: lead a comprehensive fraud risk assessment that is consistent with leading practices, and develop a plan for regularly updating the assessment; develop, document, and implement an antifraud strategy that is aligned to its assessed fraud risks; work with components responsible for implementing antifraud initiatives to develop outcome-oriented metrics, including baselines and goals, where appropriate for antifraud activities; and review progress toward meeting goals on a regular basis, and recommend that the NAFC make changes to control activities or take other corrective actions on any initiatives that are not meeting goals. We provided a draft of this report to SSA for review and comment, and its written comments are reproduced as appendix I in this report. SSA agreed with our recommendations and emphasized its commitment to preventing and detecting fraud. The agency also provided technical comments, which we incorporated into the report as appropriate. SSA described how it plans to address our recommendations as follows: Regarding our recommendation to lead a comprehensive fraud risk assessment of SSA’s disability programs, SSA agreed and stated that it will conduct a fraud risk assessment on its major lines of business on a recurring schedule, beginning with disability in fiscal year 2017. The agency stated that it will conduct this assessment after developing a fraud risk management strategy that is consistent with leading practices identified in the Fraud Risk Framework. Although it is important to take a strategic approach to assessing fraud risks, we maintain that an effective fraud risk management strategy is based on a comprehensive fraud risk assessment. Regarding our recommendation to develop, document, and implement an antifraud strategy that is aligned to assessed fraud risks, SSA agreed and stated that it will prioritize and align the agency’s antifraud strategy to the outcomes of the fraud risk assessment. Regarding our recommendation to work with components responsible for implementing antifraud initiatives to develop outcome-oriented metrics, including baselines and goals, where appropriate for antifraud activities, SSA agreed and noted that it will collaborate with stakeholders, including its OIG, to implement this recommendation. Regarding our recommendation to review progress toward meeting goals on a regular basis and to recommend that the NAFC make changes to any control activities or initiatives that are not meeting goals, SSA agreed. 
The agency stated that, as it develops its antifraud strategy, the NAFC will measure and recommend necessary corrective action to ensure that its initiatives achieve SSA’s stated objectives and goals. In addition, SSA stated that it will seek opportunities to prioritize initiatives and activities through a risk-based approach to mitigating the risk of program fraud. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send a copy to the Acting Commissioner of Social Security. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Seto J. Bagdoyan at (202) 512-6722 or [email protected] or Cindy Brown Barnes at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contacts named above, Tonita Gillich (Assistant Director), Joel Green (Analyst-in-Charge), Daniel Bertoni, Erin McLaughlin, and Bridgette Savino made key contributions to this report. Also contributing to the report were Holly Dye, Alexander Galuten, Erin Godtland, Maria McMullen, Kevin Metcalfe, James Murphy, Shana Wallace, David Watsula, and Greg Whitney.

SSA’s Disability Insurance (DI) and Supplemental Security Income (SSI) programs provide cash benefits to millions of Americans with disabilities who are unable to work. Collectively, payments from DI and SSI were about $200 billion in fiscal year 2015. Although the extent of fraud in these programs is unknown, recent high-profile cases have highlighted instances in which individuals fraudulently obtained millions of dollars in benefits. GAO was asked to review SSA’s fraud risk management. This report examines SSA’s actions to manage fraud risks and the extent to which these actions align with leading practices for (1) establishing an organizational culture and structure conducive to fraud risk management, (2) identifying, assessing, and addressing fraud risks, and (3) monitoring and evaluating its antifraud activities. GAO reviewed SSA documents, such as training materials and operational guidance; and interviewed SSA officials at the agency’s headquarters, its three fraud examination units, and selected state disability determination offices chosen for their proximity to antifraud initiatives. GAO assessed those actions against leading practices identified in its Fraud Risk Framework. The Social Security Administration (SSA) has taken steps to establish an organizational culture and structure conducive to fraud risk management in its disability programs, but its new antifraud office is still evolving. In recent years, SSA instituted mandatory antifraud training, established a centralized antifraud office to coordinate and oversee the agency’s fraud risk management activities, and communicated the importance of antifraud efforts. These actions are generally consistent with GAO’s Fraud Risk Framework, a set of leading practices that can serve as a guide for program managers to use when developing antifraud efforts in a strategic way. However, SSA’s new antifraud office, the Office of Anti-Fraud Programs (OAFP), faced challenges establishing itself as the coordinating body for the agency’s antifraud initiatives.
For example, the OAFP has had multiple acting leaders, but SSA recently appointed a permanent leader of OAFP to provide accountability for the agency's antifraud activities. SSA has taken steps to identify and address fraud risks in its disability programs, but it has not yet comprehensively assessed these fraud risks or developed a strategic approach to help ensure its antifraud activities effectively mitigate those risks. Over the last year, SSA gathered information about fraud risks, but these efforts generally have not been systematic and did not assess the likelihood, impact, or significance of all risks that were identified. SSA also has several prevention and detection activities in place to address known fraud risks in its disability programs such as fraud examination units, which review disability claims to help detect fraud perpetrated by third parties. However, SSA has not developed and documented an overall antifraud strategy that aligns its antifraud activities to its fraud risks. Leading practices call for federal program managers to conduct a fraud risk assessment and develop a strategy to address identified fraud risks. Without conducting a fraud risk assessment that aligns with leading practices and developing an antifraud strategy, SSA's disability programs may remain vulnerable to new fraud schemes, and SSA will not be able to effectively prioritize its antifraud activities. SSA monitors its antifraud activities through the OAFP and its National Anti-Fraud Committee (NAFC), which serves as an advisory board to the OAFP, but the agency does not have effective performance metrics to evaluate the effect of such activities. The OAFP has responsibility for monitoring SSA's antifraud activities and establishing performance and outcome-oriented goals for them. It collects metrics to inform reports about its antifraud initiatives, and the NAFC receives regular updates about antifraud initiatives. However, the quality of the metrics varies across initiatives and some initiatives do not have metrics. Of the 17 initiatives listed in SSA's 2015 report on antifraud initiatives, 10 had metrics that did not focus on outcomes, and 4 did not have any metrics. For example, SSA lacks a metric to help monitor the effectiveness of its fraud examination units. Leading practices in fraud risk management call for managers to monitor and evaluate antifraud initiatives with a focus on measuring outcomes. Without outcome-oriented performance metrics, SSA may not be able to evaluate its antifraud activities, review progress, and determine whether changes are necessary. GAO recommends SSA (1) conduct a comprehensive fraud risk assessment for its disability programs, (2) develop a corresponding antifraud strategy, (3) develop outcome-oriented metrics for antifraud activities, and (4) review progress and change activities as necessary. SSA agreed with GAO's recommendations. |
The 1952 Immigration and Nationality Act, as amended, is the primary body of law governing immigration and visa operations. The Homeland Security Act of 2002 generally grants DHS exclusive authority to issue regulations on, administer, and enforce the Immigration and Nationality Act and all other immigration and nationality laws relating to the functions of U.S. consular officers in connection with the granting or denial of visas. As we reported in July 2005, the act also authorized the assignment of DHS employees to U.S. embassies and consulates to provide expert advice and training to consular officers regarding visa security, among other things. In particular, the act mandated that VSOs on-site in Saudi Arabia review all visa applications prior to final adjudication by consular officers. A September 2003 Memorandum of Understanding between State and DHS further outlines the responsibilities of each agency with respect to visa issuance. State manages the visa process, as well as the consular corps and its functions at 211 visa-issuing posts overseas. In addition, State provides guidance, in consultation with DHS, to consular officers regarding visa policies and procedures. DHS is responsible for establishing visa policy, reviewing implementation of the policy, and providing additional direction. This agreement also broadly defines the DHS officers’ responsibilities in reviewing visa applications at consular posts overseas, indicating, among other things, that they will provide expert advice to consular officers regarding specific security threats relating to visa adjudication and will also provide training to consular officers on terrorist threats and applicant fraud. The process for determining who will be issued or refused a visa contains several steps, including documentation reviews, in-person interviews, collection of biometrics (fingerprints), and cross-referencing of the applicant’s name against the Consular Lookout and Support System (CLASS) (see fig. 1). In 2002, we recommended actions to strengthen the visa process as an antiterrorism tool, including establishing a clear policy on the priority attached to addressing national security concerns through the visa process; creating more comprehensive, risk-based guidelines and standards on how consular officers should use the visa process to screen against potential terrorists; performing a fundamental reassessment of staffing and language skill requirements for visa operations; and revamping and expanding consular training courses to place more emphasis on detecting potential terrorists. Since 2002, State, DHS, and other agencies have taken numerous steps to strengthen the visa process as an antiterrorism tool and increase its overall efficiency and effectiveness. In particular, the Assistant Secretary in the Bureau of Consular Affairs has taken a leading role in implementing changes to the visa process and promoting its use as a screen against potential terrorists. However, additional actions could enhance the visa process. State has increased and clarified visa policies and guidance, but additional steps are needed to ensure these changes are implemented. Additionally, State has increased resources to strengthen the visa process, including hiring additional consular officers, targeting recruitment, and expanding training efforts; however, staffing limitations remain a concern, posts seek further training, and other gaps remain. 
Lastly, while interagency information-sharing efforts have increased, consular officers do not have direct access to detailed information from the FBI’s criminal records, which would help facilitate the approval of legitimate travelers. We reported in October 2002 that consular officers held differing views on balancing the need for national security and customer service in the visa process. In addition, State had not issued comprehensive policy guidance to posts regarding how consular officers should react to the heightened border security concerns following the September 11 attacks. Over the past three years, State has implemented several changes to address these issues, and consular officials stated that the report and its recommendations provided a framework for these changes. For example, in February 2003, Consular Affairs issued guidance identifying national security as the first priority of the visa process. Consular officers we interviewed said the guidance was generally clear, and officers at all eight posts we visited viewed security as the most critical element of the visa process. In addition, Consular Affairs identified certain areas where additional guidance was needed to streamline visa procedures. State has issued more than 80 standard operating procedures, in consultation with DHS, to inform consular officers on issues such as fingerprinting and special clearance requirements. Despite these improvements, some consular officers we interviewed stated that it has been difficult to synthesize and consistently apply all of the changes to the visa process. The guidance provided to consular officers in the field is voluminous and can change rapidly, according to consular officials. The Consular Affairs Bureau may notify its officers overseas of policy changes through cables, postings on its internal Web site, and informal communications. However, the bureau has not consistently updated the consular and visa chapters of the Foreign Affairs Manual—State’s central resource for all regulations, policies, and guidance—to reflect these changes. Throughout 2005, the bureau has updated several portions of the manual, but, as of June 2005, some sections had not been updated since October 2004. Consular officials stated that they are overhauling the standard operating procedures to eliminate those that are obsolete and incorporate current requirements into the manual. However, while the Consular Affairs Bureau’s internal Web site contains all of the standard operating procedures, it also links to out-of-date sections in the manual. As a result, there is no single, reliable source for current information. Consular officers also indicated that additional guidance is needed on certain interagency protocols. Specifically, 15 out of 25 visa chiefs we interviewed reported that additional guidance would be helpful regarding the interaction between the Bureau of Consular Affairs and DHS. For example, DHS personnel stationed overseas work on a variety of immigration and border security activities and serve in a regional capacity. However, DHS has not provided guidance to consular officers regarding the roles and geographic responsibilities for its personnel. In 2002, we found that at some posts the demand for visas, combined with increased workload per visa applicant, exceeded the available staff. As a result, we recommended that State perform a fundamental reassessment of staffing requirements for visa operations. 
In our report issued today, we have noted that State has received funding to address staffing shortfalls, but we continue to see the need for a reassessment of resource needs worldwide. Through the Diplomatic Readiness Initiative and other sources, State has increased its Foreign Service officer consular positions by 364, from 1,037 in fiscal year 2002 to 1,401 in fiscal year 2005. Moreover, a senior human resource official anticipates that many officers hired under the Diplomatic Readiness Initiative will begin to reach promotion eligibility for midlevel positions within the next two years. However, as we have previously reported in 2003, the overall shortage of midlevel Foreign Service officers would remain until approximately 2013. As of April 30, 2005, we found that 26 percent of midlevel consular positions were either vacant or filled by an entry-level officer (see fig. 2). In addition, almost three-quarters of the vacant positions were at the FS-03 level—midlevel officers who generally supervise entry-level staff—which consular officials attribute to low hiring levels prior to the Diplomatic Readiness Initiative and the necessary expansion of entry-level positions to accommodate increasing workload requirements after September 11, 2001.

[Figure 2 data: 44 senior positions; 478 midlevel positions, of which 58 were vacant, 65 were staffed with entry-level officers, and 355 were staffed by at least midlevel officers; 879 entry-level positions.]

During our February 2005 visits to Riyadh, Jeddah, and Cairo, we observed that the consular sections were staffed with entry-level officers on their first assignment with no permanent, midlevel visa chief to provide supervision. Although these posts had other mid- or senior-level consular officers, their availability on visa issues was limited because of their additional responsibilities. For example, the head of the visa section in Jeddah was responsible for managing the entire section as well as services for American citizens due to a midlevel vacancy in that position. At the time of our visit, the Riyadh Embassy did not have a midlevel visa chief. Similarly, in Cairo, there was no permanent midlevel supervisor between the winter of 2004 and the summer of 2005, and Consular Affairs used five temporary staff on a rotating basis during this period to serve in this capacity. Entry-level officers that we spoke with stated that due to the constant turnover, the temporary supervisors were unable to assist them adequately. At the U.S. consulate in Jeddah, entry-level officers expressed concern about the lack of a midlevel supervisor. Officers in Jeddah stated that they relied on the guidance they received from the DHS visa security officer assigned to the post. However, as of July 2005, visa security officers are stationed only at two consular posts in Saudi Arabia—not at any of the other 209 visa-issuing posts overseas. If the Consular Affairs Bureau identifies a need for additional staff in headquarters or overseas, it may request that the Human Resources Bureau establish new positions. In addition, posts can also describe their needs for additional positions through their consular packages—a report submitted annually to the Consular Affairs Bureau that details workload statistics and staffing requirements, among other things.
For example, in December 2004, during the course of our work, the consular section in Riyadh reported to Washington that there was an immediate need to create a midlevel visa chief position at post, and State worked with human resource officials to create this position, which, according to State officials, will be filled by summer 2005. However, the current assignment process does not guarantee that all authorized positions will be filled, particularly at hardship posts. Historically, State has rarely directed its employees to serve in locations for which they have not bid on a position, including hardship posts or locations of strategic importance, due to concerns that such staff may be more apt to have poor morale or be less productive. Further, though Consular Affairs can prioritize positions that require immediate staffing, according to a deputy assistant secretary for human resources, it generally does not do so. For example, Consular Affairs could choose not to advertise certain positions of lesser priority during an annual assignment cycle. However, senior Consular Affairs officials acknowledged that they rarely do this. According to these officials, Consular Affairs does not have direct control over the filling of all consular positions and can often face resistance from regional bureaus and chiefs of mission overseas who do not want vacancies at their posts. Therefore, due to State’s decision not to force assignments, along with the limited number of midlevel officers available to apply for open positions, important positions may remain vacant. In 2002, we found that not all consular officers were proficient enough in their post’s language to hold interviews with applicants. We also found that training for new consular officers was focused on detecting intending immigrants through the visa process, with little training given on detecting possible terrorists. Today we are reporting that State has made a number of improvements in its recruitment of language-proficient Foreign Service officers, expanded and revamped consular training, and increased the attention paid to fraud prevention. However, we found that additional actions would support ongoing improvements. State has created programs to better target its recruitment of Foreign Service officers who speak critical languages. For example, in March 2004, State created the “Critical Needs Language Program,” which increases the opportunities for appointment to the Foreign Service for new hires proficient in Arabic, Chinese, Indic, Korean, Russian, or Turkic, and who have passed the Foreign Service Exam. From March 2004 through May 2005, 172 of the 564 Foreign Service officers hired were proficient in one of these languages. Despite these improvements, additional actions are needed to fill continuing language proficiency shortfalls. As of April 30, 2005, State reported that about 14 percent of consular-coned Foreign Service officers in language-designated positions did not meet language requirements for their position. State has revamped and expanded consular training to enhance visa security. For example, in October 2003, the Basic Consular Course was extended from 26 days to 31 days, and classes were added in analytical interviewing and fraud prevention. In addition, in March 2002, State created a new course in advanced name-checking. However, additional training could further assist consular officers.
All of the posts we contacted reported that additional training on terrorist travel trends would be helpful, with 16 posts responding that such training would be extremely helpful. Some posts also reported that additional briefings on counterterrorism techniques specific to post and fraud prevention would be helpful. State has taken several steps to increase its focus on preventing and detecting fraud in the visa process. For example, by 2004, State’s Bureau of Diplomatic Security had deployed 25 visa fraud investigators to U.S. embassies and consulates. In addition, State’s Office of Fraud Prevention Programs has developed several ways for consular officers in the field to learn about fraud prevention, including developing an Internet-based “E-room,” with more than 500 members, which serves as a discussion group for consular officers, as well as a place to post cables and lessons learned from prior fraud cases. However, until recently, the department did not use a systematic process to identify consular posts with the highest degree of visa fraud. According to State officials, fraud rankings for consular posts have not been based on an objective analysis using standardized criteria, but have been self-reported by each post. As a result, previous resources for fraud prevention, including the 25 visa fraud investigators assigned in 2004, may not have been allocated to posts with the highest need. We also plan to report later this year on the internal controls that are in place to mitigate the risks of visa malfeasance—the provision of a visa in exchange for money or something else of value—and intend to make several recommendations to help ensure adherence to these controls. The September 11 attacks highlighted the need for comprehensive information sharing. In January 2005, GAO identified effective information sharing to secure the homeland as a high-risk area of the U.S. government due to the formidable challenges the federal government still faces in this area. With cooperation from other federal agencies, State has increased the amount of information available to consular officers in CLASS. Name-check records from the intelligence community have increased fivefold from 48,000 in September 2001 to approximately 260,000 in June 2005, according to consular officials. Moreover, consular officials told us that, as of fall 2004, CLASS contained approximately 8 million records from the FBI. In addition, State has developed more efficient methods of acquiring certain data from law enforcement databases. For example, State established a direct computer link with the FBI to send descriptive information from the FBI’s National Crime Information Center (NCIC) to CLASS on a daily basis. While the additional records in CLASS have helped consular officers detect those who might seek to harm the United States, many consular officers we interviewed stated that the increased volume of records and lack of access to other detailed information can lead to visa-processing delays. In particular, consular officers do not have direct access to detailed information in the FBI’s criminal history records. Section 403 of the USA PATRIOT Act of 2001 directs the Attorney General and the FBI to provide State with access to extracts of certain files containing descriptive information for the purpose of determining whether a visa applicant has a criminal history record contained in the NCIC Interstate Identification Index (or Index).
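The limits of an extract-based name check can be illustrated with a minimal sketch: because extracts carry only descriptive data, a record often cannot be ruled in or out on biographical information alone, which is what forces the fingerprint step described below. The record fields and matching rule here are simplified assumptions for illustration, not the actual CLASS or NCIC design.

```python
# Hypothetical sketch of an extract-based name check of the kind described
# above: extracts carry only descriptive data (no charges or dispositions),
# so a possible match can only be resolved through fingerprint verification.
# Field names and matching logic are illustrative assumptions.

ncic_extracts = [
    # Extracts contain biographical descriptors only.
    {"name": "JOHN DOE", "dob": "1970-03-12", "height_cm": 180},
    {"name": "JOHN DOE", "dob": None, "height_cm": None},  # sparse record
]

def possible_matches(applicant: dict, extracts: list) -> list:
    """Return extract records that cannot be ruled out on biographical data."""
    hits = []
    for rec in extracts:
        if rec["name"] != applicant["name"]:
            continue
        # A differing, known date of birth rules the record out; a missing
        # one does not, so the applicant must be fingerprinted to resolve it.
        if rec["dob"] and rec["dob"] != applicant["dob"]:
            continue
        hits.append(rec)
    return hits

applicant = {"name": "JOHN DOE", "dob": "1982-07-04"}
if possible_matches(applicant, ncic_extracts):
    print("Possible NCIC match: collect fingerprints before adjudication.")
else:
    print("No match: proceed with adjudication.")
```

In this sketch, the sparse second record cannot be excluded, so the applicant would be fingerprinted even though the record may not pertain to him, which is the source of the delays and fees discussed in the following paragraphs.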
The USA PATRIOT Act also states that access to an extract does not entitle consular officers to obtain the full contents of the corresponding records. In accordance with this mandate, FBI officials stated that the bureau provides to CLASS extracts that contain all available biographical information, such as the date of birth and height of the person with the criminal record. As a result, when conducting a CLASS name check, consular officers told us they may not be able to determine whether an FBI file matches an applicant because the extracts lack sufficient biographical information. Moreover, in accordance with section 403, the extracts do not contain details such as the charges or dispositions of the cases, which are necessary to determine if the applicant might be ineligible for a visa. For example, the information in CLASS does not distinguish between a conviction for a crime such as kidnapping and an acquittal on charges of driving while intoxicated. Consular officers must therefore fingerprint applicants who have a potential match in the Index so that the FBI can make a positive identification in its records and officers can then ascertain whether the information contained in the criminal record would make the applicant ineligible for a visa. In fiscal year 2004, of the more than 40,000 sets of fingerprints consular officers sent to the FBI for verification, about 29 percent were positive matches between the applicant and a criminal record in the Index. State officials we spoke with estimated that, of those applicants who were positively identified, only about 10 percent were denied a visa based on the information provided by the FBI. Moreover, fingerprinted applicants are charged an additional $85 processing fee and, as of the spring of 2005, must wait an estimated 4 to 8 weeks for a response from Washington, D.C., before adjudication can proceed. According to FBI and State officials, the processing delays are due to inefficiencies in the way the prints are sent to the FBI for clearance (see fig. 3). To facilitate more efficient fingerprint processing, State and the FBI are implementing an electronic fingerprint system whereby consular officers will scan applicants' fingerprints at post and submit them directly into the FBI's database. FBI and State officials told us that posts would be notified within 24 hours if the record in question matched the applicant. However, thousands of visa applicants could still face lengthier wait times and additional fingerprinting fees that they would otherwise not have incurred, because consular officers lack enough information at the time of the interview to determine if the records in CLASS match the applicant.

The FBI and State have discussed several options to help ensure that consular officers can facilitate legitimate travel; however, each would require legislative changes and would entail associated trade-offs. These options include the following. First, consular officials told us that access to additional information in a criminal history file, such as the charge and disposition of a case, would allow their officers to determine which crimes are serious enough to require a positive fingerprint match prior to adjudication. However, FBI officials noted that there are some technical limitations on extracting specific pieces of data from the criminal history records. Second, to avoid some of the technical limitations associated with the Index, FBI officials stated that it would be easier to provide visa adjudicators access to the full criminal history records.
However, these officials told us that assurances would need to be in place to prevent misuse of the information, given its sensitive nature. Indeed, State and the FBI have already negotiated a Memorandum of Understanding aimed at protecting the information passed from NCIC to CLASS. However, consular officials indicated that their officers may need access only to the criminal charge and the disposition of the case to adjudicate a visa case more efficiently.

In our report issued today, we are recommending, among other things, that State and DHS, in consultation with appropriate agencies, clarify certain visa policies and procedures and facilitate their implementation, and ensure that consular sections have the necessary tools to enhance national security and promote legitimate travel, including effective human resources and training. In particular, we recommend that State develop a comprehensive plan to address vulnerabilities in consular staffing worldwide, including an analysis of staffing requirements and shortages, foreign language proficiency requirements, and fraud prevention needs, among other things. The plan should systematically determine priority positions that must be filled worldwide based on the relative strategic importance of posts and positions and realistic assumptions about available staff resources. We also suggest that Congress consider requiring State and the FBI to develop and report on a plan to provide visa adjudicators with more efficient access to certain information in the FBI's criminal history records to help facilitate the approval of legitimate travelers.

In commenting on a draft of our report, State described the report as a fair and balanced evaluation of the improvements made to the visa process. State agreed with most of our conclusions and indicated that it is taking action to implement the majority of our recommendations. However, State disagreed with our recommendation that it prepare a comprehensive plan to address vulnerabilities in consular staffing, arguing that it already had such a plan. Based on our analysis, we continue to believe it is incumbent on the department to conduct a worldwide analysis to identify high-priority posts and positions, such as supervisory consular positions in posts with high-risk applicant pools or those with high workloads and long wait times for applicant interviews. As we note in our report, at the time of our work, the midlevel visa chief positions in Riyadh and Jeddah, Saudi Arabia, and Cairo, Egypt, were not filled with permanent midlevel officers. This was a serious deficiency given that the visa sections were staffed with officers on their first tour. Although State noted that it anticipated addressing this shortage of midlevel consular officers before 2013, it did not indicate when that gap would be filled. Moreover, State's bidding and assignment process does not guarantee that the positions of highest priority will always be filled with qualified officers. Therefore, a further assessment is needed to ensure that State has the right people in the right posts with the necessary skill levels.

In September 2003, DHS assigned Visa Security Officers (VSO) to consular posts in Saudi Arabia and plans to assign staff to other posts to strengthen the visa process at these locations.
As we addressed in our July 2005 report, according to State Department consular officers, the deputy chief of mission, and DHS officials, VSOs in Saudi Arabia enhance the security of the visa adjudication process at these consular posts, though several issues raise concerns about the VSOs' role and impact. VSOs in Saudi Arabia provide an additional law enforcement capability to the visa adjudication process and have access to and experience using important law enforcement information not readily available to consular officers. Moreover, VSOs' border security and immigration experience can assist consular officers during the visa process. The consular sections in Riyadh and Jeddah have incorporated the VSOs' review of all visa applications into the adjudication process in Saudi Arabia. In addition to reviewing applications, the VSOs may conduct secondary interviews with some visa applicants based either on findings from their application reviews or a consular officer's request.

Despite the VSOs' positive effect on visa operations, however, several concerns exist about their role and overall impact. The requirement that VSOs review all visa applications in Saudi Arabia limits the amount of time they can spend on training and other valuable services. We observed that VSOs in Riyadh and Jeddah must spend a significant amount of time reviewing all visa applications, including those of low-risk applicants or individuals who do not pose a threat to national security, as well as those that have preliminarily been refused by consular officers. A Visa Security Program official noted that this mandate applies only to visa security operations in Saudi Arabia, not to the other posts to which DHS plans to expand the program. VSOs, DHS and State officials, and the deputy chief of mission all agreed that the mandate to review all applications was forcing the VSOs to spend time on lower-priority tasks, limiting their ability to perform other important activities, such as providing training or conducting additional secondary interviews of applicants.

DHS has not maintained measurable data to fully demonstrate the impact of VSOs on the visa process. The VSOs stationed in Riyadh during our visit estimated that, in about 15 cases between October 2004 and February 2005, their review of visa applications had led them to recommend that visas be refused after consular officers had preliminarily decided to issue them. In addition, the DHS officials in Saudi Arabia and in Washington, D.C., were able to provide anecdotal examples of assistance provided to the consular officers. However, DHS has not developed a system to fully track the results of visa security activities in Saudi Arabia. For example, DHS could not provide data to demonstrate the number of cases for which VSOs have recommended refusal.

DHS plans to expand the Visa Security Program to five additional posts in fiscal year 2005; however, the assignments of VSOs were delayed at four of the five selected expansion posts. DHS attributed the delay to resistance by State, as well as funding problems; State and chiefs of mission attributed the delays to various outstanding questions about the program.
Following DHS's initial request in June 2004 to assign 21 VSOs to five expansion posts, embassy officials raised questions and concerns, including questions about the criteria used by DHS to select expansion posts, the reasoning for the number of VSOs requested for the posts, and DHS's plans to coordinate with existing law enforcement and border security staff and programs at post. In 2004 and 2005, DHS provided responses, through State's Bureau of Consular Affairs, to the questions raised by the chiefs of mission at four of the expansion posts. According to DHS, the responses were sufficient to answer the concerns. We reviewed DHS's responses to the posts and identified a number of issues that had not been fully addressed, such as what criteria DHS would use to demonstrate the effectiveness of its officers. Nonetheless, the chiefs of mission at three posts approved DHS's National Security Decision Directive 38 requests in March and June 2005, while, as of June 2005, one post had still not approved the request.

Although DHS plans to expand the Visa Security Program in fiscal year 2005 and beyond, it does not have a strategic plan that defines mission priorities and long-term goals and identifies the outcomes expected at each post. We have identified the development of a strategic plan as an essential component of measuring progress and holding agencies accountable. The development of an overall strategic plan for the Visa Security Program prior to the expansion of the program may have addressed the questions initially raised by State and embassy officials that led to the delay of the assignment of VSOs. Moreover, a strategic plan would provide a framework for DHS to address broader questions regarding the selection criteria for expansion, the roles and responsibilities of VSOs, and the cost of establishing the program at posts. Officials from DHS and State, as well as consular officials we contacted overseas, all agreed that the development of such a plan would be useful to guide visa security operations in Saudi Arabia and other posts. It would also be useful to inform the Congress, as well as State and other agencies that participate in the visa process at consular posts overseas.

In our July 2005 report, we recommended that DHS develop a strategic plan to guide the operations of the Visa Security Program in Saudi Arabia and the program's expansion to other embassies and consulates. This plan should define mission priorities and long-term goals and identify expected outcomes. In addition, the strategic plan and supporting documents should include the criteria used to select the locations for expansion, the justification for the number of VSOs at each post, the costs associated with assigning VSOs overseas, and their roles and responsibilities in relation to other agencies at post. We also recommended that DHS develop and maintain comprehensive performance data that track the results and demonstrate the impact of VSO activities. We also proposed that Congress consider amending current legislation, which requires the review of all visa applications in Saudi Arabia, to allow DHS the flexibility to determine which applications VSOs will review prior to final adjudication by consular officers. This would allow VSOs to focus on the applications of those who may pose a risk to national security, providing them time to perform other tasks that could benefit consular officers.
In commenting on our report, DHS stated that it was taking actions to implement performance measurements and a strategic plan for the Visa Security Program, as described in our recommendations. DHS indicated that it is expanding the tracking and measurement of performance data to better reflect program results, and is developing a strategic plan that will integrate the key elements described in our recommendation. Regarding the matter for congressional consideration to provide DHS with the flexibility to determine the review of visa applications in Saudi Arabia, DHS noted that a legislative change should maintain the department's authority and discretion in determining the scope of the VSOs' review. DHS agreed that it needed to expand some of the VSOs' activities in Saudi Arabia, such as providing additional training, which we found was not being provided because of the volume of work that resulted from fulfilling the legislative requirement.

The visa process requires a balance between facilitating legitimate travel and identifying those who might harm the United States. State, in coordination with other agencies, has made substantial improvements to the visa process to strengthen it as a national security tool. DHS has also taken steps to assign personnel to consular posts to provide an additional layer of security to the visa process in these locations. However, we identified areas where additional management actions are needed by State and DHS to further improve the efficiency and effectiveness of the visa process.

Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or Members of the Subcommittee may have.

For questions regarding this testimony, please call Jess T. Ford at (202) 512-4128 or [email protected]. Individuals making key contributions to this statement include John Brummet, Assistant Director, and Joseph Carney, Daniel Chen, Kathryn Hartsburg, and John F. Miller.

Border Security: Strengthened Visa Process Would Benefit From Improvements in Staffing and Information Sharing. GAO-05-859. September 13, 2005.
Border Security: Actions Needed to Strengthen Management of Department of Homeland Security's Visa Security Program. GAO-05-801. July 29, 2005.
Border Security: Streamlined Visas Mantis Program Has Lowered Burden on Foreign Science Students and Scholars, but Further Refinements Needed. GAO-05-198. February 18, 2005.
Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001. September 9, 2004.
Border Security: Additional Actions Needed to Eliminate Weaknesses in the Visa Revocation Process. GAO-04-795. July 13, 2004.
Visa Operations at U.S. Posts in Canada. GAO-04-708R. May 18, 2004.
Border Security: Improvements Needed to Reduce Time Taken to Adjudicate Visas for Science Students and Scholars. GAO-04-371. February 25, 2004.
State Department: Targets for Hiring, Filling Vacancies Overseas Being Met, but Gaps Remain in Hard-to-Learn Languages. GAO-04-139. November 19, 2003.
Border Security: New Policies and Procedures Are Needed to Fill Gaps in the Visa Revocation Process. GAO-03-798. June 18, 2003.
Border Security: Implications of Eliminating the Visa Waiver Program. GAO-03-38. November 22, 2002.
Technology Assessment: Using Biometrics for Border Security. GAO-03-174. November 15, 2002.
Border Security: Visa Process Should Be Strengthened as an Antiterrorism Tool. GAO-03-132NI. October 21, 2002.
Asian soybean rust (ASR), a disease caused by the fungus Phakopsora pachyrhizi, requires living host cells to survive. It can infect over 90 host species of legumes, such as kidney beans, chickpeas, and kudzu. When ASR infects soybeans, it causes the plants to lose their leaves prematurely, which reduces the size and number of the beans. In areas where the disease commonly occurs, yield losses of up to 80 percent have been reported. Environmental factors are critical to the incidence and severity of ASR. Long periods of leaf wetness, high humidity, and temperatures between 60 and 80 degrees Fahrenheit are ideal for spore germination. About 7 days after plants are infected with ASR, small brown spots surrounded by yellow rings appear on the leaf's upper surface (stage 1). Within 10 days, pustules form in the spots, primarily on the undersides of the leaves (stage 2). These pustules have raised centers that eventually break open to reveal masses of fungal spores, called urediniospores (stage 3). Pustules can produce urediniospores for about 3 weeks. When the wind blows, the spores are dispersed, spreading the infection to other fields. Once windborne, the spores can reportedly travel hundreds of miles within a single day. Figure 1 shows the progression of infection on a soybean plant.

ASR was first detected in Japan in 1902. By 1934, the disease was found in several other Asian countries as well as Australia. In 1951, the disease was first reported on soybeans in India. The disease was confirmed in several African countries in 1996, where widespread infestations occurred. In 2001, ASR was found in Paraguay and was detected in Argentina the following year. By 2002, the disease was widespread throughout Paraguay and in some limited areas of Brazil. ASR was first discovered in the continental United States in Louisiana on November 9, 2004. Researchers believe the disease was carried to the United States by tropical storms. Figure 2 shows the pattern of ASR's spread throughout the world.

USDA has been following the path of the disease and planning for its introduction into the United States for several years. In May 2002, three USDA agencies—the Animal and Plant Health Inspection Service (APHIS), the Cooperative State Research, Education, and Extension Service (CSREES), and the Agricultural Research Service—together with the National Plant Board, industry, and several land grant universities formed the ad hoc Soybean Rust Committee. In addition, USDA established the National Plant Diagnostic Network to enable diagnosticians, state regulatory personnel, and first detectors to communicate images and methods of detection for ASR as well as other diseases in a timely manner. USDA determined that once ASR arrived in the United States it could not be eradicated because of its rapid transmission rate and an abundance of host species. Thus, it decided fungicides would be the primary means of managing ASR in the United States and Canada until researchers can develop acceptable soybean cultivars that are resistant to the disease. Although the disease has resulted in significant losses in yield and production in other countries, soybean growers have learned to successfully manage the disease by applying appropriate fungicides. However, the use of such fungicides increases the production costs associated with soybeans, which had typically required relatively little or no management in the United States.
For example, during the 2003 to 2004 growing season, Brazilian growers spent close to $1 billion on fungicides to prevent and reduce the spread of the disease. In the United States, the costs of applying fungicides for ASR are estimated to range from $10 to $35 per acre for each application. The total cost of applying fungicides will depend on the number of acres treated. All pesticides, including fungicides, must be registered and labeled in accordance with EPA regulations in order for them to be sold or used in the United States. If emergency conditions exist, however, EPA can grant an emergency exemption to state and federal agencies that allows the unregistered use of the pesticide under section 18 of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). EPA regulations require state and federal agencies to submit an application for emergency exemptions and set limits on the duration of those exemptions. Under the Federal Food, Drug, and Cosmetic Act, as amended by the Food Quality Protection Act, EPA sets tolerances for pesticides—the maximum residue levels of pesticides permitted on foods. Unlike its process for registering fungicides, EPA may grant an emergency exemption for the use of a fungicide before it sets a tolerance for that fungicide. Fungicides for ASR are classified as preventative or curative. Preventative fungicides, such as strobilurins, prevent fungi from successfully infecting and/or penetrating the host tissue of the plant, while curative fungicides, such as triazoles, inhibit or stop the development of infections that have already begun. In addition, some fungicides contain both preventative and curative chemicals. To properly manage ASR, growers must apply the right class of fungicides at the appropriate time and with proper equipment. Applying fungicides too early can increase production costs, and the fungicide could wear off by the time an infection actually occurs. However, if growers wait too long to apply the fungicide, the disease could progress to an untreatable stage, and some crop could be lost. In order for fungicides to be optimally effective, they must be applied to the whole plant and be placed as deeply into the canopy as possible because the disease usually begins in the lower canopy before traveling into the middle and upper canopies as the crop matures. Fungicides can be applied by ground sprayers or from the air. Aerial application is a viable alternative when rainfall makes the fields too muddy or when large amounts of soybean acreage need to be sprayed within a short time. In April 2004, USDA’s Economic Research Service (ERS) conducted a study to project the potential economic losses associated with various degrees of ASR infestation in the United States. ERS concluded that the extent of economic impacts from ASR will depend on the timing, location, spread, and severity of the disease as well as the response of growers, livestock producers, and consumers of agricultural commodities. For the first year of ASR’s establishment in the United States, ERS estimated that the expected value of net economic losses could range from $640 million to $1.3 billion, depending on the geographic extent and severity of the disease’s initial entry. When ASR was discovered in Louisiana in November 2004, it was too late in the crop year to damage 2004 soybean production. 
Since ASR must have a living host to survive the winter, USDA believed the disease could only successfully survive over the winter in the southernmost areas of the United States and would have to be reintroduced each year into more northern soybean-producing areas. Therefore, its arrival provided an early warning to USDA, growers, and industry, allowing them time to prepare strategies for minimizing the impact of the disease before the 2005 crop year. USDA’s development and implementation of a coordinated framework was instrumental in providing an effective response to ASR on soybeans in 2005. The framework includes (1) a surveillance and monitoring network, (2) a Web-based information system, (3) decision criteria for fungicide application, (4) predictive modeling, and (5) outreach for training, education, and information dissemination. The goal of the framework was to provide stakeholders with effective decision support for managing soybean rust during the 2005 growing season, and USDA was generally successful in doing so. However, inconsistencies in how researchers monitor, test, and report on the disease could lead to incomplete or inaccurate data and detract from the value of future prediction models. Furthermore, the success of the 2005 framework was due in part to the leadership of senior USDA officials, who were able to mount a national campaign. The transfer of operational responsibilities to a land grant university, under the direction of USDA, raises concerns about the department’s ability to maintain the level of coordination, cooperation, and national priority that was achieved in 2005 to address ASR. The early detection of ASR through the sentinel plot network—one of the key components of the surveillance and monitoring program—was effective, according to officials in 23 of the 25 states we surveyed. Sentinel plots—typically about 2,500 square feet of soybeans, other host plants, or a combination of the two—are planted a few weeks before the beginning of the growing season and serve as an advance warning of approaching ASR. In total, states monitored more than 1,000 sentinel plots in 2005. USDA and the North Central Soybean Research Program, in affiliation with the United Soybean Board, funded the sentinel plot network established under the framework. USDA provided about $800,000 for a total of 300 plots in the 31 soybean-producing states and an additional 20 plots in 4 other states that produce dry beans, such as navy beans and chick peas. (USDA plans to fund a similar number of sentinel plots in 2006.) The North Central Soybean Research Program and United Soybean Board provided approximately $390,000 for a total of 400 plots in 20 states (20 plots per state). In addition, some states established and monitored other plots during the growing season. Officials of the 31 states we surveyed provided data on the number of sentinel plots sponsored by USDA and others during 2005 (see fig. 3). State personnel monitored these plots throughout the growing season to determine the presence and severity of ASR. Within each state, a designated official entered the monitoring data from the plots into USDA’s ASR Web site, an online, real-time data system. Once the data were entered, growers and others could access the information to determine in which counties ASR-infected plants were found. In addition, state specialists used the Web site to provide guidance to growers about whether and what type of fungicides should be applied. 
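To make the weekly reporting concrete, the sketch below shows one way a sentinel-plot observation of the kind described above (state, county, host, and disease severity) could be structured before being posted to a reporting system. The field names and example values are illustrative assumptions, not USDA's actual Web site schema.

```python
# Illustrative structure for a weekly sentinel-plot observation.
# Field names and example values are assumptions for illustration;
# they are not USDA's actual reporting schema.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class SentinelObservation:
    state: str           # reporting state
    county: str          # county where the plot is located
    host: str            # host plant monitored (e.g., soybean, kudzu)
    observed: date       # date the plot was scouted
    asr_detected: bool   # whether ASR was found on the plot
    severity_pct: float  # estimated percentage of leaf area affected

obs = SentinelObservation(
    state="GA", county="Example County", host="soybean",
    observed=date(2005, 8, 15), asr_detected=True, severity_pct=2.5,
)

# Serialize the record for posting (dates rendered as ISO strings).
print(json.dumps(asdict(obs), default=str, indent=2))
```

A fixed record structure like this is one way to encourage the consistent, comparable reporting that the prediction models discussed below depend on.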
Once ASR was detected and confirmed in a state, the framework specified that mobile monitoring teams—one assigned to each of five regions—would be dispatched to the affected areas to help determine the severity and spread of the disease. During the 2005 growing season, the disease was confined to the southeastern region, and therefore only the team assigned to that region was deployed. Researchers use the information from states on sentinel plot monitoring, including diagnostic testing results, to develop prediction models that estimate where and how severe ASR will be in certain areas of a state or county. These models depend in large part on timely and consistent data from the state observations and diagnostic testing results. Researchers will rely on this information, in part, to validate the predictive models over the next few years, while extension personnel and growers rely on it to make informed and timely decisions on the need to apply fungicides.

USDA asked the states to monitor their sentinel plots at least once a week and report the results on a weekly basis by posting them to a restricted USDA Web site. Monitoring results from the sentinel plots supported by USDA and the North Central Soybean Research Program were to include, for example, the location, host, and severity of the disease. However, state officials did not consistently report weekly updated information to the Web site during the 2005 growing season. Updates from the states ranged from a total of 4 each for two states to 162 for another. USDA also provided states considerable flexibility in how they designated sentinel plots. In some cases, fields were planted as stand-alone surveillance fields, while in other cases sentinel plots were part of commercial fields. Such differences might affect the extent to which crops are accessible for crop monitors. While there is no evidence that this variation in plots affected data reporting in 2005, a lack of consistency in designating sentinel plots could ultimately affect the quality of data that are essential to alerting USDA to the initial presence and spread of ASR in future years.

Diagnostic testing was important for confirming suspected cases of ASR because several plant diseases resemble it and because U.S. growers have little experience in identifying ASR. States are to send the first suspected sample of ASR on soybeans and each new host to USDA's APHIS laboratory in Beltsville, Maryland, for confirmation testing. However, subsequent samples submitted within each state may be tested at either a state or National Plant Diagnostic Network laboratory. According to our survey of officials in the 31 soybean-producing states, state diagnostic laboratories received about 12,100 samples for ASR research and screening. Of these samples, about 9,500 were submitted for routine research or monitoring and about 2,600 were submitted specifically because of suspected ASR. Of the total number of samples tested, only 877, or about 7 percent, tested positive for ASR. To screen samples suspected of having ASR, states primarily relied on morphological examinations—i.e., examining the spores from lesions on leaf samples, visually or under a microscope. However, in selected cases, the states conducted advanced screening using the polymerase chain reaction (PCR) test or an enzyme-linked immunosorbent assay (ELISA) test to detect the presence of ASR.
Table 1 summarizes the results of states' tests performed on samples suspected of having ASR in 2005. The National Plant Diagnostic Network issued standard operating procedures for how to submit samples to a diagnostic laboratory and procedures for initially screening the samples and conducting advanced screening. However, the procedures did not specify how often or under what circumstances the laboratory should conduct advanced screening to confirm an initial diagnosis of ASR. Advanced screening might be warranted because a morphological examination of a sample in the early stages of the disease may fail to detect ASR. Also, in some cases, diagnosticians may have limited experience in detecting the disease morphologically. Conversely, officials in some states where ASR appeared to be no real threat in 2005 may have believed that advanced screening was not necessary. Officials in 13 of the states that we surveyed reported that a morphological examination was the only type of testing they performed on samples of suspected ASR. Officials in 13 states also indicated that they performed a morphological examination as well as at least one other type of advanced screening test, and officials in 3 states reported that they only performed advanced screening on suspected cases of ASR. The various methods used to diagnose ASR, and hence to report the results to the Web site, could determine the difference between detecting the disease early, when it is most easily treated, and delaying detection until it is well established.

As of October 31, 2005, state laboratories had spent an estimated total of $465,800 on screening and testing samples for ASR; about $14,600 of this cost was offset by the fees the state laboratories charged for sample testing. Most of the state officials we surveyed reported that their states had sufficient funding and staffing to perform diagnostic screening and testing for ASR during 2005. For 2006, officials from 30 of the states that we surveyed indicated that they plan to have the same number or more laboratory staff. However, officials from nine of the states indicated that they still lacked sufficient equipment to perform recommended diagnostic testing.

In addition to testing field samples, USDA sampled rainwater to help in the early detection of ASR. With these samples, scientists can detect spore concentrations before ASR is apparent on the plant. Positive samples were found in most of the regions tested, including the Midwest and the Northeast, where ASR was not apparent on plants. USDA is using this information for research and plans to publish its findings in a professional journal.

As a means to share information among all interested parties, in March 2005, USDA activated the public ASR information Web site, which provided disease observations, management recommendations, and scouting information, among other things. The site allows growers and other interested parties to go to a single location for real-time, county-level information on the spread of the disease in soybean-producing states. The Web site displays two maps of the United States. One map shows the counties in which researchers scouted for ASR and did not find it (in green) and counties in which ASR was confirmed (in red). Another map allows the public to click on a state and obtain information on ASR management, such as disease management, scouting results, growth stages, and forecast outlook.
In addition, the Web site provides a chronology of positive ASR detection by date confirmed, county, and state; information on the spread of ASR nationwide; and links to related Web sites. USDA has also established a restricted Web site that has several access levels for various users, such as state specialists, observers, researchers, and selected industry representatives. Among other things, this site presents information on observed and predicted disease severity and spore deposition. The Web site is restricted to prevent unauthorized users from entering erroneous data and to allow state specialists to share and assess data before distributing information to the public. The information in this restricted Web site then becomes the basis for the information on the public Web site. Officials in the soybean-producing states that we surveyed characterized USDA's Web sites (public and restricted) as useful to their states. However, several officials provided suggestions for improvement. These suggestions included making the Web sites easier to use, giving multiple officials within each state access to update the Web sites, considering the needs of the colorblind, providing better instructions to users, recognizing the efforts of extension service personnel on the Web site, considering the needs of users without high-speed Internet connections, and publicizing the Web sites to a greater extent.

To educate and assist growers and extension personnel in making decisions regarding the use of fungicides to combat ASR, state land grant university extension specialists and USDA developed a fungicide guide. The April 2005 ASR fungicide manual—Using Foliar Fungicides to Manage Soybean Rust—was developed under a USDA grant by state extension specialists and scientists at 22 U.S. universities, USDA, and the Ontario Ministry of Agriculture and Food. It was widely available to state officials, growers, and other stakeholders. The manual provides basic fungicide information, such as the chemistry involved and the brand names of different products, as well as information on factors involved in making fungicide spray decisions, including whether to use a preventative or curative fungicide, and how and when to apply the fungicide. Over 150,000 copies of the manual were distributed during 2005. In addition, extension officials in the states we visited commented that the manual was very useful to growers in deciding when and how to apply fungicides during the 2005 crop season. Using information from USDA's Web site and the ASR fungicide manual, extension service offices in five states where ASR was confirmed suggested that some growers apply fungicides for ASR at least once during the 2005 growing season.

During the 2005 growing season, state specialists could obtain ASR forecast information from various models, synthesize the information, and use it to prepare state forecast outlooks for dissemination on USDA's public Web site. These models included one supported by USDA that predicted the aerial spread of ASR spores from active source regions in the United States to other soybean-growing areas; the results of this model were published on USDA's restricted Web site. Other ASR prediction models available during 2005 included one from the North American Disease Forecast Center at North Carolina State University and another developed by researchers at Iowa State University. These models depend in large part on timely and consistent data from the states' observations and diagnostic testing results.
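As a simple illustration of how a state specialist might synthesize severity forecasts from several models into a single outlook, the sketch below computes a weighted average of hypothetical model outputs. The model names, forecast values, and weights are assumptions for illustration only and do not correspond to the actual 2005 models.

```python
# Illustrative weighted-average synthesis of severity forecasts from
# several prediction models. Model names, forecast values, and weights
# are hypothetical; none correspond to the actual 2005 models.

def ensemble_forecast(forecasts, weights=None):
    """Combine per-model severity forecasts into one outlook value."""
    names = list(forecasts)
    if weights is None:
        weights = {name: 1.0 for name in names}   # equal weighting by default
    total_weight = sum(weights[name] for name in names)
    return sum(forecasts[name] * weights[name] for name in names) / total_weight

# Hypothetical severity forecasts for one county, from three models.
county_forecasts = {"model_a": 12.0, "model_b": 30.0, "model_c": 18.0}

print(round(ensemble_forecast(county_forecasts), 1))          # 20.0 with equal weights
print(round(ensemble_forecast(county_forecasts,
                              weights={"model_a": 2.0,        # weight model_a more heavily
                                       "model_b": 1.0,
                                       "model_c": 1.0}), 1))  # 18.0
```

Equal weighting is the simplest choice; weights could instead reflect how well each model has performed against observed detections in prior seasons.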
According to researchers who used the models, ASR prediction models tended to overstate the spread of ASR in 2005. However, this was the first full year that ASR was in the United States, and it generally takes several years to calibrate and validate models like these. One researcher has proposed that USDA use an "ensemble approach" to predict the spread of ASR in 2006—that is, combining forecast information from several ASR models rather than relying on any single one. Regardless of which models are used, inconsistencies in defining or designating sentinel plots, in diagnosing ASR, and hence in reporting the results to the Web site could affect the development of predictive models and ultimately could determine the difference between detecting ASR early, when it is most easily treated, and delaying detection until ASR is well established.

In preparation for the 2005 growing season, USDA and the 31 soybean-producing states we surveyed sponsored about 1,500 presentations, programs, and workshops on ASR. Officials in these states reported that they planned to offer over 400 presentations, programs, and workshops on ASR between November 1, 2005, and April 30, 2006. According to the state officials we surveyed, the three most important topics to include in these workshops are identification of ASR and "look-alike" diseases, availability and use of fungicides, and observations and results from 2005. During the 2005 growing season, several other outreach efforts were also conducted to help growers. For example, some states supported telephone hotlines that presented the latest information on ASR, enabling growers using cellular phones to get information when they were out in the fields. The University of Kentucky created two ASR electronic mailing lists—one that facilitated discussion and information sharing about ASR among 137 industry, state, federal, and university officials and another that facilitated communication among 108 individuals regarding the soybean rust sentinel plot and surveillance network. The American Phytopathological Society organized a symposium in November 2005—attended by over 350 participants—to discuss ASR and lessons learned during the past growing season. Several states also displayed ASR information on their state Web sites.

The national effort for ASR during the 2005 growing season was directed by senior APHIS headquarters officials, who coordinated the federal, state, and industry effort to develop the framework. Before and during the growing season, they conducted regular meetings with state specialists. According to a representative of the American Soybean Association, soybean growers were pleased with the central, coordinated effort led by APHIS to fight against ASR. In addition, 30 of the officials in the states we surveyed reported that communication was effective between their state and USDA in addressing ASR during 2005.

APHIS has been involved in preparing for ASR because of its responsibility to protect the nation from the introduction of foreign plant pests. However, now that ASR is in the United States, CSREES is responsible for managing efforts to minimize its effects. In November 2005, USDA formally announced the transition of operational responsibility for managing ASR in 2006, from APHIS to the Southern Region Integrated Pest Management Center (SRIPMC) at North Carolina State University, under the direction and coordination of CSREES.
The current ASR national system will be expanded to provide growers with information about additional legume pests and diseases in 2006. SRIPMC and USDA recently signed a cooperative agreement that will provide about $2.4 million to fund ASR monitoring, diagnostics, and communication efforts in 2006. Total funding includes $1 million for sentinel plots and $800,000 for diagnostic testing. In 2005, USDA provided nearly $1.2 million for these activities. During 2006, selected APHIS personnel will assist with the transition to CSREES. One key APHIS official will serve as the national coleader of the USDA Web site and train SRIPMC personnel, and a contractor will continue to serve as data manager to help ensure that the Web site continues to provide current, useful information. In addition, the contractor will continue to provide meteorology and modeling expertise. However, as of January 25, 2006, USDA lacked a detailed plan describing how it plans to ensure that all elements of the 2005 framework will be effectively implemented in 2006. In commenting on a draft of this report, USDA reported that it was developing, but had not completed, such a plan. Changes to the successful management approach employed by USDA in 2005 raise questions about how the program will perform in 2006. We are concerned that without a detailed action plan in place prior to the 2006 growing season, describing how CSREES will assume and manage important responsibilities, USDA may not be able to ensure that the level of coordination, cooperation, and national priority that was achieved in 2005 to address ASR will continue in 2006. As of December 31, 2005, EPA had approved a total of 20 fungicide products for treating ASR on soybeans, including 12 for which emergency exemptions were granted. Officials in the nine states where ASR was confirmed reported no problems in obtaining access to fungicide application equipment. While officials in three of these states reported that not all fungicide products were available to their growers, they did not indicate that growers experienced fungicide shortages overall. To determine which fungicides are the most effective under given conditions, USDA and private companies also supported research efforts at universities across the United States. For the longer term, USDA, universities, and private companies are conducting research to develop ASR-resistant or -tolerant soybeans but expect that these will not be available commercially for 5 to 9 years. Efforts to ensure that fungicides would be approved for treating ASR on soybeans have been under way for some time. (See app. IV for a complete list of approved fungicides.) Before March 2004, 4 fungicides had been registered for preventing ASR on soybeans. However, between March 2004 and June 2005, EPA approved another 16 fungicides—all in time for application during the 2005 growing season. These fungicides included the following: 4 registered fungicides that are preventative; and 12 fungicides for which emergency exemptions were granted. Nine of these products are curative, and 3 have both preventative and curative properties. As of November 2005, five additional fungicides for ASR were pending approval for emergency exemption, and two others were pending full registration. EPA was able to act expeditiously, in part because, in July 2002, USDA and EPA began discussing preparations for emergency exemptions and working with private industry and state departments of agriculture to prepare for ASR. 
They identified fungicides with known efficacy against ASR and fungicides that needed additional testing to gain EPA approval. During 2003, USDA’s Office of Pest Management Policy hosted several teleconferences and meetings with researchers, EPA, and state officials to discuss the development of emergency exemptions for soybeans and other legumes. In November 2003, EPA suggested a procedure for states to follow for requesting emergency exemptions. That is, although each state typically submits a unique request to EPA for an emergency exemption, EPA allowed Minnesota and South Dakota to prepare a joint request for treating ASR on soybeans and allowed other states to copy this request. USDA also began contacting states to offer help preparing requests for emergency exemptions. As a result of these preparations, when ASR was first confirmed in the continental United States in November 2004, 26 states, representing 99 percent of the U.S. soybean acreage, had requested emergency exemptions for fungicides to treat ASR, and 25 of these states had received at least one emergency exemption. Furthermore, although emergency exemptions are usually granted for a single year, EPA approved the exemptions for ASR fungicides through November 2007, as quarantine emergency exemptions. These exemptions may be authorized for up to 3 years in an emergency condition to control the introduction or spread of any pest new to or not known to be widely prevalent or distributed within and throughout the United States. Consequently, in 2007, states will have to renew their emergency status, with the support of the manufacturer; work to have these fungicides registered; or use already registered fungicides. In addition to these efforts, in April 2004, USDA met with the American Soyfoods Association of North America to plan efficacy research on chemicals permitted to treat organically grown soybeans and to discuss organic certification of fields treated with conventional chemicals. Furthermore, by August 2005, EPA had established maximum residue levels for the exempted fungicides in time for soybean growers to export their products to foreign markets. At the November 2005 ASR symposium, EPA announced that it remains receptive to receiving future registration and exemption requests for additional fungicides to treat ASR. According to state officials with whom we spoke, the variety of fungicides available as a result of the exemption process helped reduce the risk that ASR would become resistant to fungicides and ensured that a supply of fungicides would be available to growers. In terms of the availability of application equipment and fungicides in 2005, the officials we surveyed in the nine states where ASR was confirmed reported no problems with access to equipment. Although officials in three of these states indicated that their growers did not have access to all fungicide products, none of the states reported that growers encountered any shortages of fungicides to treat their crop. State, EPA, and USDA officials cautioned that actual fungicide inventory and availability depends largely on market forces outside their control. 
These officials also stated that it is not possible to determine the sufficiency of fungicides and equipment for 2006 because of uncertainties about (1) the timing and potential spread of ASR into northern states, which do not generally apply fungicides on soybeans and therefore may not have supplies and equipment available and (2) the potential need in southern states for growers to use fungicides and equipment for other major crops, such as peanuts, thereby creating a shortage for use on soybeans. USDA began evaluating fungicide efficacy for ASR in 2001, and it supported its own field work in this area from 2003 through 2005 in Africa and South America with funding from private companies and the United Soybean Board. In addition, beginning in 2002, the agency began contacting approximately 20 companies and trade organizations to participate in efficacy trials for the registration of ASR fungicides at several U.S. universities and international locations. Efficacy trials examine the impact fungicides have on factors such as crop yield and disease severity by testing the effectiveness of fungicides under various spray conditions, such as volume, pressure, and application frequency; effectiveness of fungicides under different crop conditions, such as maturity, row spacing, and plant varieties; and impact of various application techniques and equipment on such things as coverage and penetration of the crop canopy. Figure 4 shows the application of fungicides at a trial in 2005. Conducting trials at different locations allows researchers to study the effectiveness of fungicides and application methods in different climates and on different strains of ASR. EPA can use efficacy data from these trials to evaluate fungicides for emergency exemptions. USDA started posting fungicide efficacy data, including some data from private companies, to a USDA research Web site in 2003. According to agency officials, these trials showed that (1) fungicides reduced crop losses, (2) some fungicides were more effective than others, and (3) different fungicides with different active ingredients were necessary to combat ASR because what works best in one region may not be as effective in another. In terms of equipment, the trials showed that better coverage of the plant using higher spray volume is more important for effective spraying than the type of nozzles used. USDA has not taken a position concerning the application of fungicides on soybeans not threatened by ASR, although some private companies have promoted such an approach. Most recently, in 2005, researchers at southern universities conducted efficacy trials on several fungicides approved by EPA and some fungicides only approved for use in Brazil. Many of these trials were conducted in areas infected with ASR. These trials produced mixed results, but researchers concluded that timing the first spray may be the most critical factor when applying fungicides to treat ASR. Fungicide trials were also conducted in 2005 in 13 northern states where ASR has not yet been confirmed. The researchers conducting these trials focused on questions such as whether fungicides improved soybean yields in the absence of ASR. These trials produced inconsistent data, in part because different protocols—for example, plot management and fungicide application techniques—were followed; and the researchers concluded that uniform protocols should be established for future trials to ensure consistent data collection and interpretation. 
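One way to support the uniform protocols that researchers called for above is to enumerate every trial treatment from a shared set of factor levels, so that each site collects data on the same combinations. The sketch below is illustrative only; the factor names and levels are assumptions, not the protocols actually used in the 2005 trials.

```python
# Illustrative enumeration of a shared (full-factorial) set of trial
# treatments from common factor levels, so each participating site runs
# the same combinations. Factor names and levels are hypothetical.
from itertools import product

factors = {
    "fungicide_class": ["preventative", "curative"],
    "spray_volume_gpa": [10, 15, 20],          # gallons per acre
    "application_timing": ["early", "at first detection"],
}

# Build every combination of factor levels as a labeled treatment.
treatments = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(treatments))   # 12 treatment combinations (2 x 3 x 2)
print(treatments[0])     # {'fungicide_class': 'preventative',
                         #  'spray_volume_gpa': 10,
                         #  'application_timing': 'early'}
```

Generating the treatment list centrally and distributing it to participating sites is one way to keep data collection and interpretation consistent across locations.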
Breeding commercial soybeans with resistance to or tolerance of ASR is generally regarded as the best long-term solution for managing the disease; and USDA, several universities, and private companies are currently working to develop such soybeans. Breeding new varieties of soybeans and making them commercially available takes time—up to 9 years—according to USDA officials. The Agricultural Research Service has approximately 16,000 soybean lines in its soybean germplasm collection. As of June 2005, researchers had finished an initial screening of these lines. Approximately 800 lines were identified as having some form of resistance or tolerance to ASR and are currently being evaluated using more advanced screening tests. Subsets of these 800 lines are also being evaluated in field trials in collaboration with researchers in Africa, Asia, and South America. An intermediate screening of these 800 lines was completed and the results published in a scientific journal in January 2006. Some of these lines are resistant to only a few of the known strains of ASR. USDA researchers hope to eventually find lines that are resistant to all known strains. The United Soybean Board and the Iowa Soybean Association and Promotion Board have provided financial support for this work.

In addition to the sheer volume of germplasm that researchers need to examine, other factors have also contributed to the time taken to identify soybean varieties that are resistant or tolerant to ASR. Before USDA removed ASR from the select agents and toxins list under the Agricultural Bioterrorism Protection Act of 2002 in March 2005, USDA's research in the United States was limited to a few containment facilities. Researchers could not conduct yield loss studies because the available containment facilities did not have enough room to allow soybean plants to reach maturity. The limited space in containment facilities has also slowed USDA's ability to germinate and study foreign strains of ASR (see fig. 5). ASR's arrival in the United States should facilitate USDA's efforts to study the disease because researchers in affected states can now work with ASR and soybean plants under field conditions. The Agricultural Research Service expects to have soybean germplasm with some level of resistance to ASR within 5 years. It intends to work with industry through cooperative research and development agreements and other mechanisms to provide access to this germplasm so that private companies can develop commercial soybeans with resistance or tolerance to ASR. Commercialization may take an additional 2 to 4 years. According to agency researchers, it is difficult to develop germplasm that is completely resistant to all strains of ASR; therefore, the most effective approach will be to develop tolerant soybeans that give growers more time each season to prepare for and manage ASR.

The Agricultural Research Service is also conducting research to examine the genetic variability among the various strains of ASR. The expected outcomes of this project are the identification of genes required for the infection process and disease cycle, as well as the discovery of potential targets for new fungicides. Both the Agricultural Research Service and the United Soybean Board have supported this research, and the agency has also worked with the Department of Energy's Joint Genome Institute. In April 2005, the Agricultural Research Service issued a National Strategic Plan for the Coordination and Integration of Soybean Rust Research.
It began to develop this strategic plan at a meeting held in December 2004, 3 weeks after the disease was confirmed in the continental United States. USDA, together with the United Soybean Board and the North Central Soybean Research Program, held a national workshop with more than 90 soybean experts to set priorities, identify strategic goals for ASR research, and develop a national research plan. This plan is linked to the agency’s overall strategic plan and coordinated with other USDA agencies. The research plan also promises project review and program assessment by independent peers via annual research progress reports. Of the research plan’s six strategic goals, three aim directly at developing ASR resistance or tolerance: develop new, high-yielding germplasm with resistance to soybean rust; determine the genetic basis for ASR’s virulence and determine the genetic basis for soybeans’ resistance to ASR; and improve understanding of ASR’s biology and epidemiology. The Agricultural Research Service has since developed a draft of an action plan intended to measure the progress of the research plan initiative. Effective, timely communication and coordination at the federal, state, and local levels, coupled with favorable weather conditions, were keys to limiting the impact of ASR on U.S. soybean production in 2005. Indeed, in many areas of the country, soybean production exceeded expectations, in part because producers were more attentive to their crop. While the experience in 2005 was favorable, it is unlikely that the fungus will be eliminated. Accordingly, it will still be important for all agricultural stakeholders to remain vigilant and to consistently monitor, test, and report on ASR and to develop models for predicting the spread of the disease. Going forward, however, differences in how researchers monitor, test, and report on the disease could detract from the value of future prediction models. The 2005 ASR experience also highlights the importance of preparing for, coordinating, and monitoring a new agricultural disease. The lessons learned from managing ASR could be valuable in minimizing the effects of other agricultural pests that threaten crops and can cause significant economic losses. In this regard, a clear plan of action and strong leadership in coordinating the actions of all stakeholders was important in 2005 and will continue to be critical to the success of efforts to monitor, report, and manage the spread of ASR in 2006. We are making two recommendations to the Secretary of Agriculture to ensure continued strong leadership and improved efforts to predict and limit the spread of ASR. To ensure reliable, quality reporting on the spread of the disease, USDA should provide additional guidance to state ASR program managers and monitors on the timing and frequency of reporting on the incidence of ASR, the designation of sentinel plots, and when to use advanced diagnostic testing. To ensure that ASR continues to receive national priority and the same level of effective coordination and cooperation evidenced in 2005, USDA should develop a detailed action plan, prior to the beginning of the growing season, describing how it will manage ASR in 2006. We provided a draft of this report to USDA and EPA for their review and comment. In oral comments, EPA told us that the factual information in our draft report is correct and provided technical comments, which we incorporated as appropriate. 
In written comments, USDA said that the report fairly describes USDA’s preparations related to ASR. In addition, it stated that both of the report’s recommendations reflect its ongoing cooperative efforts with states to combat the disease (see app. VI). USDA also provided a number of technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate committees; the Secretary of Agriculture; the Administrator of EPA; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. To determine the U.S. Department of Agriculture’s (USDA) strategy for minimizing the effects of Asian Soybean Rust (ASR) now that the disease has arrived in the continental United States and the lessons learned that could be used to improve future efforts, we interviewed officials from USDA’s Animal and Plant Health Inspection Service (APHIS), the Cooperative State Research, Education, and Extension Service (CSREES), the Agricultural Research Service, the Farm Service Agency (FSA), and the Risk Management Agency (RMA) to identify efforts that have been implemented since November 2004. We also surveyed state officials in the 31 soybean-producing states that were included in USDA’s sentinel plot program to obtain information on their efforts to minimize the effects of ASR through education, training, surveillance, and testing and to obtain information about the lessons learned during the 2005 crop year. The survey included questions about the states’ university extension programs; sentinel plots, monitoring, and scouting; diagnostic screening and testing; fungicide use; and perceptions of USDA’s efforts. Prior to implementing our survey, we pretested the questionnaire with several state officials (university extension faculty) in Florida and Alabama. During these pretests, we interviewed the respondents to ensure that (1) the questions were clear and unambiguous, (2) the terms we used were precise, and (3) the survey did not place an undue burden on the staff completing it. The questionnaire was also reviewed by a GAO survey expert. We made changes to the questionnaire based on these pretests. We received responses from all 31 states surveyed. The state information presented in this report is based on information obtained from this survey and interviews with state officials. Appendix II contains the state questionnaire and aggregated responses. We conducted site visits to Alabama, Georgia, and Florida, where we inspected ASR-infected soybeans while touring sentinel plots, a fungicide efficacy trial, diagnostic facilities, and a commercial soybean field with state extension officials. We interviewed university extension faculty and laboratory diagnosticians in these states, as well as in Indiana and Iowa, to gain more in-depth information about their efforts to mitigate the effects of ASR and test for the disease. We also toured USDA diagnostic facilities in Beltsville, Maryland. 
In addition, we interviewed industry and trade representatives to discuss the adequacy of available fungicides and application equipment. Finally, we attended the November 2005 National Soybean Rust Symposium in Nashville, Tennessee to determine stakeholders’ assessment of USDA’s efforts. To determine the progress that USDA, the Environmental Protection Agency (EPA), and others have made in developing, testing, and licensing fungicides to treat ASR and in identifying and breeding ASR-resistant or -tolerant soybeans, we interviewed officials from EPA and state departments of agriculture to obtain information about their efforts to license fungicides to treat ASR. In addition, we asked about the adequacy of fungicide supplies and equipment when surveying the 31 soybean- producing states that were included in USDA’s sentinel plot program. We interviewed Agricultural Research Service personnel as well as researchers from academia and industry and reviewed related reports and studies regarding efforts to research fungicide efficacy and identify and breed ASR- resistant or -tolerant soybeans. We also toured USDA research facilities at Ft. Detrick, Maryland. We conducted our work between May 2005 and January 2006 in accordance with generally accepted government auditing standards. Please coordinate with others at your state’s land grant university or in your state’s Department of Agriculture to complete this questionnaire. Please fax your completed questionnaire to: 202-512-2502 (alternate #: 202-512-2514) by November 14. For your convenience, the last page of this questionnaire is a fax cover sheet. Part 1: Extension Programs Someone knowledgeable about your state’s university extension program should answer Questions 1 - 2. 1. Do you plan to offer any Asian Soybean Rust (ASR) presentations, programs, or workshops for growers between November 1, 2005, and April 30, 2006? (For number, enter 0 if none. If you do not know the exact number, please provide an estimate.) Yes How many? Uncertain Go to Q3 N = 30 (Not all respondents answered all parts.) 2a. Which of the following topics will likely be included in upcoming (that is, between November 1, 2005, and April 30, 2006) extension presentations, programs, or workshops on ASR? (Please check ’Will likely be included’ or ’Will likely not be included’ for each topic.) a. b. Identification of ASR and “look-alike” diseases Types and purposes of fungicides c. d. f. Insurance coverage or disaster funding for losses due to ASR g. Observations and results from 2005, “Lessons Learned” 2b. Which three of the above topics do you consider the most critical to include? (In Column 3, please check the three topics you consider the most critical to include.) Part 2: Sentinel Plots, Monitoring, and Scouting Someone knowledgeable about your state’s sentinel plots and monitoring and scouting programs should answer Questions 3 – 14. 3. How many USDA-sponsored sentinel plots were in 3a. How many of these plots used only soybeans as the host? (Enter 0 if none.) 3b. How many of these plots used only other (nonsoybean) plants as hosts? (Enter 0 if 3c. How many of these plots used both soybeans and other (nonsoybean) plants as hosts? (Enter 0 if none.) ) Rows 3a, 3b, and 3c should add up to the total number above. 4. How many other sentinel plots (e.g., funded or sponsored by state government, the North Central Soybean Research Program, the United Soybean Board, or by other grants.) were in your state in 2005? (Enter 0 if none.) 4a. 
How many of these plots used only soybeans as the host? (Enter 0 if none.) 4b. How many of these plots used only other (nonsoybean) plants as hosts? (Enter 0 if none.) 4c. How many of these plots used both soybeans and other (nonsoybean) plants as hosts? (Enter 0 if none.) Rows 4a, 4b, and 4c should add up to the total number above. Other-sponsored plots are funded or sponsored by state government, the North Central Soybean Research Program, the United Soybean Board, or by other grants. 7. How many individuals in your state worked, on a regular basis, as monitors for 2005 sentinel plots funded or sponsored by USDA or other sources? (Please indicate number of monitors in each category. If none, enter 0.) Hrs./Wk. a. Field-based extension or research personnel b. Campus-based extension or research personnel c. Private, independent crop consultants d. State Department of Agriculture personnel e. Agribusiness employees or consultants h. Other(s) (Please specify below.) Three states listed other responses, including master gardeners, students, retired extension specialists, and temporary employees. 8. Is the number of sentinel plot monitors planned for your state in 2006 the same, more, or less than in 2005? (Please check one.) Same as 2005 More than 2005 9. In your opinion, how effective was the sentinel plot monitoring program as an early warning system in your state? (Please check one.) f. Not applicable 10. In your opinion, which of the following was the most important benefit of your state’s sentinel plot program? (Please check one.) a. Provided an early warning network b. Supported research to quantify spore c. Provided data for epidemiological d. Provided information to guide e. Other (Please describe below.) 11. Did any of the following factors limit your state’s effectiveness in monitoring sentinel plots? (Please check ‘Limited Effectiveness’ or ‘Did Not Limit Effectiveness’ for each factor.) a. Insufficient funds to cover salaries of monitors b. Insufficient funds for travel and travel-related expenses c. Insufficient number of qualified personnel available d. Lack of mobile diagnostic equipment e. Two states listed timeliness of receiving funds and another listed the need to hire a plant pathologist to test samples. 12. Assuming adequate funding, how many sentinel plots are planned in your state for 2006? (Enter 0 if none.) 12a. How many of these plots will be USDA- sponsored sentinel plots? (Enter 0 if none.) 12b. How many of these plots will be sponsored through other sources? (Please enter 0 if none. If not 0, specify source of funding below.) Rows 12a and 12b should add up to the total number above. Two states representing a total of 70 plots did not separate their plots between 12a and 12b so these amounts do not equal the total amount for question 12. Other-sponsored plots include plots funded or sponsored by state government, the North Central Soybean Research Program, the United Soybean Board, or by other grants. N = 31 13. Do you plan to make any major changes in how you will manage your sentinel plots for next year? (Please check one. If ‘Yes,’ please explain below.) Yes What changes do you plan to make? (Please explain below.) Various changes are planned, such as planting different maturity groups, hiring additional monitors, changing the monitoring frequency, and examining more samples in the laboratory. 14. About how often did your state typically update the USDA Soybean Rust Web sites with monitoring data? (Please check one in each column.) 
(Password Protected) Part 3: Diagnostic Screening Someone knowledgeable about your state’s diagnostic screening for ASR should answer Questions 15 – 23. 15. From January 1, 2005, through October 31, 2005, how many samples were received by your state’s diagnostic lab(s) for ASR research and screening purposes? (Enter 0 if none.) 15a. How many of these samples were submitted for routine research or monitoring purposes? (Enter 0 if none.) 15b. How many of these samples were submitted because of suspected ASR? (Enter 0 if none.) Rows 15a, 15b, and 15c should add up to the total number above. 15c. Rows 15e-1, 15e-2, 15e-3 and 15e-4 should add up to the number of samples in 15b, above. ELISA 16a. Of those samples identified in Question 15b (above), where were the samples collected and what was the host crop? (Please enter number of samples for each type of host for each location. If none, enter 0.) Location Where Screening Samples Were Collected 16b. If you indicated that samples were screened from ‘Other’ hosts or at ‘Other’ locations, please specify host and/or location below. Ten states listed other hosts, such as cowpeas, clover, snap beans, and lima beans, which were screened in roadside mobile plots and field borders where soybeans are commercially grown. N=31 (Not all respondents answered all parts.) 17a. How much did your state’s diagnostic lab(s) spend on screening and testing samples for ASR from January 1, 2005, through October 31, 2005? In your answer include equipment, supplies (e.g., slides), and salaries. (Enter 0 if none. If you do not know the exact amount, please provide an estimate.) 17b. Was any of the cost of screening and testing offset by fees charged for testing samples? (Please check one.) How much was offset? 18. Did your state have sufficient funding to perform diagnostic screening and testing for ASR in 2005? (Please check one.) No 19. How many laboratory staff, including state laboratory staff, performed diagnostic screening and testing for ASR, on a regular basis, during the 2005 season? (Please enter number. Enter 0 if none.) 20. Was the number of laboratory staff sufficient to perform diagnostic testing for ASR in 2005? (Please check one.) 21. Will the number of laboratory staff planned for 2006 be the same as for 2005? (Please check one.) What additional equipment was needed? (Please specify below.) Six states listed PCR equipment and other sample testing equipment and supplies, two listed microscopes, and another listed ELISA. 23. Does your state plan to add laboratory equipment for screening or diagnostic testing for ASR in 2006? (Please check one.) What additional equipment do you plan to obtain? (Please specify below.) Fifteen states responded, and most listed PCR equipment. Other equipment listed includes microscopes, ELISA plate readers, and test kits for screening purposes. No Someone knowledgeable about fungicide application in your state should answer Questions 24 - 28. 24. From January 1, 2005, through October 31, 2005 was ASR confirmed in your state? (Please check one.) No Go to Question 27. 26. Were the suggestion(s) or recommendation(s) for applying fungicides posted on USDA’s Soybean Rust Web sites? (Please check one.) 27. Were there any problems involving equipment availability for ASR fungicide spraying in your state? (Please check one.) Yes 28. Were there any problems involving the availability of fungicides for ASR in your state? (Please check one.) If Yes, please explain problems with fungicide availability. (Please use the space below.) 
Three states where ASR was detected in 2005 noted that not all fungicides were available to growers in their states. Another state where ASR was not detected made a similar comment, while another state said that the use of fungicides in the south led to a shortage of fungicides for the wheat crop in the north. Part 5: USDA's 2005 ASR Program Someone knowledgeable about USDA's efforts to minimize the impact of ASR in 2005 should answer Questions 29 – 33. 29. In your opinion, how effective were USDA's efforts to minimize the impact of ASR? (Please check one.) f. Uncertain 30. If you have any suggestions for improving USDA's ASR program, please briefly explain in the space below. We received 13 comments regarding suggestions for improving USDA's ASR program. For example, some states commented that increased funding is needed or needs to be provided earlier. Another state noted more suspected ASR samples need to be examined by microscope because of look-alike diseases. One state said that USDA needs to determine and specify what sentinel plot monitoring data is essential for modeling purposes, and those monitoring the plots should adhere to a strict methodology in collecting the data. Another state suggested that the program should be reduced in scope until the economic impact is greater. 31. In your opinion, how effective was communication between USDA and your state in addressing ASR during 2005? (Please check one.) c. d. e. f. Uncertain 32. In your opinion, to what extent were USDA's Soybean Rust Web sites useful to your state? (Please check one in each column.) (Password Protected) a. Very great extent e. Little or no extent 33. If you have specific suggestions for improving USDA's Soybean Rust Web sites, please note them in the space below. Twelve states provided comments. Several states suggested technical improvements to USDA's Web site for improved ease of use, and one state suggested that improvements were needed for growers using a dial-up connection to download maps. One state suggested that the USDA Web site should consider using colors other than red and green to aid males who are color blind. One state commented that USDA's public Web site needs more publicity, and another state suggested that land grant universities and extension educators be given more credit on the Web site. Thank you for taking the time to answer this questionnaire. No questionnaire of this type can cover every relevant topic. If you wish to expand your answer(s) or comment on any other topic related to ASR, please feel free to attach additional pages or to E-mail us. Our report will be available early next spring. We will notify you when it is issued and provide you with a free copy. (Appendix figure captions tracing ASR's 2005 spread: The year's first confirmation of ASR occurred in Florida. ASR was first confirmed in Georgia. ASR was first confirmed in Alabama. ASR was first confirmed in Mississippi. ASR was first confirmed in South Carolina. The last county confirmation of ASR for the month of September occurred in Georgia. ASR was first confirmed in Louisiana and North Carolina. The year's last confirmation of ASR occurred in Alabama. The first confirmations of ASR in Texas and Kentucky occurred in November.) In addition to the contact named above, Ronald E. Maxon, Jr., Assistant Director; James L. Dishmon, Jr.; Chad M. Gorman; Lynn M. Musser; Deborah S. Ortega; Paul J. Pansini; Carol Herrnstadt Shulman; and Amy E. Webbink made key contributions to this report. Agriculture Production: USDA's Preparation for Asian Soybean Rust. GAO-05-668R. Washington, D.C.: May 17, 2005. | In 2005, U.S. 
agriculture faced potentially devastating losses from Asian Soybean Rust (ASR), a fungal disease that spreads airborne spores. Fungicides approved by the Environmental Protection Agency (EPA) can protect against ASR. In 2005, growers in 31 states planted about 72.2 million soybean acres worth about $17 billion. While favorable weather conditions limited losses due to ASR, it still threatens the soybean industry. In May 2005, GAO described the U.S. Department of Agriculture's (USDA) efforts to prepare for ASR's entry, (Agriculture Production: USDA's Preparation for Asian Soybean Rust, GAO-05-668R). This report examines (1) USDA's strategy to minimize ASR's effects in 2005 and the lessons learned to improve future efforts and (2) USDA, EPA, and others' efforts to develop, test, and license fungicides for ASR and to identify and breed soybeans that tolerate it. USDA developed and implemented a framework--with federal and state agencies, land grant universities, and industry--that effectively focused national attention on ASR in 2005 and helped growers make informed fungicide decisions. The framework was effective in several ways. For example, sentinel plots--about 2,500 square feet of soybeans or other host plants planted early in the growing season in the 31 soybean-producing states--provided early warning of ASR. Officials in 23 of 25 states GAO surveyed reported that this effort was effective. Researchers could also promptly identify and report on the incidence and severity of the disease on a USDA Web site, alerting officials and growers to ASR's spread. Going forward, however, differences in how researchers monitor, test, and report on the disease could lead to incomplete or inaccurate data and detract from the value of future prediction models. For example, models to forecast ASR's spread partly rely on states' observations of sentinel plots. USDA asked states to report results weekly, but updates ranged from 4 reports, in total, during the growing season in one state to 162 reports in another state. Inconsistencies also occurred in the designation and placement of plots and in the testing of samples for ASR. Further, changes to the successful management approach employed by USDA in 2005 raise questions about how the program will perform in 2006. For 2006, most operational responsibility for ASR will shift from USDA headquarters to a land grant university. GAO is concerned that USDA's lack of a detailed action plan describing how program responsibilities will be assumed and managed in 2006 could limit the effectiveness of ASR management for this year. EPA, USDA, and others increased the number of fungicides growers can use to combat ASR while efforts continue to develop ASR-tolerant soybeans. As of December 2005, EPA had approved 20 fungicides for treating ASR on soybeans, including 12 that had emergency exemptions. According to officials in the nine states where ASR was confirmed in 2005, growers had access to fungicides. USDA, universities, and private companies are also developing ASR-tolerant soybeans and have identified 800 possible lines of resistant soybeans, out of a total of 16,000 lines. USDA estimates it may take 5 to 9 years to develop commercially available ASR-tolerant soybeans. |
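The reporting-frequency problem noted above (weekly updates were requested, but season totals ranged from 4 reports in one state to 162 in another) lends itself to a simple automated check. The sketch below is illustrative only and is not part of USDA's sentinel plot system; the state names, report dates, season window, and the weekly-cadence rule are hypothetical assumptions that merely mirror the weekly reporting USDA requested.

```python
from datetime import date

# Hypothetical report dates per state; real data would come from the
# monitoring feeds behind USDA's soybean rust Web sites.
REPORTS = {
    "State A": [date(2005, 5, 2), date(2005, 5, 9), date(2005, 5, 16)],
    "State B": [date(2005, 6, 1), date(2005, 8, 15)],
}

SEASON_START = date(2005, 5, 1)   # assumed growing-season window
SEASON_END = date(2005, 10, 31)

def weeks_in_season(start: date, end: date) -> int:
    """Number of 7-day blocks in the assumed growing season."""
    return ((end - start).days // 7) + 1

def weeks_with_reports(dates, start: date) -> int:
    """Count distinct 7-day blocks of the season that contain at least one report."""
    return len({(d - start).days // 7 for d in dates if d >= start})

def flag_infrequent_reporters(reports, start, end):
    """Return states whose sentinel-plot updates fall short of a weekly cadence."""
    expected = weeks_in_season(start, end)
    flagged = {}
    for state, dates in reports.items():
        covered = weeks_with_reports(dates, start)
        if covered < expected:
            flagged[state] = (covered, expected)
    return flagged

if __name__ == "__main__":
    for state, (covered, expected) in flag_infrequent_reporters(
            REPORTS, SEASON_START, SEASON_END).items():
        print(f"{state}: reported in {covered} of {expected} weeks")
```

A check of this kind could support the report's recommendation that USDA give states clearer guidance on the timing and frequency of reporting, since it makes gaps in the weekly cadence visible before they degrade the prediction models.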
As part of our audit of the fiscal years 2005 and 2004 CFS, we evaluated Treasury’s financial reporting procedures and related internal control, and we followed up on the status of Treasury and OMB corrective actions to address open recommendations regarding the process for preparing the CFS that were in our prior years’ reports. In our disclaimer of opinion on the fiscal year 2005 CFS, which is included in the fiscal year 2005 Financial Report of the United States Government, we discussed material deficiencies relating to Treasury’s preparation of the CFS. These material deficiencies contributed to our disclaimer of opinion on the CFS and also constitute a material weakness in internal control, which contributed to our adverse opinion on internal control. We performed sufficient audit procedures to provide the disclaimer of opinion in accordance with U.S. generally accepted government auditing standards. This report provides the details of the additional weaknesses we identified in performing our fiscal year 2005 audit procedures related to the process for preparing the CFS and our recommendations to correct those weaknesses, as well as the status of corrective actions taken by Treasury and OMB to address recommendations in our prior reports. We requested comments on a draft of this report from the Director of OMB and the Secretary of the Treasury or their designees. OMB provided oral comments, which are discussed in the Agency Comments and Our Evaluation section of this report. Treasury’s comments are reprinted in appendix II and are also discussed in the Agency Comments and Our Evaluation section. As discussed in our fiscal year 2005 audit report, fiscal year 2005 was the second year that Treasury used GFRS to collect agency financial statement information taken directly from federal agencies’ audited financial statements. The goal of GFRS is to be able to directly link information from federal agencies’ audited financial statements to amounts reported in the consolidated financial statements and resolve many of the weaknesses we previously identified in the process for preparing the consolidated financial statements, a goal we strongly support. For both the fiscal year 2005 and 2004 reporting processes, GFRS was able to capture agency financial information submitted to Treasury, but GFRS is still under development and not at the stage that it could be used to fully compile the consolidated financial statements from the information captured. As we have reported in the past, Treasury’s process for compiling the CFS does not yet fully ensure that financial information from federal agencies’ audited financial statements and other financial data directly link to amounts reported in the CFS. In our fiscal year 2005 audit report, we noted that Treasury made progress in demonstrating amounts in the Balance Sheet and the Statement of Net Cost were consistent with federal agencies’ audited financial statements prior to eliminating intragovernmental activity and balances. However, about 25 percent of the significant federal agencies’ auditors reported internal control weaknesses related to the processes the agencies perform to provide financial statement information to Treasury for preparing the consolidated financial statements. 
In our prior report, we recommended that as Treasury continues to design and further implement its new process for compiling the CFS, the Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to modify Treasury’s closing package to (1) require federal agencies to directly link their audited financial statement notes to the CFS notes and (2) provide the necessary information to demonstrate that all of the five principal consolidated financial statements are consistent with the underlying information in federal agencies’ audited financial statements and other financial data. Progress was made during fiscal year 2005. Treasury has been continuing to design and further implement its new process for compiling the CFS with the development of GFRS. We continue, though, to be concerned that the disciplined processes necessary to reduce risks to acceptable levels have not yet been effectively implemented. For example, Treasury moved forward with the project before ensuring that certain key elements, such as a concept of operations, were developed or even defining and documenting the financial reporting weaknesses that were expected to be addressed by the system. Not effectively implementing such disciplined processes creates an unnecessary risk that the system will cost more and take longer than expected to deploy, while not providing all of the intended system functionality. The implementation of any major system, such as GFRS, is not without risk; however, organizations that follow and effectively implement accepted best practices in systems development and implementation have been shown to reduce these risks to acceptable levels. A more detailed discussion of our assessment of Treasury’s ongoing effort to develop and implement GFRS, along with recommendations to reduce the risk noted above, can be found in a separate report. The CFS includes 2 years of financial information. Because comparative financial statements are intended to furnish useful data about the differences in activity and balances between the 2 years shown, consistency in how amounts are reported for the 2 years is a major factor in creating comparability. We found that Treasury lacked a process to ensure that consolidated financial statements and notes for fiscal years 2005 and 2004 were consistently reported and therefore comparable. During fiscal year 2005, Treasury requested that agencies resubmit fiscal year 2004 financial information along with their fiscal year 2005 financial information. Some agencies resubmitted fiscal year 2004 amounts in fiscal year 2005 that differed from what Treasury published in fiscal year 2004. Also, certain information reported for fiscal 2004 may have required reclassification to be comparable to the fiscal year 2005 amounts. Treasury did not analyze the fiscal year 2004 information submitted in fiscal year 2005 or reclassify amounts within various financial statement line items and notes to achieve comparability and chose to continue to report what was published for fiscal year 2004. For example, the Reconciliations of Net Operating Cost and Unified Budget Deficit showed $47.8 billion and $.2 billion for property, plant, and equipment disposals and revaluations for fiscal years 2005 and 2004, respectively. However, based on the audited financial information provided by agencies to Treasury in GFRS in fiscal year 2005, the fiscal year 2004 amount should be $25.4 billion, rather than $.2 billion. 
The difference should have been reclassified from the Net Amount of All Other Differences line item on the Reconciliations of Net Operating Cost and Unified Budget Deficit. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop a process to help ensure that, for each reporting year, the 2 years of consolidated financial statements and note information presented are consistently and comparably reported, in all material respects. Treasury and OMB did not require closing packages from 4 of the 35 verifying agencies to be audited. Specifically, the Treasury Financial Manual (TFM) states that the Inspector General or a contracted independent public accountant for each federal verifying agency—except those agencies whose fiscal year ends on a date other than September 30— must opine on the closing package data entered by the Chief Financial Officer into GFRS. Because of these year-end differences, the TFM does not require the Federal Deposit Insurance Corporation’s Funds, National Credit Union Administration, and Farm Credit System Insurance Corporation—all of which have a year end other than September 30—to have their closing package data be audited. In addition, for fiscal years 2004 and 2005, OMB waived the closing package audit requirement for the Tennessee Valley Authority (TVA), which does have a September 30 fiscal year end. In these four cases, Treasury and OMB did not develop any alternative solutions that include the requirement for adequate audit procedures to be performed over significant information included in the CFS. As a result, unaudited September 30 information was included in the CFS for 4 agencies that Treasury and OMB consider to be significant. Treasury, therefore, has less assurance that the information included in the CFS for these agencies is fairly stated and directly links to the agencies’ audited financial statements. We recommend that the Director of OMB direct the Controller of the Office of Federal Financial Management, in coordination with the Treasury Fiscal Assistant Secretary, to develop an alternative solution for obtaining audit assurance related to the Federal Deposit Insurance Corporation’s Funds, National Credit Union Administration, and Farm Credit System Insurance Corporation, which includes the requirement for adequate audit procedures to be performed over significant information included in the CFS for these agencies. We also recommend that the Director of OMB direct the Controller of the Office of Federal Financial Management to consider not waiving the closing package audit requirement for any verifying agency in future years, such as TVA. GAO’s Standards for Internal Control in the Federal Government states that internal control is a major part of managing an organization and should include monitoring. Monitoring of internal control should include assessing the quality of performance over time and implementing policies and procedures for the timely follow-up and resolution of findings of audits and other reviews. The goal of these policies and procedures is to ensure that managers (1) promptly evaluate findings from audits and other reviews, including those showing deficiencies and recommendations reported by auditors and others who evaluate agencies’ operations; (2) determine proper actions in response to findings and recommendations from audits and reviews; and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention. 
However, Treasury, in coordination with OMB, had not developed policies and procedures for monitoring internal control or provided us with adequate documentation evidencing an executable plan of action and milestones for short-term and long-range solutions for certain internal control weaknesses we have previously reported regarding the process for preparing the CFS. Without effective monitoring of internal control, findings of audits may not be resolved timely and properly. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop (1) policies and procedures for monitoring internal control to help ensure that audit findings are promptly evaluated, that proper actions, such as a documented plan of action with milestones for short-term and long-range solutions, are determined in response to audit findings and recommendations, and that all actions that correct or otherwise resolve the audit findings are completed within established time frames; and (2) an executable plan of action and milestones for short-term and long-range solutions for certain internal control weaknesses we have previously reported regarding the process for preparing the CFS. The TFM prescribes how federal agencies are to submit financial information to Treasury to be used in compiling the CFS. While our planned audit procedures were not to review the entire TFM to determine if its guidance to agencies was clear, we found several areas where the TFM did not give clear guidance to federal agencies about the information that they were required to provide to Treasury, GAO, and OMB. Specifically, we found that the TFM did not give clear guidance for (1) reporting note disclosures for restricted cash, (2) reporting note disclosures for accounts payable, (3) preparing summaries of unadjusted misstatements to be included with federal agencies' closing package management representation letters, and (4) certain information to be reported by OPM to Treasury that is used to allocate costs on the Statement of Net Cost. For example, the TFM defines restricted cash as "amounts of cash that an entity holds and does not have authority to spend" and cash that is not restricted as "amounts of cash that an entity holds for which it has the authority to spend." Although these definitions are accurate at the agency level, they are not accurate at the CFS level. For example, an agency may hold cash that it does not have the authority to spend because of a certain law or regulation, but when this cash is consolidated at the governmentwide level, the federal government as a whole may have the authority to spend the cash. Therefore, this cash would appropriately be restricted at the agency level, but not at the governmentwide level. As a result of the unclear guidance, agencies reported certain financial information inconsistently. This increases the risk of incomplete and inaccurate summarization of data in the CFS. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to ensure that the TFM and any other guidance to federal agencies provide clear instructions for providing reliable data to Treasury in the following specific areas: note disclosures for restricted cash, note disclosures for accounts payable, summaries of unadjusted misstatements, and certain information reported by OPM that is used to allocate costs on the Statement of Net Cost. 
OMB and Treasury require federal agencies to reconcile selected intragovernmental activity and balances with their "trading partners" and report on the extent and results of the reconciliation efforts to Treasury. As part of the reconciliation report, federal agencies were required to categorize any material differences, as determined by Treasury, with their trading partners at fiscal year end within five categories: (1) confirmed reporting; (2) accounting methodology differences; (3) accounting or reporting errors; (4) timing difference—current year, timing difference—prior year; and (5) unknown/unreconciled. According to Treasury, confirmed reporting, the first category listed above, is intended to indicate that the agency has verified that the amount it has reported is accurate. The TFM requires a federal agency that selects the category "confirmed reporting" to provide a detailed explanation to support its response. However, we found that in many cases where a federal agency selected the "confirmed reporting" category, the agency did not provide detailed explanations. We also found cases where both trading partners selected "confirmed reporting" for the same material difference and the agencies did not provide detailed explanations for how both trading partners' amounts could be accurate when the material difference remained. When this situation occurs, we found that Treasury and OMB do not have an effective process to obtain clarification for inconsistent explanations provided and that agencies may be unclear as to when to select this category. Incorrect use of the confirmed reporting category and lack of detailed explanations may hinder efforts to identify and correct problems that federal agencies are experiencing in reconciling with their trading partners. Further, Treasury received the closing packages that contained each agency's intragovernmental activity and balances amounts on November 18, 2005, and provided agencies with reconciliation reports that showed material differences with their trading partners on November 21, 2005. Treasury and OMB also require federal agencies' IGs to annually perform agreed-upon procedures on the intragovernmental activity and balances reported in the closing package. For fiscal year 2005, Treasury required agency IGs for the 35 verifying agencies to complete and report on these agreed-upon procedures by December 2, 2005. The timing of these procedures did not optimize their value because (1) this reporting date is over 2 weeks after federal agencies' audited financial statements were required to be issued to OMB and (2) it did not allow Treasury sufficient time to review the results and make any necessary adjustments to the CFS. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to (1) provide clear guidance to federal agencies as to when the "confirmed reporting" category in the intragovernmental reconciliation report should be selected; (2) develop an effective process for obtaining clarification from federal agencies for inconsistent or incomplete explanations provided in all material difference categories; and (3) accelerate the due date for IGs to complete and report on the results of agreed-upon procedures on the intragovernmental activity and balances, or develop an alternative solution that would allow Treasury sufficient time to review the results and make any necessary adjustments to the CFS. 
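Checks for the kinds of inconsistencies described above, such as a "confirmed reporting" selection with no detailed explanation, or both trading partners confirming their amounts while a material difference remains, could in principle be automated. The following is a minimal sketch under assumed record fields and an assumed materiality threshold; it is not Treasury's or OMB's actual reconciliation process, and the agency names and amounts are hypothetical.

```python
from dataclasses import dataclass

CONFIRMED = "confirmed reporting"

@dataclass
class DifferenceRecord:
    """One side of a material intragovernmental difference, as an agency might report it."""
    agency: str
    trading_partner: str
    category: str           # e.g., "confirmed reporting", "timing difference"
    explanation: str        # detailed explanation required when confirmed reporting is selected
    reported_amount: float  # amount the agency reported (illustrative)

def missing_explanations(records):
    """Flag 'confirmed reporting' selections that lack the required detailed explanation."""
    return [r for r in records
            if r.category == CONFIRMED and not r.explanation.strip()]

def mutual_confirmations(records, materiality=100_000_000.0):
    """Flag trading-partner pairs where both sides confirmed their amounts
    even though a material difference between the amounts remains."""
    by_pair = {}
    for r in records:
        by_pair.setdefault(frozenset((r.agency, r.trading_partner)), []).append(r)
    flagged = []
    for pair, recs in by_pair.items():
        if (len(recs) == 2
                and all(r.category == CONFIRMED for r in recs)
                and abs(recs[0].reported_amount - recs[1].reported_amount) >= materiality):
            flagged.append(pair)
    return flagged

if __name__ == "__main__":
    sample = [
        DifferenceRecord("Agency A", "Agency B", CONFIRMED, "", 1_500_000_000.0),
        DifferenceRecord("Agency B", "Agency A", CONFIRMED, "Balance verified to ledger.", 1_200_000_000.0),
    ]
    print(len(missing_explanations(sample)), "record(s) missing explanations")
    print(len(mutual_confirmations(sample)), "mutually confirmed material difference(s)")
```

Flagging these cases as the reconciliation reports come in would give Treasury and OMB a concrete basis for requesting clarification from agencies before the closing package and agreed-upon procedures deadlines.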
In oral comments on a draft of this report, OMB stated that it generally agreed with the new findings and related recommendations in this report. In addition, OMB provided some technical comments, which we have incorporated as appropriate. In written comments on a draft of this report, which are reprinted in appendix II, Treasury stated that it agrees that the preparation process still needs improvement and that it is addressing many of the recommendations in our previous reports. Treasury also stated that it concurs with all of the new recommendations in this report except for the recommendation to accelerate the due date for IGs to complete the agreed-upon procedures on the intragovernmental activity and balances. For fiscal year 2006, Treasury does not plan to accelerate the due date for completing these intragovernmental agreed-upon procedures. However, Treasury stated that for fiscal year 2006, it plans to expand the audit coverage for intragovernmental activity and balances by requiring the IGs to opine on such information in their audit of the closing package, which is due to Treasury by November 17, 2006. This is an appropriate alternative solution to accelerating the due date for the IGs to complete the intragovernmental agreed-upon procedures. We have modified our recommendation to also include developing an alternative solution to address this finding. This report contains recommendations to the Secretary of the Treasury and the Director of OMB. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations. You should submit your statement to the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Government Reform within 60 days of the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Homeland Security and Governmental Affairs; the Subcommittee on Federal Financial Management, Government Information, and International Security, Senate Committee on Homeland Security and Governmental Affairs; the House Committee on Government Reform; and the Subcommittee on Government Management, Finance, and Accountability, House Committee on Government Reform. In addition, we are sending copies to the Fiscal Assistant Secretary of the Treasury and the Deputy Director for Management of OMB. Copies will be made available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact Jeffrey C. Steinhoff, Managing Director, Financial Management and Assurance, on (202) 512-2600, or Gary T. Engel, Director, Financial Management and Assurance, on (202) 512-3406. Staff contacts and other key contributors to this report are listed in appendix II. This appendix includes open recommendations from three of our prior reports: Financial Audit: Process for Preparing the Consolidated Financial Statements of the U.S. Government Needs Improvement, GAO- 04-45 (Washington, D.C.: Oct. 30, 2003); Financial Audit: Process for Preparing the Consolidated Financial Statements of the U.S. 
Government Needs Further Improvement, GAO-04-866 (Washington, D.C.: Sept. 10, 2004); and Financial Audit: Process for Preparing the Consolidated Financial Statements of the U.S. Government Continues to Need Improvement, GAO-05-407 (Washington, D.C.: May 4, 2005). Recommendations that were closed in prior reports are not included in this appendix. This appendix includes the status of the recommendations according to the Department of the Treasury (Treasury) and the Office of Management and Budget (OMB) as well as our own assessments. Explanations are included in the status of recommendations per GAO when Treasury and OMB disagreed with our recommendation. Of the 154 recommendations regarding the process for preparing the CFS that are listed in this appendix, 131 remained open as of December 2, 2005, the end of GAO’s fieldwork for the audit of the fiscal year 2005 CFS. Of these 131 recommendations, 76 relate to specific disclosures required under U.S. generally accepted accounting principles (GAAP). Treasury has submitted a proposal to the Federal Accounting Standards Advisory Board (FASAB) seeking to amend previously issued standards and eliminate or lessen the disclosure requirements for the consolidated financial statements so that GAAP would no longer require certain of the information Treasury has not been reporting. Comments on the exposure draft of a proposed FASAB standard, based on the Treasury proposal, are due March 1, 2006. Treasury stated that it is waiting for FASAB approval and issuance of this proposed standard to determine the disclosures that will be required in future consolidated financial statements. 1. See “Agency Comments and Our Evaluation” section. 2. We continue to believe that our recommendations relating to the Statement of Changes in Cash Balance from Unified Budget and Other Activities, Reconciliations of Net Operating Cost and Unified Budget Deficit, and the adjustment process are sound. Our recommendations are intended to allow flexibility in developing viable solutions to address the issues. We will consider any alternative action that Treasury may take to satisfactorily address the recommendations with which it has disagreed. See appendix I for the status of related recommendations. In addition to the above contact, the following individuals made key contributions to this report: Lynda Downing, Assistant Director; Keith Kronin; Katherine Schirano; and Taya Tasse. | For the past 9 years, since our first audit of the consolidated financial statements of the U.S. government (CFS), certain material weaknesses in internal control and in selected accounting and financial reporting practices have resulted in conditions that prevented GAO from expressing an opinion on the CFS. Specifically, GAO has reported that the U.S. government did not have adequate systems, controls, and procedures to properly prepare the CFS. Included with GAO's December 2005 disclaimer of opinion on the fiscal year 2005 CFS was its discussion of continuing weaknesses relating to the Department of the Treasury's (Treasury) preparation of the CFS. The purpose of this report is to (1) provide details of those additional weaknesses, (2) recommend improvements, and (3) describe the status of corrective actions on GAO's previous 154 recommendations. GAO identified weaknesses during its tests of Treasury's process for preparing the fiscal year 2005 CFS. Such weaknesses in the CFS preparation process impair the U.S. 
government's ability to ensure that the CFS is consistent with the underlying audited agency financial statements, properly balanced, and in conformity with U.S. generally accepted accounting principles. The weaknesses GAO identified during the fiscal year 2005 CFS audit involved the following areas: (1) directly linking audited federal agency financial statements to the CFS, (2) comparability of financial statements, (3) audit assurance over certain federal agencies' closing packages, (4) internal control monitoring, (5) consolidated reporting guidance to federal agencies, (6) reconciling of intragovernmental activity and balances, and (7) various other internal control weaknesses that were identified in previous years' audits but remained in fiscal year 2005. Of the 154 recommendations GAO reported in May 2005 regarding the process for preparing the CFS, 131 remained open as of December 2, 2005, when GAO completed its fieldwork for the audit of the fiscal year 2005 CFS. However, 76 of these 131 recommendations relate to specific disclosures required under U.S. generally accepted accounting principles. Treasury has submitted a proposal to the Federal Accounting Standards Advisory Board (FASAB) seeking to amend previously issued standards and eliminate or lessen the disclosure requirements for the consolidated financial statements so that U.S. generally accepted accounting principles would no longer require certain of the information Treasury has not been reporting. Comments on the exposure draft of a proposed FASAB standard, based on the Treasury proposal, were due March 1, 2006. GAO will continue to monitor the status of corrective actions to address open recommendations during its fiscal year 2006 audit of the CFS. |
Dual-eligible beneficiaries are a particularly vulnerable population. These individuals are typically poorer, tend to have far more extensive health care needs, have higher rates of cognitive impairments, and are more likely to be disabled than other Medicare beneficiaries. About three out of four dual-eligible beneficiaries live in the community and typically obtain drugs through retail pharmacies. Other dual-eligible beneficiaries reside in long-term care facilities and obtain drugs through pharmacies that specifically serve these facilities. In general, individuals become dual-eligible beneficiaries in two ways. One way is when Medicare-eligible individuals subsequently become Medicaid eligible. This typically occurs when income and resources of beneficiaries fall below certain levels and they enroll in the Supplemental Security Income (SSI) program, or they incur medical costs that reduce their income below Medicaid eligibility thresholds. If these Medicare beneficiaries did not sign up for a Part D plan on their own, they have no drug coverage until they are enrolled in a PDP by CMS. CMS data show that this group represented about two-thirds of new dual-eligible beneficiaries the agency enrolled in PDPs in 2006. According to CMS, it is not possible for it to predict which Medicare beneficiaries will become Medicaid eligible in any given month because Medicaid eligibility determinations are a state function. Another way individuals become dually eligible is when Medicaid beneficiaries subsequently become eligible for Medicare by reaching 65 years of age or by completing the 24-month disability waiting period. Once they become dual-eligible beneficiaries, they can no longer receive coverage from state Medicaid agencies for their Part D-covered prescription drugs. In 2006, this group represented approximately one- third of the new dual-eligible beneficiaries enrolled in PDPs by CMS. CMS can generally learn from states when these individuals will become dually eligible. For dual-eligible beneficiaries, Medicare provides a low-income subsidy that covers most of their out-of-pocket costs for Part D drug coverage. This subsidy covers the full amount of the monthly premium that non- subsidy-eligible beneficiaries normally pay, up to the low-income benchmark premium. The subsidy also covers most or all of a dual-eligible beneficiary’s prescription copayments. In 2007, these beneficiaries are responsible for copayments that range from $1 to $5.35 per prescription, depending on their income and asset levels, with the exception of those in long-term care facilities, who pay no copayments. Given the number of entities, information systems, and administrative steps involved, it takes a minimum of 5 weeks for CMS to identify and enroll a new dual-eligible beneficiary in a PDP. As a result, two out of three new dual-eligible beneficiaries—generally those who are Medicare eligible and then become Medicaid eligible—may experience difficulties obtaining their prescription drugs under Part D during this interval. For other new dual-eligible beneficiaries—those switching from Medicaid to Medicare drug coverage—CMS instituted a prospective enrollment process in late 2006 that enrolls these individuals before their date of Medicare eligibility and offers a seamless transition to Part D coverage. Multiple parties and information systems are involved in identifying and enrolling dual-eligible beneficiaries in PDPs. 
As shown in figure 1, CMS, the Social Security Administration (SSA), state Medicaid agencies, and PDP sponsors play key roles in providing information needed to ensure that new dual-eligible beneficiaries are identified and enrolled properly. SSA maintains information on Medicare eligibility that is used by CMS and some states. State Medicaid agencies are responsible for forwarding to CMS lists of beneficiaries whom the state believes to be eligible for both Medicare and Medicaid. CMS is then responsible for making plan assignments and processing enrollments. PDP sponsors maintain information systems that are responsible for exchanging enrollment and billing information with CMS. The process of enrolling dual-eligible beneficiaries requires several steps. It begins when state Medicaid agencies identify new dual-eligible beneficiaries and ends when PDPs make billing information available to pharmacies and send enrollment information to dual-eligible beneficiaries. We estimate that it takes at least 5 weeks to complete the process under current procedures. During this interval, pharmacies may not have up-to- date PDP enrollment information on new dual-eligible individuals. This may result in beneficiaries having difficulty obtaining Part D-covered drugs at their pharmacies. To illustrate why this occurs, we present the hypothetical example of Mr. Smith, who as a Medicare beneficiary did not sign up for the Part D drug benefit and, therefore, upon becoming Medicaid eligible, was enrolled in a PDP by CMS. (Fig. 2 shows the steps in Mr. Smith’s enrollment process.) From the time Mr. Smith applies for his state’s Medicaid program on August 11, it takes about 1 month for him to receive notification from the state that he is eligible for Medicaid, thus beginning the enrollment process. From there, Mr. Smith’s new status is submitted by his state to CMS in a monthly file transmittal. Once CMS receives the lists of dual- eligible beneficiaries from all of the states, it verifies eligibility for Medicare and sets each beneficiary’s cost-sharing level. Then, around October 8, CMS assigns Mr. Smith to a PDP randomly, based on the premium level and the geographic area served by the PDP. CMS next notifies the PDP sponsor, which then has to enroll him in its plan and assign the necessary billing information. This billing information, such as a member identification number, is necessary for pharmacies to correctly bill the PDP for Mr. Smith’s prescriptions. The PDP also has to inform Mr. Smith of his enrollment information. By the time this process is completed, it is the middle of October. CMS has developed some contingency measures to help individuals like Mr. Smith during the processing interval. However, we found that these measures have not always worked effectively. For instance, CMS designed an enrollment contingency option to ensure that dual-eligible beneficiaries who were not yet enrolled in a PDP could get their medications covered under Part D, while also providing assurance that the pharmacy would be reimbursed for those medications. However, representatives of pharmacy associations we spoke with reported problems with reimbursements after using this option, which has led some pharmacies to stop using it. To avoid a gap in coverage for beneficiaries transitioning from Medicaid to Medicare prescription drug coverage, CMS has implemented a prospective enrollment process. 
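Before turning to how prospective enrollment works, the standard processing interval described above can be laid out as a simple timeline calculation. This is a minimal sketch based on the hypothetical Mr. Smith example; the milestone dates, the assumed year, and the step durations are illustrative assumptions drawn from that example, not CMS system parameters.

```python
from datetime import date

# Illustrative milestones from the hypothetical Mr. Smith example;
# actual dates vary by state, plan, and monthly file cutoffs.
MILESTONES = [
    ("Medicaid application filed with the state", date(2006, 8, 11)),
    ("State notifies Mr. Smith he is Medicaid eligible", date(2006, 9, 11)),
    ("State submits dual-eligible list to CMS in monthly file", date(2006, 9, 30)),
    ("CMS verifies Medicare eligibility and assigns a PDP", date(2006, 10, 8)),
    ("PDP enrolls Mr. Smith and issues billing information", date(2006, 10, 15)),
]

def processing_interval(milestones):
    """Weeks elapsed from the start of the enrollment process (state
    notification of Medicaid eligibility) to usable PDP billing information."""
    start = milestones[1][1]   # process begins when the state notifies the beneficiary
    end = milestones[-1][1]
    return (end - start).days / 7

if __name__ == "__main__":
    for label, day in MILESTONES:
        print(f"{day:%b %d}: {label}")
    print(f"Processing interval: about {processing_interval(MILESTONES):.0f} weeks")
```

Under these assumed dates the interval comes to roughly 5 weeks, which is why pharmacies may lack up-to-date enrollment and billing information for a new dual-eligible beneficiary during that period.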
Because states can predict and notify CMS which Medicaid beneficiaries will become new dual-eligible beneficiaries and when, CMS begins the enrollment process for these individuals 2 months before their anticipated dual-eligible status is attained. By conducting the processing steps early, the prospective enrollment used for this group of new dual-eligible beneficiaries should ensure a seamless transition from Medicaid drug coverage to Medicare Part D coverage. Fully implemented in November 2006, prospective enrollment applies to about one-third of the new dual-eligible beneficiaries enrolled in PDPs by CMS. For the majority of new dual-eligible beneficiaries, CMS requires PDPs to provide drug coverage retroactively, typically by several months. During 2006, Medicare paid PDPs millions of dollars to provide coverage to dual-eligible beneficiaries for drug costs that may have been incurred during the retroactive coverage period. However, we found that CMS did not fully implement or monitor the impact of this policy. CMS made the effective date of Part D drug coverage for Medicare beneficiaries who become Medicaid eligible coincide with the effective date of their Medicaid eligibility. Under this policy, Part D coverage for these beneficiaries is effective the first day of the month that Medicaid eligibility is effective, which generally occurs 3 months prior to the date an individual's Medicaid application was submitted to the state, if the individual was eligible for Medicaid during this time. Thus, the Part D coverage period can extend back several months from when the actual PDP enrollment takes place. Medicare makes payments to the PDPs for providing drug coverage retroactively. Specifically, PDPs are paid approximately $90 per month for the retroactive coverage period. PDPs, in turn, are responsible for reimbursing their members (or another payer) for Part D drug costs incurred during the retroactive months. For instance, in the case of Mr. Smith, while he applied for Medicaid in August and learned of his PDP assignment for Part D in October, his coverage was effective May 1. If Mr. Smith incurred any costs for Part D-covered prescription drugs from May—when he became eligible for Medicaid—through October, he could submit his receipts to his assigned PDP and be reimbursed by the PDP, less the copayments he would pay as a dual-eligible beneficiary. We found that CMS's implementation of this policy in 2006 was incomplete. While dual-eligible beneficiaries were entitled to reimbursement by their PDPs in 2006, neither CMS nor PDPs notified dual-eligible beneficiaries of this right. The model letters used until March 2007 to inform dual-eligible beneficiaries of their PDP enrollment did not include any language concerning reimbursement of out-of-pocket costs incurred during retroactive coverage periods. In response to a recommendation in our report, CMS modified the model letters that the agency and PDPs use to notify dual-eligible beneficiaries about their PDP enrollment. The revised letters let beneficiaries know that they may be eligible for reimbursement of some prescription costs incurred during retroactive coverage periods. Given the vulnerability of this population, it seems unlikely that many dual-eligible beneficiaries would have contacted their PDPs for reimbursement if they were not clearly informed of their right to do so and given information about how to file for reimbursement; nor would they likely have retained proof of their drug expenditures. Mr.
Smith, for example, would need receipts for drug purchases made during a 5-month period preceding the date he was notified of his PDP enrollment—at a time when he could not foresee the need for doing so. Further, CMS did not monitor how many months of retroactive coverage PDPs provided, nor did it monitor PDP reimbursements to beneficiaries for costs incurred during retroactive coverage periods. Based on data provided by CMS, we estimate that Medicare paid about $100 million to PDP sponsors in 2006 for retroactive coverage. CMS does not know what portion of this $100 million PDPs paid to dual-eligible beneficiaries to reimburse them for drug costs. If Mr. Smith's PDP did not reimburse Mr. Smith for any prescription drugs purchased during the retroactive coverage period, the PDP retained Medicare's payments for that time period. Given the time it takes to complete the enrollment process, CMS has taken action to ensure ready access to Part D for some new dual-eligible beneficiaries, but difficulties remain for others. For the one-third of new dual-eligible beneficiaries whose eligibility can be predicted, CMS's decision to implement prospective enrollment should eliminate the coverage gap in transitioning from Medicaid to Medicare drug coverage. However, because of inherent processing lags, most new dual-eligible beneficiaries may continue to experience difficulties obtaining their drugs for at least 5 weeks after being notified of their dual-eligible status. In addition, CMS's incomplete implementation of its retroactive coverage policy in 2006 means that CMS paid PDPs millions of dollars for coverage during periods for which dual-eligible beneficiaries may not have sought reimbursement for their drug costs. Without routine monitoring of this policy, the agency remains unaware of what portion of these funds was subsequently reimbursed to beneficiaries and, therefore, cannot ensure the efficient use of program funds. Our report contains several recommendations. We recommend that CMS require PDPs to notify beneficiaries of their right to reimbursement and monitor implementation of its retroactive payment policy. We also recommend that CMS take other steps to improve the operational efficiency of the program. Although the agency did not agree with all of them, it has already taken steps to implement some of our recommendations. As of March 2007, CMS has modified its letters to dual-eligible beneficiaries to include language informing them of their right to reimbursement for drug costs incurred during retroactive coverage periods and required PDP sponsors to do the same. In addition, CMS officials told us that they plan to analyze data to determine the magnitude of payments made to PDPs for retroactive coverage and the amounts PDPs have paid to beneficiaries. We hope that CMS will use this information to evaluate the effectiveness of its retroactive coverage policy. If, after conducting the analysis, CMS determines that it is paying PDPs substantial amounts of money and dual-eligible beneficiaries are not requesting reimbursements, the agency may want to rethink its policy to ensure the most efficient use of Medicare funds. Mr. Chairman, this concludes my prepared remarks. I would be pleased to respond to any questions that you or other members of the committee may have at this time. For further information regarding this testimony, please contact Kathleen King at (202) 512-7119 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Rosamond Katz, Assistant Director; Lori Achman; and Samantha Poppe. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Under the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), dual-eligible beneficiaries--individuals with both Medicare and Medicaid coverage--have their drug costs covered under Medicare Part D rather than under state Medicaid programs. The MMA requires the Centers for Medicare & Medicaid Services (CMS) to enroll these beneficiaries in a Medicare prescription drug plan (PDP) if they do not select a plan on their own. CMS enrolled about 5.5 million dual-eligible beneficiaries in late 2005 and about 634,000 beneficiaries who became dually eligible during 2006. GAO was asked to testify on (1) CMS's process for enrolling new dual-eligible beneficiaries into PDPs and its effect on access to drugs and (2) how CMS set the effective coverage date for certain dual-eligible beneficiaries and its implementation of this policy. This testimony is based on a GAO report that is being released today, Medicare Part D: Challenges in Enrolling New Dual-Eligible Beneficiaries (GAO-07-272). CMS's process for enrolling new dual-eligible beneficiaries who have not yet signed up for a PDP involves many parties, information systems, and administrative steps, and takes a minimum of 5 weeks to complete. For about two-thirds of these individuals--generally Medicare beneficiaries who subsequently qualify for Medicaid--pharmacies may not have up-to-date PDP enrollment information needed to bill PDPs appropriately until the beneficiaries' data are completely processed. As a result, these beneficiaries may have difficulty obtaining their Part D-covered prescription drugs during this interval. CMS has created contingency measures to help individuals obtain their new Medicare benefit, but these measures have not always worked effectively. For the other one-third of new dual-eligible beneficiaries--Medicaid enrollees who become Medicare-eligible because of age or disability--CMS eliminated the impact of processing time by enrolling them in PDPs just prior to their attaining Medicare eligibility. This prospective enrollment, implemented in late 2006, offers these dual-eligible beneficiaries a seamless transition to Medicare Part D coverage. CMS set the effective Part D coverage date for Medicare-eligible beneficiaries who subsequently become eligible for Medicaid to coincide with the date their Medicaid coverage becomes effective. Under this policy, which was designed to provide drug coverage for dual-eligible beneficiaries as soon as they attain dual-eligible status, the start of their Part D coverage can extend retroactively for several months before the date beneficiaries are notified of their PDP enrollment. GAO found that CMS did not fully implement or monitor the impact of this policy. Although beneficiaries are entitled to reimbursement for covered drug costs incurred during this retroactive period, CMS did not begin informing them of this right until March 2007.
Given their vulnerability, it is unlikely that these beneficiaries would have sought reimbursement or retained proof of their drug purchases if they were not informed of their right to do so. Also, CMS made monthly payments to PDPs for providing drug coverage during retroactive periods, but did not monitor PDPs' reimbursements to beneficiaries during that time period. GAO estimated that in 2006, Medicare paid PDPs millions of dollars for coverage during periods for which dual-eligible beneficiaries may not have sought reimbursement for their drug costs. |
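The dollar flows under the retroactive coverage policy discussed above can be illustrated with a minimal sketch. The approximately $90 monthly payment and the roughly 5-month retroactive period in the Mr. Smith example come from the report; the receipt and copayment amounts are illustrative assumptions.

```python
# Minimal sketch of the retroactive-coverage dollar flow. The ~$90 monthly payment and
# the 5-month retroactive period reflect the report; receipts and copays are assumed.
MONTHLY_PDP_PAYMENT = 90.00  # approximate Medicare payment to a PDP per retroactive month

def pdp_retroactive_payment(retro_months: int) -> float:
    """Capitated payments Medicare makes to the PDP for the retroactive period."""
    return retro_months * MONTHLY_PDP_PAYMENT

def beneficiary_reimbursement(receipts, copays) -> float:
    """What the PDP owes the beneficiary: documented drug costs less applicable copayments."""
    return sum(receipts) - sum(copays)

# Mr. Smith: coverage effective May 1, enrollment completed in October -> about 5 retroactive months.
print(pdp_retroactive_payment(5))                                 # 450.0 paid to the PDP
print(beneficiary_reimbursement([62.40, 118.00], [3.10, 5.35]))   # 171.95 owed if receipts are filed
# If the beneficiary never files receipts, the PDP keeps the full capitated amount.
```

Scaled across the new dual-eligible beneficiaries enrolled during 2006, payments of this kind make up the roughly $100 million estimate cited above.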
The Social Security Administration (SSA) administers three federal cash payment programs: the Old Age and Survivors Insurance (OASI) program, the Disability Insurance (DI) program, and the Supplemental Security Income (SSI) program. OASI and DI are social insurance programs authorized under title II of the Social Security Act that pay monthly benefits to eligible individuals or their families based on average annual earnings in covered employment. These payments are made from the OASI and DI trust funds, which accumulate income derived from the mandatory payment of employment taxes. In contrast, the SSI program is the nation's largest cash assistance program for the poor, and it is funded by general revenues. SSI was authorized under title XVI of the Social Security Act to replace federal grants to similar state-administered programs. The program ensures a minimum level of income for individuals who are aged, blind, or disabled, and who meet financial eligibility standards. The monthly benefit amount is based on recipients' income and consists of a basic federal payment and, in some cases, a state supplement. SSA was authorized to administer the SSI program because it already had a system in place for paying monthly benefits to large numbers of people, including a large number of field offices that could be used as contact points for those seeking information and benefits. Since its implementation, the SSI program has increased rapidly in both size and cost. Between 1974 and 1996, the number of SSI recipients increased almost 65 percent from about 4 million to about 6.6 million, and federal benefit payments rose about 565 percent from $3.8 billion to about $25.3 billion. Program growth has been accompanied by significant changes in the make-up of the SSI recipient population: the proportion of the caseload consisting of aged recipients has decreased, while the proportion of younger, mentally disabled recipients has increased dramatically. Because these recipients tend to stay on the rolls longer, they are major contributors to increasing program costs. Over time, changes in the SSI population and long-standing program problems have been accompanied by increasing overpayments. In fiscal year 1997, outstanding SSI debt and newly detected overpayments for the year totaled $2.6 billion (see fig. 1). SSA recovered only $437 million in that year, or about 17 percent of the total amount. SSA is headed by a Commissioner who is responsible for administering the operations of the OASI, DI, and SSI programs. According to the most recent agency data, SSA employs about 65,000 employees and serves the public through a network of 1,300 field offices located throughout 10 SSA regions. The Commissioner and Deputy Commissioner head the management team and are responsible for addressing SSA's existing problems and managing its future challenges. In addition to paying OASI, DI, and SSI benefits, SSA performs four basic functions in support of its programs. These include issuing Social Security numbers, maintaining earnings information, making initial eligibility determinations for payments, and making changes to beneficiaries' accounts that affect their benefit payments (postentitlement activities). To execute these activities, SSA is organized functionally into eight specialized components. Each component is headed by a Deputy Commissioner whose responsibilities include supporting the agency's mission and objectives.
SSA also has an Office of Inspector General (OIG), which conducts audits and investigations of SSA’s programs, as well as an Office of General Counsel that provides legal advice and litigation services for the agency. (See app. I for SSA’s organizational chart.) To be determined eligible for SSI, individuals must be at least 65 years of age, blind, or disabled. Their income and assets also must be below certain limitations. To qualify as disabled, applicants must be unable to engage in substantial gainful activity (SGA) because of an impairment that is expected to result in death or to last at least 12 months. SSA relies on state Disability Determination Services (DDS) to make the initial medical determination of eligibility. The DDSs are also required to help SSA identify candidates for return-to-work services and may refer eligible individuals to state vocational rehabilitation (VR) agencies for services such as counseling and job placement, as well as therapy and training. Claimants whose initial disability claim is denied may request a reconsideration of their claim by different DDS staff. If this review results in a confirmation of the original denial, claimants may appeal to an administrative law judge (ALJ) located in SSA’s Office of Hearings and Appeals (OHA). Claimants who disagree with an ALJ denial may request that the case be reviewed by SSA’s Appeals Council. After all SSA administrative remedies are exhausted, a claimant has further appeal rights within the federal court system, up to and including the U.S. Supreme Court. SSA field staff are responsible for determining an individual’s financial eligibility for SSI benefits. To meet the financial requirements, individuals may not have combined income greater than the current maximum monthly benefit of $494 ($741 for a couple), or have resources worth more than $2,000 ($3,000 for a couple). During the initial application process at SSA field offices, individuals are required to report any information that may affect their eligibility for benefits. Similarly, once individuals receive SSI benefits, they are required to timely report events such as changes in income, resources, marital status, or living arrangements to SSA field office staff. To a significant extent, SSA depends on program applicants and recipients to accurately report important eligibility information. However, to determine whether recipients remain financially eligible for SSI benefits, SSA periodically conducts redeterminations. These are reviews of financial eligibility factors such as income, resources, and living arrangements. Recipients are reviewed at least every 6 years, but reviews may be more frequent if SSA determines that changes in eligibility are likely. SSA also uses computer matches to determine recipients’ continuing financial eligibility. Matches compare SSI payment records against recipient financial information contained in the payment files of third parties, such as other federal and state government agencies. To determine whether a recipient has medically improved to the extent that he or she is no longer considered to be disabled, SSA also conducts periodic examinations called continuing disability reviews (CDR). SSI differs from OASI and DI in terms of its underlying program principles and eligibility requirements. OASI and DI are intended to help protect working Americans and their survivors against the loss of income as a result of retirement, death, or disability. 
These programs provide benefits to individuals who have worked a specified amount of time during which they contributed to the Social Security Trust Funds through employment taxes. Financial eligibility for the OASI and DI programs is fairly easy to determine and, once established, rarely changes. For example, there normally is little difficulty in establishing whether an OASI or DI beneficiary has the work history required for insured status, and what the benefit amount should be. By contrast, SSI is a welfare program composed of aged, blind, and disabled recipients with limited or no work histories. Benefit eligibility and payment amounts for this population are determined by complex and often difficult-to-verify financial factors such as an individual's income, resource levels, and living arrangements. Individual financial circumstances may change often, requiring SSA to frequently reassess recipients' eligibility for and level of benefits. Thus, the SSI program tends to be more difficult, labor-intensive, and time-consuming to administer than the OASI and DI programs. These fundamental differences in SSA's programs require different management approaches for implementing the programs, as well as different policies and priorities to serve their diverse populations and ensure the accuracy of program payments. Since its inception, the SSI program has been difficult to administer because, similar to other means-tested programs, it relies on complicated criteria to determine initial and continuing eligibility, myriad policies and procedures for assessing recipients' often changing income and resource levels, and constant monitoring to ensure benefit amounts are adjusted to reflect these changes quickly and adequately. Over the years, the SSI program has become even more complex as a result of policy and procedural changes often influenced by legislation and decisions issued by the courts. Over time, legislative changes have had the effect of liberalizing the SSI eligibility criteria on many occasions and restricting them at other times. Thus, SSA has been tasked with implementing ever-changing program policies and managing fluctuating recipient populations. The Social Security Disability Benefits Reform Act of 1984 is a primary example of a legislative change significantly affecting the SSI program. This act expanded the general definition of disability for both children and adults. Specifically, the act required new standards for judging the impact of mental impairments on eligibility. It also required SSA to consider the combined effects of multiple impairments if no single impairment was sufficiently disabling to allow someone to qualify for benefits. Furthermore, the act allowed SSA to accept and consider nonmedical evidence provided, for example, by an applicant's family and friends when making the disability determination. Finally, the act required SSA to obtain medical evidence from an individual's treating physician, if possible, before evaluating evidence from other sources. This law greatly expanded access to disability benefits and contributed to increases in the number of younger, mentally disabled individuals on the SSI rolls. In 1996, the Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act, which made the SSI eligibility criteria for children and noncitizens more restrictive than they had been. Subsequently, the Balanced Budget Act of 1997 restored SSI benefits for a significant number of noncitizens.
Thus, SSA has expended significant staff and administrative time dealing with the effects of these legislative changes. A major court decision also played a role in the transformation and increasing complexity of the SSI program. In 1990, the Supreme Court held in Sullivan v. Zebley that SSA's disability determination process violated the law because it held children to a more restrictive standard than that applied to adults. To comply with the decision, SSA developed an individualized functional assessment (IFA) for those children who did not qualify for disability on the basis of SSA's strict listings of impairments. The IFA determined the extent to which the child's impairment limited his or her ability to act and behave in age-appropriate ways. By expanding the SSI eligibility criteria, this decision contributed to a tripling in the number of children receiving SSI benefits between 1989 and 1996, from about 300,000 to more than 1 million. It also contributed to changes in the composition of childhood disability rolls as more awards were made to children who might not have met SSA's listing of impairments but qualified on the basis of the less restrictive Zebley criteria. In addition to increased workloads, these changes further complicated the disability determination process by requiring SSA to develop and implement a new set of regulations and procedures. Moreover, the nature of these disabilities (generally, less severe mental and physical impairments) forced SSA to make more determinations on the basis of eligibility criteria that we reported to be subjective and difficult to consistently administer. With these legislative and court-driven SSI changes, program administration has become increasingly complicated. SSA's own reviews have found that, over the years, the SSI program has become "encrusted" with a mass of complicating and sometimes contradictory legislation and court-mandated changes. According to SSA, each law or court decision was designed to fix a specific program problem. Cumulatively, however, their effect was to make the program much more complex to administer and difficult for recipients to understand than was originally intended. The objective of this review was to document the underlying "root" causes of long-standing SSI program problems. To do our work, we conducted an extensive literature review of 200 studies on the SSI program conducted by GAO, SSA, congressional committees, and various other external groups. Some of these studies dated back to the SSI program's inception. We also conducted more than 100 in-depth interviews with SSA personnel at all levels of the organization to obtain their perspectives on the most significant problems in the SSI program, the "root" causes of those problems, and agency actions taken to address them. We supplemented this information with analysis of program performance data related to SSI beneficiary groups, overpayments, payment accuracy rates, penalties, and so forth. Our review was conducted at SSA's headquarters in Baltimore and four SSA regions: San Francisco, Dallas, Atlanta, and New York—regions that account for more than 50 percent of the SSI population. Our interviews included senior executives and middle managers as well as numerous line staff. We also visited four state DDSs located in the four regions noted above, as well as SSA's OHA and Office of Inspector General. While we sometimes state the statistical results of our structured interviews, the results are not generalizable to all SSA employees and managers.
However, we believe that these interviews are useful indicators of the views of staff responsible for servicing SSI workloads. Our work was conducted between October 1996 and February 1998 in accordance with generally accepted government auditing standards. This report is organized around the two underlying causes that we believe have had the most significant impact on SSI program performance and integrity: SSA’s organizational culture and its reactive management approach. Thus, a discussion of SSA’s organizational culture and its impact on the financial integrity of the SSI program is addressed in chapter 2. Chapter 3 pertains to SSA’s management approach and its impact on SSA’s ability to provide adequate program direction. In presenting our findings, we have linked each long-standing problem to its most related root cause. However, we are aware that to varying degrees, SSA’s organizational culture and management approach often overlap and cut across several of these problems. As a result, some combination of these two root causes may be present in each problem area. In some instances, the examples we cite as evidence of organizational culture are also applicable to our discussion of SSA’s management approach. Our method of categorization served as a general framework for linking each problem area to its most likely root cause and for determining an appropriate course of action to better control program expenditures and protect program integrity. SSA’s organizational culture has historically placed a greater value on quickly processing and paying SSI claims than on controlling program expenditures. This culture has adversely affected SSA’s ability to address several long-standing program problems and ultimately control program expenditures. To a significant extent, an agency’s culture emanates from and is shaped by top management officials who are charged with establishing the priorities, objectives, and performance measures that drive day-to-day program operations. Thus, over time, what is regularly emphasized, measured, and rewarded by agency management becomes ingrained in the immediate workload priorities of line managers and field staff. To the extent that agency priorities are not adequately balanced, serious vulnerabilities may arise and continue to hinder performance. We reported in 1987 that SSA’s agencywide operations had been heavily influenced by an organizational culture or value system that placed a greater emphasis on processing and paying claims than on controlling program costs or improving operational efficiency. As evidence of this, we cited the results of our survey of a random sample of almost 650 mid-level managers. Over 67 percent of these managers noted that the number one factor emphasized by upper management when assessing their work was timeliness. More than a decade later, our current field work confirmed that little has changed, and SSA’s organizational culture continues to pose a barrier to reducing SSI fraud, abuse, and mismanagement. More specifically, our work shows that, to a great extent, SSI program vulnerabilities are attributable to an agency culture that has tended to view the SSI program in much the same way as SSA’s title II programs—where emphasis is placed on quickly processing claims and making payments to individuals with an earned right to benefits—rather than as a welfare program that requires stronger income and asset verification policies. 
Consequently, SSI program policies and internal controls do not adequately protect taxpayer dollars from being overspent or abused. SSA's underlying culture has been most evident as it relates to three long-standing problem areas affecting SSI program performance: (1) inadequate attention to verifying initial and continuing SSI eligibility, (2) lack of priority on recovering SSI overpayments, and (3) insufficient attention to addressing program fraud and abuse. When determining SSI eligibility, SSA relies heavily on applicants and recipients to self-report important information relating to their financial status and disabling condition. Although SSA has procedures in place to verify this information, they are often untimely, incomplete, and subservient to the primary agency goal of quickly processing and paying claims. Data provided to us by SSA's quality reviewers confirmed that the current system of self-reporting and inadequate verification of this information has been costly to the program. According to SSA, 77 percent of all payment errors resulting in overpayments between fiscal years 1991 and 1995 were attributable to recipients' noncompliance with reporting requirements. Statistics also show that once an SSI overpayment occurs, SSA's success at recovering the overpayment is limited. Thus, effective initial and posteligibility verification policies are essential to avoiding or mitigating potential overpayment situations. Our prior work suggests that recipients do not always report required information when they should, and may not report it at all. For example, in 1996, we reported that about 3,000 current and former prisoners in 13 county and local jails had been erroneously paid $5 million in SSI benefits, mainly because recipients or their representative payees did not report the incarceration to SSA as required, and SSA had not arranged for localities to report such information. In a report issued last year on SSI recipients admitted to nursing homes, we found that despite legislation requiring recipients and facilities to report such admissions, thousands of SSI recipients residing in nursing homes continued to receive full SSI benefits. These erroneous payments occurred because recipients and nursing homes did not report this information and SSA lacked timely and complete automated admissions data. We also found that some offices placed a much lower priority on investigating nursing home admissions information than on work responsibilities that are monitored by SSA management, such as processing claims. SSA has estimated that overpayments to recipients in nursing homes may exceed $100 million annually. To verify that recipient financial information is correct, SSA generally relies on computer matching of data from other federal and state agencies, such as the Internal Revenue Service, the Department of Veterans Affairs, and state-maintained monthly earnings and unemployment benefits data. In many instances, these matches allow SSA to detect information recipients fail to report. However, SSA's data matches are not always the most effective means of verifying recipient financial status because the information is often quite old and sometimes incomplete. In 1996, we estimated that direct on-line connections (as opposed to computer matches) between SSA's computers and databases maintained by state agencies (welfare benefits, unemployment insurance, and workers' compensation benefits) could have prevented or more quickly detected $34 million in SSI overpayments in one 12-month period.
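The kind of financial check these data sources support can be illustrated with a minimal sketch, using the $494/$741 monthly income and $2,000/$3,000 resource limits cited earlier. The record layout and matching logic shown are illustrative assumptions rather than a depiction of SSA's actual systems, and countable income under SSI involves exclusions not shown here.

```python
# Illustrative financial check using the SSI limits cited earlier; SSA's real rules
# (income exclusions, deeming, and so forth) are more complex than shown here.
INCOME_LIMIT = {"individual": 494.00, "couple": 741.00}      # monthly limits cited above
RESOURCE_LIMIT = {"individual": 2000.00, "couple": 3000.00}

def financially_eligible(unit: str, monthly_income: float, resources: float) -> bool:
    return monthly_income <= INCOME_LIMIT[unit] and resources <= RESOURCE_LIMIT[unit]

# Self-reported figures at application versus wages later surfaced by a computer match.
self_reported_income, resources = 300.00, 1500.00
matched_monthly_wages = 650.00   # e.g., state wage data, often many months old

print(financially_eligible("individual", self_reported_income, resources))   # True at application
print(financially_eligible("individual", matched_monthly_wages, resources))  # False once wages surface
# Because the matched data lag by months, the resulting overpayment has already accrued.
```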
In March 1998, we reported that SSA’s computer matches for earned income rely on state data that are from 6 to 21 months old, allowing overpayments to accrue for this entire period before collection actions can begin. We concluded that newly available Office of Child Support Enforcement (OCSE) databases maintained by SSA could prevent or more quickly detect about $300 million in annual SSI overpayments caused by unreported recipient income. These databases include more timely state-reported information on newly hired employees, as well as the quarterly earnings reported for these individuals. In the same report, we also concluded that opportunities existed for SSA to prevent almost $270 million in overpayments by obtaining more timely financial account information on SSI beneficiaries. This could be accomplished if SSA moves to obtain access to a nationwide network that currently links all financial institutions. Such information would help ensure that individuals whose bank accounts would make them ineligible for SSI do not gain eligibility. On average, SSA collects only about 15 percent of outstanding SSI overpayments. Thus, it is paramount that SSA move forward in obtaining and utilizing more timely and complete recipient financial information to prevent overpayments from occurring. By doing so, SSA may reduce the likelihood of having to go through the often difficult and unsuccessful process of trying to recover overpaid SSI benefits. Our recent work confirmed that recipient self-reporting and SSA’s ineffectiveness at verifying this information remain a major SSI program weakness. During our visits to field offices, nearly 80 percent of staff and managers interviewed noted that recipient nonreporting remains a serious problem in the SSI program. In discussing how SSA could encourage individuals to better report important eligibility information, many staff believed SSA should require recipients to furnish additional documentation on their financial status, promote more frequent use of penalty provisions, and acquire more authority to suspend payments to those who chronically fail to report essential information. Staff and managers were particularly concerned that SSA had not addressed long-standing living arrangement verification problems, despite many years of SSA quality reviews denoting this as an area prone to error and abuse. To determine SSI eligibility and benefit amounts, SSA staff apply a complex set of policies to document an individual’s living arrangements and any additional support they may be receiving from others. This process depends heavily on self-reporting by recipients of whether they live alone or with others; the relationships involved; the extent to which rents, food, utilities, and other household expenditures are shared; and exactly what portion of those expenses the individual pays. In one field office we visited, staff identified a pattern of activity involving recipients who, shortly after becoming eligible for SSI benefits, claim that they have separated from their spouse and are living in separate residences. Staff suspected that these reported changes occurred as married recipients became aware that separate living arrangements would substantially increase their monthly SSI benefits. They also suspected that several local attorneys were preparing “boiler plate” separation agreements to help these individuals qualify for higher benefits. 
However, because of a lack of field representatives necessary to investigate these claims, only rarely were these cases closely reviewed or challenged. Finally, in addition to inadequate verification of recipients’ financial eligibility, SSA has historically placed little priority on determining whether SSI recipients continue to remain medically eligible for benefits. To determine medical eligibility, SSA must conduct continuing disability reviews (CDR). SSA recently estimated that conducting SSI CDRs on the 1.9 million recipients due or overdue for a CDR would remove from the rolls about 5 percent of those individuals. On the basis of this information, we estimated that these recipients would have received $481 million in federal SSI benefits. However, since the program’s inception, SSA conducted relatively few SSI CDRs until it was first mandated to do so by the Congress in 1994. This legislation required SSA to review one-third of the beneficiaries who reached age 18 and at least 100,000 additional beneficiaries annually for fiscal years 1996 to 1998. Subsequent legislation passed in 1996 required SSA to conduct additional CDRs for children who were likely to improve and low birth weight infants in their first year of life, as well as redeterminations for all SSI children beginning on their 18th birthday. SSA estimates that about 600,000 cases will be added to its CDR workloads between fiscal years 1998 and 2000 to meet these requirements. Agency management has attributed its past failure to conduct SSI CDRs to resource constraints and no legal requirement to do so. SSA’s inaction likely resulted in continuing benefit payments to ineligible recipients and hundreds of millions of dollars in unwarranted program costs since the program began. However, SSA recently reported that during fiscal year 1997, it processed over 690,000 CDRs, a 38-percent increase over 1996. The agency expects to process 1.2 million CDRs in fiscal year 1998. In a briefing with SSA management, we conveyed our finding that the eligibility verification aspects of the SSI program have not been adequately emphasized. SSA’s Acting Principal Deputy Commissioner acknowledged that because of the rapidly rising workloads of prior years, SSA decided to emphasize and prioritize the expedient processing and payment of SSI claims rather than delay final decisions by requiring more thorough verification steps and risk hurting some recipients. More recently, however, SSA has begun to take more decisive action to protect the financial integrity of the SSI program. For example, SSA has started a program to identify SSI recipients in jail who should no longer receive benefits and is expanding its use of on-line state data to obtain more real-time applicant and recipient information. In accordance with our recommendation, SSA also plans to give field offices on-line access to OCSE wage data, new-hire data, and unemployment insurance data beginning in March 1999. This should allow field staff to better prevent SSI overpayments by identifying undisclosed earnings at the time of application. SSA’s fiscal year 1999 budget also asks for an additional $50 million to complete financial redeterminations for individuals who have been designated by SSA as having a high probability of being overpaid. 
SSA also told us that it is continuing to study SSI living arrangement policies and may ultimately consider proposing legislative changes to reduce the complexity of the verification process and protect program dollars from being overpaid to recipients. Finally, in its current strategic plan, SSA acknowledges that the needs of applicants and recipients have been the nearly singular focus of past strategic and business plans. The plan now calls for a better balance between SSA's traditional approach to its programs and the need to control program costs. Despite SSA's planned and ongoing efforts, we continue to be concerned that, in many areas, progress has been limited. For example, SSA's negotiations with states to obtain expanded on-line access to their databases are moving slowly, and the agency still does not adequately use on-line access as an overpayment detection and prevention tool. In regard to SSI recipients residing in nursing homes, SSA plans to use a newly developed Health Care Financing Administration system to more effectively capture information on admissions to these and other facilities. However, we reported last year that automated nursing home data were already available in all state Medicaid agencies and could have been used by SSA in the interim to identify SSI recipients living in nursing homes within 1 to 3 months of admission. SSA's failure to use this information while waiting for the implementation of an alternative system has left the SSI program open to continued abuse and millions of dollars in potential overpayments. Finally, despite SSA's plans to continue to study SSI living arrangement policies and problems, this costly program vulnerability remains unaddressed more than two decades after implementation of the program. In addition to problems associated with SSA's verification of important SSI eligibility information, SSA has not aggressively pursued the recovery of overpayments. Thus, over time, SSA's recovery efforts have been outpaced by outstanding SSI debt, which is becoming an increasingly larger portion of all debt owed to the agency. Between 1989 and 1997, outstanding SSI debt and annual overpayments more than tripled to about $2.6 billion. Although overpayment recoveries also increased each year during this period, the gap between what is owed SSA and what is actually collected each year has continued to widen. One reason overpayment recoveries remain low is that SSA has not adequately used the SSI overpayment recovery tools currently available to it. For example, SSA only this year began using the tax refund offset (TRO) to recover SSI overpayments from former SSI recipients, despite having had the authority to do so since 1984. The TRO has proven effective in another welfare program—Food Stamps—for collecting delinquent debt. In explaining why SSA did not act sooner to implement the TRO, agency officials told us that because the targeted population was generally poor, the expected recovery amounts—estimated at about $6 million for fiscal year 1998—were relatively small compared with the total debt owed SSA. However, an official responsible for overseeing this initiative conceded that SSA has historically experienced little success recovering overpayments from former SSI recipients and, regardless of its dollar impact, the TRO represented one of the few tools available to SSA for increasing recoveries for this population.
He also agreed that sustained use of the TRO could deter recipients from misreporting eligibility information to SSA in the future. So far this year, the TRO has far exceeded SSA's overpayment recovery estimates. In fact, SSA recently testified that, in the first 4 months of 1998, it had collected more than $23 million. Another reason SSI overpayment debt has increased is that SSA does not have, and has not adequately pursued, authority to use more aggressive debt collection tools, including the ability to administratively intercept other federal payments recipients may receive, notify credit bureaus of an individual's indebtedness, use private collection agencies, and charge interest on outstanding SSI debt. At present, SSA lacks statutory authority to use these tools to recover SSI overpayments. In 1995, we reported that welfare programs that used a broad range of collection tools, such as those listed, experienced better rates of overpayment recovery than programs that did not. In a recent testimony, SSA management also acknowledged that such tools are valuable in recovering program overpayments from individuals who have left the SSI rolls. Following a number of GAO briefings over the last year, and a recent testimony in which we noted SSA's continued reluctance to pursue more aggressive debt collection tools, SSA announced that it is now seeking authority to recover SSI overpayments from title II benefits paid to former SSI recipients, as well as to use credit bureaus, collection agencies, interest levies, and so forth to strengthen its collection efforts. To recover overpayments from current SSI beneficiaries, SSA relies primarily on withholding monthly SSI benefit payments. Before 1984, SSA could withhold up to 100 percent of an overpaid individual's benefit amount. However, pursuant to the Deficit Reduction Act of 1984 (P.L. 98-369), SSA was limited to offsetting a maximum of 10 percent of a recipient's total monthly income, which includes SSI payments. Thus, SSA lost the discretion to withhold larger amounts, even for individuals who willfully and/or continually fail to report essential information. SSA also lost a valuable means of encouraging timely recipient reporting of information. In discussing the barriers to increased overpayment collections, headquarters officials told us that the 10-percent withholding ceiling has affected SSI collection efforts. However, SSA has not sought a change in this cap, even for individuals who chronically fail to report important eligibility information or abuse the program. SSA also is not adequately utilizing overpayment penalties as a means of ensuring that recipients comply with reporting policies. Field office personnel may impose penalties on recipients who fail to submit timely reports of events that affect their eligibility for, or the amount of, SSI payments. Overpayment penalties range from $25 to $100. SSA's own reviews have noted that overpayment penalties, if enforced by SSA, could serve as a deterrent to untimely recipient reporting. However, the agency found that penalty provisions were almost universally ignored by field offices. In a sample review, analysts concluded that no penalty was considered in about 50 percent of overpayment cases in which the individual had a history of failure to make timely reports of earnings or living arrangements, or both.
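The practical effect of the 10-percent withholding ceiling and the modest penalty amounts discussed above can be illustrated with a minimal sketch; the overpayment and income amounts in the example are illustrative assumptions.

```python
import math

# Sketch of recovery under the post-1984 ceiling (10 percent of total monthly income)
# and the $25-$100 penalty range. Example amounts are assumptions for illustration.
def max_monthly_withholding(total_monthly_income: float) -> float:
    return 0.10 * total_monthly_income

def months_to_recover(overpayment: float, total_monthly_income: float) -> int:
    return math.ceil(overpayment / max_monthly_withholding(total_monthly_income))

PENALTY_RANGE = (25, 100)   # per unreported event; rarely assessed in practice

# A recipient whose only income is a $494 monthly SSI payment and who was overpaid $2,000:
print(max_monthly_withholding(494.00))     # 49.40 is the most SSA can withhold each month
print(months_to_recover(2000.00, 494.00))  # about 41 months to recover the debt in full
```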
During our visits to field offices, almost 80 percent of staff and managers interviewed told us that penalties are still rarely used in the field to encourage recipients to better report essential eligibility information. When asked why penalties were rarely used, staff commonly complained that SSA management did not encourage their use. Many others said that current penalty policies were unclear and too labor intensive to implement, and that dollar amounts were too low to change recipient behavior. Our analysis of data from all 10 of SSA's regions confirmed that penalties are still rarely levied. In one 12-month period, SSA detected about $1.2 billion in erroneous payments. However, less than $80,000 in penalties was assessed by SSA, and only about $8,000 was actually collected. These infrequent penalty assessments are a concern, considering that SSA's own reviews have found that, on average, 77 percent of all overpayments are attributable to recipient noncompliance with reporting requirements. Efforts to improve overpayment debt recoveries are particularly important because the gap between what is collected and what is owed the program is continuing to grow (see fig. 2.1). SSA's data show that in fiscal year 1989, outstanding SSI debt and newly detected overpayments totaled $792 million. During that year, SSA recovered about $165 million. By fiscal year 1997, outstanding SSI debt and new overpayment detections grew to about $2.6 billion. Of the total amount, SSA recovered $437 million. Although annual overpayment recoveries have increased steadily, the amount of outstanding SSI debt has consistently outpaced SSA's collection efforts. Furthermore, as overpayment debt has grown, the amounts written off by SSA each year have also increased. Write-offs include overpayment waivers and debt deemed uncollectible by SSA. Policies governing the issuance of waivers take into account whether the recipient or SSA was at fault in creating the overpayment and the dollar amounts involved. In addition to waivers, SSA may deem some debts uncollectible for numerous reasons, including the inability to locate an individual for a prolonged period. Since 1989, SSI write-offs have totaled more than $1.8 billion. This number includes $562 million written off in 1997 alone (see fig. 2.2). According to SSA, the 1997 write-offs included about $345 million in debt that had been carried on SSA's books for years and was determined to be not cost-effective to pursue. Regardless of the reason, these write-offs represent overpaid program benefit dollars that will likely never be recovered. More importantly, when these accumulated write-offs are added to the outstanding SSI debt after collections for 1997, the actual amount of unrecovered SSI debt since 1989 exceeds $3.4 billion. The SSI program remains vulnerable to abuse, despite agency initiatives to address it. Shortly after SSA began administering SSI in 1974, a study group was commissioned to evaluate the program and recommend changes to improve effectiveness and fiscal accountability. The study group noted that a well-designed program with built-in integrity safeguards would hold opportunities and occurrences of fraud to a minimum. However, it ultimately concluded that the SSI program was originally implemented without adequate attention to program integrity considerations and suffered from serious shortcomings in the areas of fraud detection, prevention, and prosecution.
Over the ensuing years, reports of program fraud and abuse have often centered on recipients' failure to report their financial status, the faking of disabilities, and fraudulent residence reporting. For example, we reported in 1995 that "middlemen" were facilitating fraudulent SSI claims while providing translation services to non-English-speaking individuals applying for SSI. These individuals often coached claimants on appearing to be mentally disabled, used dishonest health care providers to submit false medical evidence to SSA, and provided false information on claimants' medical and family history. The following year, we reported that between 1990 and 1994, approximately 3,500 SSI recipients admitted transferring ownership of resources such as cars, cash, houses, land, and other items valued at an estimated $74 million in order to qualify for benefits. This number represents only resource transfers that recipients actually reported to SSA. The SSI program is designed to help individuals who have limited resources meet basic needs. Although these transfers are not prohibited under current law, using them to qualify for SSI benefits has become an abusive practice that raises serious questions about SSA's ability to protect taxpayer dollars from waste and abuse. We estimated that for the cases mentioned, eliminating asset transfers would have saved $14.6 million in program expenditures. The Congressional Budget Office (CBO) has estimated that more than $20 million in additional savings could be realized through 2002 by implementing an asset transfer restriction. Although SSI represents less than 8 percent of SSA's total program expenditures, the prevalence of fraud in the program is significant. In 1997, SSA's OIG noted that the income and resource requirements associated with determining SSI eligibility created additional opportunities for fraud and abuse. The OIG generally receives allegations of fraud directly from the general public, the Congress, other government agencies, SSA personnel, and through its hot line, which began operation in November 1996. Through mid-July 1997, the hot line received 12,680 allegations of fraud. When compared with SSA's other programs—OASI and DI—SSI fraud represented about 37 percent of all allegations received and about 24 percent of the fraud convictions obtained. Since becoming an independent agency in 1995, SSA has begun to take more decisive action to address SSI program fraud and abuse. For example, the number of OIG investigators has nearly tripled from 76 to 227 headquarters and field agents, and in 1997, combating fraud and abuse became a key agency goal. Last year, SSA also created national and regional anti-fraud committees to better identify, track, and investigate patterns of fraudulent activity. In addition, several OIG pilot investigations are under way that are aimed at detecting fraud and abuse earlier in the SSI application process. One such pilot involves the creation of special units located in DDS offices in several states. As a preventive measure, these units review disability applications, document evidence of fraudulent transactions, and identify pervasive patterns of fraud. According to SSA, this new emphasis on early prevention represents a major shift away from how the agency has traditionally dealt with fraud and abuse. SSA also recently established procedures to levy civil monetary penalties against recipients and others who make false statements to obtain SSI benefits.
Following our briefings with SSA's Deputy Commissioner and our two testimonies over the last year that identified asset transfers as an area subject to abuse by recipients, SSA also recently submitted a proposal to the Congress aimed at preventing individuals from transferring assets in order to qualify for SSI benefits. Finally, in its new annual performance plan, SSA has made a commitment to complete a comprehensive action plan to improve the management of the SSI program during fiscal year 1998. This step links to SSA's strategic goal of making its programs the "best in the business, with zero tolerance for fraud and abuse." It is too early to tell what immediate and long-term effects SSA's activities will have on preventing fraud and abuse in the SSI program. However, many years of inadequate attention to program integrity issues have fostered strong skepticism among both headquarters and field staff about whether fraud prevention is an agency priority. In fact, SSA's own studies show that many staff believe the OIG does not adequately investigate fraud referrals or provide adequate feedback on the status of investigations. Other staff noted that constant agency pressure to process more claims impeded the thorough verification of recipient-reported information and the development of fraud referrals. SSA also found that staff were concerned that the agency had not developed office work-credit measures, rewards, and other incentives to encourage staff to devote more time to developing fraud cases—a process that often takes many hours. SSA's Field Office Work Measurement System is used by management to determine what employees are working on, the volume of work completed, and how much time staff spend on particular activities. SSA assigns numerical values to certain tasks, such as processing disability claims, that are based on the average time it takes to accomplish them. For example, SSA determined that an initial SSI claim for an aged person should take about 3.24 hours. Any additional time spent verifying information or investigating suspected fraud for this claim would not receive credit. SSA ultimately uses these data to estimate resource needs, assess component productivity, and justify budget and staffing levels. Our review of SSA's office work-credit system confirmed that adequate measures of the activities and time necessary to develop fraud referrals have not been developed. Nor has SSA developed a means of capturing fraud activity data or recognizing staff for additional time spent developing fraud cases. This weakness in the current work-credit system was noted by a speaker at SSA's annual Anti-Fraud Conference. In a speech advocating a better balance between customer service and protecting the public trust, this field office manager voiced concern that staff have been sent the message that "timeliness and volume are the top priorities." As a result, few staff may be willing to devote significant time to more thorough claims verification because they fear production—cases processed and paid—will be negatively affected. Thus, SSA's new anti-fraud activities and its current work-credit system may be working against each other. In addition to long-standing problems attributable to SSA's organizational culture, our work suggests that SSA's management of the program has often led to untimely and flawed SSI program policies and inadequate program direction.
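Before turning to SSA's management approach, the work-credit dynamic described above can be illustrated with a minimal sketch. The 3.24-hour credit for an initial aged SSI claim comes from the report; treating verification and fraud-development time as earning no credit reflects the finding that no such measures exist, and the scenario itself is hypothetical.

```python
# Sketch of the work-credit dynamic: completed claims earn a fixed credit, while time
# spent on extra verification or fraud development earns none. The 3.24-hour figure
# is from the report; the scenario is hypothetical.
CREDIT_PER_INITIAL_AGED_CLAIM = 3.24   # hours of credit per completed claim

def credited_hours(claims_completed: int, verification_hours: float, fraud_hours: float) -> float:
    return claims_completed * CREDIT_PER_INITIAL_AGED_CLAIM + 0.0 * (verification_hours + fraud_hours)

# An employee who processes two claims outscores one who processes a single claim and
# then spends four hours developing a fraud referral.
print(credited_hours(2, 0.0, 0.0))   # 6.48 hours of credit
print(credited_hours(1, 0.0, 4.0))   # 3.24 hours of credit for more than 7 hours of work
```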
Proactive program management requires a willingness on the part of an agency to identify and decisively address problems before they reach crisis levels. Where internal operational remedies are insufficient to address a particular program weakness, the agency should then suggest and sponsor legislative proposals for addressing underlying policy weaknesses. Proactive management also requires a willingness to identify short- and long-term program priorities and goals and to develop a clearly defined plan for meeting those goals. But SSI program direction and problem resolution have been hindered by SSA’s continued reluctance to take a leadership role in SSI policy development before major program crises occur and the subsequent tendency to react to these crises through a series of often ad hoc and piecemeal initiatives. Program direction has been further impaired by a strategic planning process that has not sufficiently focused on the specific characteristics and needs of the SSI program and its recipients. As the nation’s SSI program expert, SSA is uniquely positioned to assess the program impacts of trends in the SSI population. It is also in the best position to initiate internal policy “fixes” to address specific problems. If internal revisions would not be effective, SSA is best qualified to identify areas where new legislation is needed to address program weaknesses and to assist policymakers in exploring and developing legislative options for change. However, SSA has not been sufficiently aggressive in this regard. Instead, SSA’s approach to policy development has often been reactive in nature, resulting in several missed opportunities to address flawed operational policies and to play a key role with policymakers in addressing critical issues affecting SSI program performance and integrity. Recent examples of SSA’s management approach include its reluctance to develop policy options to address problems associated with assessing the SSI eligibility of children and substance abusers, helping SSI recipients enter the workforce, and determining recipient living arrangements. SSA management has acknowledged the need to take a more proactive policy development role, and in the last few years has initiated several reorganizations of its policy component to strengthen its capacity. In fact, SSA is currently restructuring its research and policy component in a way that it believes will facilitate a stronger focus on SSI program vulnerabilities and address our concerns in this area. In 1997, SSA also made conducting effective policy development, research, and program evaluation a key goal of its strategic plan. However, SSA only recently developed and submitted its first significant package of SSI policy proposals—draft legislation entitled the “Supplemental Security Income Integrity Act of 1998”—to the Congress. In 1984, a Congressional Panel on Social Security noted that it was SSA’s responsibility to contribute to policy-making with advice, information, expert analysis, and the kind of judgment that results from the experience of program operations. However, the panel concluded that SSA’s policy-making had often taken place in an atmosphere of crisis and improvisation rather than as part of a comprehensive strategy for addressing SSA’s major program challenges. In 1995, we also criticized SSA for not taking a more active role in analyzing and suggesting policy options for its programs. 
More specifically, in monitoring SSA’s transition from a component of the Department of Health and Human Services to a separate and independent agency, we noted that SSA needed to take a more active role in addressing its major program challenges, including those related to its SSI caseloads. In 1997, SSA’s Advisory Board issued a report with similar conclusions. The board noted that SSA should take a leadership role in the initiation of major policy changes rather than continue its pattern of reacting to short-term crises. The board also found that SSA has had an overly cautious attitude toward initiating the analysis of controversial policy issues. Their report concluded that improving program leadership would require SSA to revise its long-standing tendency to focus on operational issues and paying benefits to recipients at the expense of much needed program policy and research activities. As discussed in chapter 1, between 1984 and 1991, legislative changes and a major court decision greatly expanded the SSI eligibility criteria for children. Rapid growth in the number of children receiving SSI disability payments, questions about SSA’s ability to adjudicate claims consistently, and allegations that some parents were coaching their children to fake mental impairments to qualify for benefits, elevated public and congressional concern that the program was being abused. In 1996, the Congress passed welfare reform legislation, which tightened SSI eligibility for children and restricted eligibility to those with the most severe impairments. In this example, SSA possessed several years of program data documenting explosive growth in childhood disability caseloads, changes in the types of impairments qualifying for benefits, and the impacts of legislative and court-mandated program changes. However, the agency failed to take a leadership role in developing and communicating this information to policymakers. Nor did SSA develop formal proposals for addressing identified weaknesses in the childhood eligibility criteria, despite the fact that SSA had information that eligibility guidelines—in particular, the IFA—for determining the severity of childhood mental and behavioral impairments, were difficult to interpret, unclear, and too subjective. At a much earlier time, this information could have been shared with the Congress for its consideration in reassessing whether SSI was meeting the needs of the most severely disabled children. SSA’s reluctance to take a more proactive policy development role was also evident in regard to the recent debate surrounding SSI eligibility for drug addicts and alcoholics (DA&A). In prior work, we reported that DA&A caseloads had increased more than 150 percent between 1989 and 1994. We also raised serious questions about SSA’s payment controls and its ability to prevent recipients from purchasing drugs and alcohol with SSI benefit payments. Despite congressional concern and growing media criticism surrounding this issue, SSA did not take aggressive action to revise operational polices that were subject to abuse by recipients, or suggest legislative options for change. SSA also failed to develop adequate data on the reasons for growth in the DA&A caseloads, the number of substance abusers actually in treatment, and the percentage of individuals who had left the rolls as a result of treatment and rehabilitation. More importantly, however, SSA did not adequately share the information it did have with the Congress to assist it in addressing identified problems. 
Thus, program abuses continued longer than they should have, and the Congress ultimately acted on its own to tighten the eligibility criteria in this area.
Our work has also shown that SSA has not provided adequate leadership in regard to helping recipients return to work and promoting economic independence. As a result, few recipients leave the SSI rolls. In several reports and testimonies issued over the last several years, we have faulted SSA's administration of SSI work incentives and documented the need for a comprehensive return-to-work strategy that includes earlier intervention, return-to-work assistance, and changes in the structure of cash and health benefits to encourage work. We have also documented SSA's reluctance to play a leadership role in devising a return-to-work plan. SSA management told us that it is only one player among many in the complex VR process, and that it does not have the ability to develop a comprehensive strategy on its own. However, in its strategic plan, SSA has now pledged to pursue the objective of helping people return to work. As a first step, SSA has developed a proposal, currently under consideration by the Congress, that would expand recipients' choices of VR providers. In this instance, recipients would receive a "ticket" (similar to a voucher), which they could use to obtain services from public or private VR providers of their choice. Although all pertinent players should be involved in formulating a comprehensive VR strategy, we continue to believe SSA has the fiduciary responsibility and is the appropriate agency to take the lead in ensuring that returning to work receives much greater emphasis.
A final example of SSA's reluctance to address important program policy issues involves the policies and procedures SSA uses to determine recipient living arrangements and in-kind support and maintenance. As discussed previously, when determining living arrangements, claims processors are required to apply a complex set of policies designed to document an individual's living situation and any additional support they may be receiving from others. These numerous rules and policies have made living arrangement determinations one of the most complex and error-prone aspects of the SSI program, and a major source of SSI overpayments. During our review, staff and managers told us that living arrangement policies needed reassessment and change. SSA quality reviewers have also consistently identified living arrangement calculations as a major source of benefit payment errors. During our review, we identified several internal and external studies of SSI living arrangement issues conducted over many years. Some of these studies recommended ways to simplify the process by eliminating many complex calculations and thereby making it less susceptible to manipulation by recipients. Others contained recommendations for making the SSI program less costly to taxpayers by requiring that benefit calculations be subject to maximum family caps or economies of scale or both when two or more recipients reside in the same household. In 1989, SSA's OIG reported that a more simplified process that applied an economies-of-scale rationale to all SSI recipients living with another person would result in fewer decision errors and reduce annual overpayments by almost $80 million. The OIG also concluded that such a change would require legislative action.
Despite these studies and the potential cost savings associated with addressing this issue, we could find no evidence that SSA has ever acted on the recommendations or submitted proposals for changing laws governing living arrangement policies.
Our work has shown that SSI program direction has suffered as a result of SSA's failure to develop program-specific goals, priorities, and associated plans for addressing program weaknesses. In this report, we have discussed a number of long-standing problems that the SSI program continues to experience. The continuation of these problems demonstrates SSA's inability to focus agency attention on its most significant program challenges and ensure that corrective actions are carried out and sustained over time. To a significant degree, this may be due to SSA's strategic planning efforts, which generally involve agencywide goals and concerns, with no programmatic focus. Thus, SSA still lacks a plan that lays out the SSI program areas most in need of attention in order to ensure effective service to the public and control program expenditures, both of which are critical to the Congress, oversight entities, and SSA's employees.
Typically, organizations rely on their planning processes to document long- and short-term programmatic priorities. The planning process also facilitates organizational agreement as to where efforts will be focused, what resources (both personnel and budgetary) will be devoted to initiatives, what the time frames for action will be, and how managers will be held accountable for meeting the stated objectives of each effort. If the planning process fails to provide adequate management direction for a program, the organization tends to take ad hoc measures to address program problems when they become critical. In 1987, we noted that, because of SSA's passivity in the area of program planning, it (1) lacked a top management focus on surfacing important program and operational issues, (2) was largely reactive to external pressure from the Congress and the courts to improve its programs, and (3) often addressed program challenges in an uncoordinated or inefficient manner.
SSA developed its first agencywide strategic plan in 1988 and then significantly revised it in 1991. As required by the Results Act, SSA developed and submitted its current strategic plan in 1997. This plan outlines SSA's strategic goals and objectives for the next 5 years. As also required by the Results Act, SSA recently published its fiscal year 1999 performance plan. This plan provides more detailed information on how SSA intends to achieve its goals and the measures it will use to hold itself accountable over the next year. Together, these two documents chart SSA's future course.
In reviewing SSA's strategic and performance plans, we found that the manner in which SSA has framed them may undermine SSA's ability to address its most significant program vulnerabilities. In particular, SSA's plans still do not adequately address the specific needs and problems of the SSI program or the unique characteristics of its recipient population. Instead, SSA's approach to planning has remained at the agencywide level, resulting in general goals and objectives for SSA's three major programs. Although macro-level goals and objectives are essential to SSA's operations, the absence of an SSI-specific strategy and the fact that few goals, initiatives, and performance measures are targeted to the program have impeded the establishment of clear program-specific priorities.
For example, we designated SSI a high-risk program primarily because of the magnitude of SSI overpayments. Although SSA notes that it faces considerable SSI program challenges, its current strategic plan and annual performance plan contain few specifics as to how SSA intends to reduce the more than $1 billion in overpayments the program incurs annually. Furthermore, despite SSA's acknowledgement that SSI overpayments are difficult to recover and are becoming an increasingly greater portion of outstanding debt owed to the agency, SSA's plans do not include future SSI overpayment recovery targets or other measures to gauge whether debt collection rates are increasing or decreasing for this program. Instead, SSA's current plans call for increasing debt collection agencywide by a total of 7 percent annually through 2002. This aggregate measure, however, could mask a worsening in future SSI debt collection levels if declines were offset by slightly increased debt collections in the much larger OASI program. Consequently, SSA could meet its goal of increasing debt collection agencywide without establishing new initiatives to address SSI debt collection or actually recovering more SSI overpayments.
SSA has also acknowledged that the SSI program is highly susceptible to fraud and abuse. However, SSA's plans contain no measures or goals specifically targeted to SSI fraud prevention and detection, such as the number of SSI fraud referrals received, cases developed, convictions obtained, or penalties levied. Without these elements, it may be difficult for SSA to determine the true extent of fraud and abuse in the SSI program and the impact its new fraud prevention initiatives will ultimately have on the program. Although agencies may appropriately choose to aggregate such data in order to monitor agencywide progress, long-standing weaknesses in the SSI program argue for more closely monitoring debt collection and anti-fraud activities on a program-specific basis.
Without a comprehensive strategy or plan for addressing specific SSI program problems, it is uncertain whether SSA will focus adequately on those areas that put SSI at the greatest risk or whether corrective actions will be sustained over time. In its new annual performance plan, SSA has committed to developing a comprehensive SSI action plan in fiscal year 1998. However, such a plan has not yet been completed, so it is still unclear whether SSA will focus adequately on its most significant SSI challenges. In many areas, substantive program improvement may depend on the degree to which SSA succeeds in separately identifying SSI program needs, goals, and performance measures from those of its other programs, and targeting its efforts to the specific long-standing SSI problems discussed in this report.
After more than 20 years of operation, the SSI program still faces significant problems. To a large extent, these long-standing problems and SSA's inability to address them are attributable to an ingrained organizational culture that has historically placed a greater value on quickly processing and paying SSI claims than on controlling program costs, and a management approach that has been reluctant to address SSI program problems requiring long-term solutions and/or legislative change. Together, these two underlying themes have allowed long-standing SSI program problems to continue and have contributed to the growth in SSI overpayments and outstanding debt.
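The masking effect of an agencywide debt collection target, discussed above in connection with SSA's 7-percent goal, can be illustrated with a simple calculation. The 7-percent figure comes from SSA's plans; the dollar amounts below are hypothetical and are used only to show that the aggregate target can be met even while SSI collections decline.

```python
# Minimal sketch: an agencywide collection target can be met even when SSI
# collections fall, because the much larger OASI program dominates the total.
# The 7-percent target is from SSA's plans; the dollar figures are hypothetical.

baseline = {"OASI": 900.0, "SSI": 100.0}    # collections in a base year ($ millions)
next_year = {"OASI": 980.0, "SSI": 92.0}    # OASI up about 8.9 percent, SSI down 8 percent

agencywide_growth = (sum(next_year.values()) - sum(baseline.values())) / sum(baseline.values())
ssi_change = (next_year["SSI"] - baseline["SSI"]) / baseline["SSI"]

print(f"Agencywide collections growth: {agencywide_growth:.1%}")  # 7.2% -- aggregate goal met
print(f"SSI collections change:        {ssi_change:.1%}")         # -8.0% -- masked by the aggregate
```

A program-specific SSI recovery target would make such a decline visible rather than allowing it to be offset within the agencywide total.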
As noted in chapters 2 and 3, SSA has not always struck an adequate balance between meeting the needs of program recipients and maintaining fiscal accountability for its programs. Often SSA's actions to address SSI program weaknesses have been ad hoc and crisis-driven, rather than part of a comprehensive strategy for improving program performance in the long term. Thus, for many years, billions of program dollars have been erroneously paid to ineligible individuals, and SSA has not always dealt proactively with its most pressing program problems. In its most recent strategic plan, SSA acknowledges the role of management leadership in shaping SSA's programs and the need to rebalance its program priorities in a way that improves accountability. Significantly revising SSA's underlying organizational culture and management approach will likely take many years of concerted effort at the highest levels of the agency. However, we believe such a change is important to restoring public confidence in a program that provides critical assistance to so many needy, aged, and disabled recipients.
More specifically, to address the issue of SSA's underlying culture and its effect on the financial health of the SSI program, sustained emphasis should be placed on reassessing SSA's traditional program priorities and better controlling program expenditures. This will require SSA to revise and strengthen its approach to verifying SSI eligibility, deterring and recovering SSI overpayments, and combatting program fraud and abuse. At present, many of the difficulties experienced by the SSI program are the result of more than 20 years of inattention to payment controls and SSA management's failure to make them a significant workload priority. As a result, financial verification and program integrity issues receive inadequate emphasis, and continued abuse of the program by recipients often goes unchallenged by field staff. Significant steps in addressing SSA's prevailing culture are management's acknowledgement in its strategic plan that a rebalancing of SSI program priorities is overdue and its recent agency-sponsored proposals to address SSI program integrity issues. However, SSA's management must be willing to direct change by providing sustained programmatic leadership, enhancing its commitment to the verification aspects of claims processing, implementing more stringent payment controls and debt collection tools, holding staff and managers accountable for protecting program funds, and finding better ways to reward those who do so. If successful, SSA's actions should serve to reduce SSI overpayments, improve the financial integrity of the program, and ultimately reshape SSA's prevailing culture and value system.
Regarding SSA's management approach, our work shows that SSA needs to demonstrate a greater willingness to identify and actively address emerging issues, and to provide a longer-term vision for the SSI program through its policy development and strategic planning activities. As the acknowledged expert on the SSI program, SSA possesses myriad performance data and more than two decades of experience serving this often changing population. Thus, the agency is in a unique position to lead and inform public debate on the range of issues affecting program performance and to establish an agencywide program vision. SSA is also in the best position to offer long-term policy solutions for its most significant management and operational challenges before they reach crisis levels.
However, our work suggests that SSA has not yet maximized its research and policy development role, nor has it developed adequate SSI program plans to serve as a blueprint for managing the program more strategically, establishing long-term program priorities, and defining specific program goals. In implementing the Results Act, SSA recently committed to developing an SSI action plan in fiscal year 1998. To be effective, this plan should include a carefully designed set of initiatives aimed at addressing the long-standing problems affecting SSI program performance as well as specific measures to evaluate progress and hold the agency accountable. Without such a plan, SSA risks continuing the policies and procedures that have allowed SSI overpayments to grow and perpetuating its often piecemeal approach to addressing program problems. It also may forgo a valuable opportunity to communicate to its employees, the Congress, and other oversight entities its commitment to operating a more efficient and fiscally responsible program.
Revising SSA's organizational culture will likely take several years of sustained effort at the highest levels of the agency. To facilitate such a change, we recommend that the Commissioner of Social Security take the following actions:
Enhance SSA's ability to verify applicant- and recipient-reported eligibility information and deter overpayments by accelerating efforts to identify more timely and complete sources for verifying SSI financial eligibility information.
Sustain efforts to obtain and implement additional SSI overpayment deterrence and debt collection tools commonly available to other means-tested programs. These include using credit bureau reporting, collection agencies, intercepts of other state and federal benefit payments, and interest levies to recover more SSI debt.
For recipients who chronically and willfully abuse SSI reporting requirements, seek legislative authority to withhold higher amounts than the current 10-percent maximum.
Reassess current policies for imposing penalties on recipients who do not report important eligibility information. This may include examining whether current penalty usage is sufficient to deter recipient nonreporting and removing any external or agency-created obstacles to using penalties.
Reevaluate SSA's field office work-credit and incentive structure at all levels of the agency and make appropriate revisions to encourage better verification of recipient information and greater staff attention to fraud prevention and detection. For improved accountability, line staff and middle management expectations, as well as senior executive contracts, should include specific requirements and performance measures in this area.
To facilitate a change in SSA's management approach and improve SSI program direction, we recommend that the Commissioner of Social Security take the following actions:
Better utilize SSA's policy development component to address SSI program policies that, for many years, have placed the program at risk of fraud, waste, and mismanagement. This would include, but not be limited to, the development and advancement of legislative proposals aimed at simplifying complex SSI living arrangement and in-kind support and maintenance policies and continuing SSA's sponsorship of legislation restricting the transfer of valuable assets and resources to qualify for SSI benefits.
Move forward in developing an SSI-focused strategy or plan with clearly defined priorities, goals, and performance measures to gauge SSA’s progress in addressing its most significant SSI program challenges. This document should be consistent with the Results Act and include specific initiatives, goals, and performance measures aimed at addressing long-standing SSI program problems and facilitating a change in SSA’s organizational culture and management approach to the SSI program. In providing comments on this report, SSA agreed that the SSI program faces significant challenges. However, the agency disagreed with our conclusion that it has historically placed a greater emphasis on processing and paying SSI claims than on controlling program expenditures. SSA was also concerned that the draft report did not adequately acknowledge a number of initiatives it has undertaken to improve the financial integrity of the SSI program. Accordingly, SSA’s comments include a discussion of several key initiatives. Finally, SSA either fully or partially concurred with five of our seven recommendations. SSA did not agree with our recommendation that it should seek legislative authority to withhold higher benefit amounts than the current 10-percent limit from individuals who chronically and willfully abuse reporting requirements. SSA also disagreed with our recommendation that it reassess current penalty provisions for recipient nonreporting and remove any barriers to their use. According to SSA, its newly submitted administrative proposal requesting authority to impose a period of ineligibility for individuals who provide false information to the agency is a more effective approach than reassessing current penalty provisions. SSA agreed that its current field office work-credit and incentive structure should be reassessed to better ensure that payment accuracy and fraud prevention receive adequate staff attention. However, the agency was concerned that incorporating specific requirements and performance measures for collections and fraud prevention into managers’ performance plans could be misperceived both within and outside the organization. In regard to SSA’s disagreement with our conclusion that it has historically placed inadequate emphasis on controlling program expenditures, we believe our audit work was sufficient to reach such a conclusion. Our review involved an extensive analysis of 200 studies of the SSI program dating back to its inception, more than 100 interviews with staff and managers at all levels of the agency, and an assessment of performance data encompassing nearly a decade of program operations. Throughout this report, we have also provided numerous examples drawn from this evidence to demonstrate how SSA’s focus on quickly processing and paying SSI claims has allowed long-standing problems to continue and contributed to a program environment in which program policies and internal controls have not adequately protected taxpayer dollars from being overspent or abused. In interviews with us and in a recent testimony, SSA’s Principal Deputy Commissioner also acknowledged that, because of the rapidly rising workloads of prior years, SSA made a conscious decision to prioritize expedited claims processing over instituting additional steps to better ensure payment accuracy and verify benefit eligibility. Thus, we believe there is ample evidence to support our conclusion that SSI claims processing has historically received greater management emphasis. 
However, this report also acknowledges the recent steps SSA has begun to take to address its most significant program challenges and, in the future, to strike a better balance between meeting the needs of SSI recipients and protecting the financial integrity of the program. We also disagree with SSA's concern that we have not adequately recognized its initiatives to address long-standing SSI program vulnerabilities. Although our primary audit work was conducted through February 1998, we frequently updated program performance data and the status of SSA's initiatives as information was provided to us by the agency. To the extent that SSA's initiatives were either in the planning stages, partially initiated, or fully operational, we have fairly characterized them in this report. We recognize that since the SSI program was designated a high-risk area by GAO, SSA has begun numerous initiatives to address the problems discussed here. We will continue to monitor SSA's progress in this area and relevant program performance data to determine whether its efforts to address identified SSI program vulnerabilities are successful.
In regard to SSA's response to our specific recommendations, we continue to believe SSA should seek legislative authority to recover larger amounts than the current 10-percent limit from overpaid recipients who chronically fail to report important eligibility information. We believe that providing field staff with the discretion to recover larger amounts from overpaid recipients who regularly fail to report information relevant to their disability or financial status will provide SSA with an additional deterrent against future instances of nonreporting. Statistics also show that SSA collects only about 15 percent of overpaid benefits. For those recipients who leave the rolls because they become employed or die, recovery becomes even less likely. SSA's 1997 write-off of $562 million in outstanding debt provides a clear example of how overpayments that are carried on SSA's books for many years become extremely difficult to recover. Increasing the 10-percent limit should improve SSA's debt collection efforts by allowing it to more quickly recover a greater portion of overpaid benefits from individuals who abuse program requirements before leaving the SSI rolls. However, it should be noted that our recommendation is designed to address chronic nonreporting by recipients. For the majority of SSI recipients, we continue to believe that field staff should have the discretion to calculate repayment amounts for overpayments on the basis of an individual's ability to pay, rather than on a specified percentage of his or her monthly benefit payment or a specified dollar amount as determined by law.
We have similar concerns about SSA's statement that a review of current overpayment penalty policies is unnecessary, despite the infrequent use of penalties and field staff complaints that the process is administratively burdensome. By failing to act on this issue, SSA is forgoing a valuable opportunity both to demonstrate its commitment to deterring future instances of recipient nonreporting and to address agency policies that may be ineffective or difficult for staff to implement. In its comments, SSA contends that its current legislative proposal seeking authority to suspend the benefits of individuals who knowingly fail to report important eligibility information renders our recommendation to reassess its penalty process unnecessary.
While we do not dispute the value of SSA’s proposal, it is unclear whether such authority will ultimately be granted, and we continue to believe that improving SSA’s existing penalty process to allow staff to quickly sanction individuals who regularly fail to comply with program reporting requirements will provide SSA with a deterrent against future overpayments. In short, SSA’s overpayment penalty process was intended to be an important internal control mechanism that should be fully utilized by the agency. If SSA’s legislative proposal is enacted into law, field staff will have another valuable tool, in addition to penalties, to control SSI payments. Finally, we believe SSA’s decision to reevaluate its current field office work-credit and incentive system represents a positive step toward rebalancing SSI program priorities. However, we do not agree with SSA’s objection to developing specific performance measures to hold managers accountable for thoroughly verifying recipient information and combatting program fraud and abuse. If properly designed and managed, these measures would provide much-needed incentives to encourage staff to devote more time to program integrity issues while servicing their daily workloads. Such performance measures would also further demonstrate to field staff SSA management’s commitment to protecting SSI benefits from being overpaid. The full text of SSA’s comments and our response are included in appendix II. | GAO provided information on the management problems associated with the Social Security Administration's (SSA) Supplemental Security Income (SSI) Program. GAO noted that: (1) to a great extent, SSA's inability to address its most significant long-standing SSI problems is attributable to two underlying causes: (a) an organizational culture or value system that places a greater priority on processing and paying claims than on controlling program expenditures; and (b) a management approach characterized by SSA's reluctance to fulfill its policy development and planning role in advance of major program crises; (2) SSA's organizational culture has traditionally valued quickly processing and paying SSI benefit claims more highly than controlling program expenditures by ensuring that only eligible individuals receive benefits; (3) other important financial controls such as aggressively pursuing the recovery of overpaid funds and combatting SSI fraud have also often received inadequate attention; (4) SSI problem resolution and program direction have also been hindered by SSA's hesitance to take a leadership role in SSI research and policy development, and its tendency to react to resulting crises through a series of ad hoc initiatives; (5) SSA's management approach was most evident regarding its reluctance to play a leadership role in recent policy debates surrounding SSI eligibility for children and substance abusers, and its failure to devise a comprehensive strategy to help SSI recipients return to work; (6) program direction has been further impaired by SSA's reluctance to develop agencywide plans that adequately focus on the specific characteristics and needs of the SSI program and its recipients; (7) SSA's current plans do not adequately communicate SSI priorities, goals, and objectives to staff; (8) reversing how the SSI program has traditionally operated will require sustained and expanded attention to developing and promoting tighter payment controls, increasing SSA's role in SSI research and policy formulation, and a willingness to define a long-term 
vision and strategy for improving program performance; (9) recently, SSA has initiated several measures aimed at improving the financial integrity of the SSI program; (10) as required by the Government Performance and Results Act of 1993, SSA also intends to develop a comprehensive SSI Action Plan in fiscal year 1998, which will serve as a blueprint for long-term program operations; and (11) however, such a plan has not yet been developed, and decisive action is needed to ensure that SSA will focus on those program areas that pose the greatest management challenges and that corrective actions will be implemented and sustained over time. |
The United States has assisted the Mexican government in its counternarcotics efforts since 1973, providing about $350 million in aid. Since the late 1980s, U.S. assistance has centered on developing and supporting Mexican law enforcement efforts to stop the flow of cocaine from Colombia, the world's largest supplier, into Mexico and onward to the United States. In January 1993, the government of Mexico initiated a new drug policy under which it declined U.S. counternarcotics assistance and assumed responsibility for funding its own counternarcotics efforts. This policy remained in effect until 1995 when, according to the State Department, economic conditions and the growing drug-trafficking threat prompted the Mexican government to again begin accepting U.S. counternarcotics assistance for law enforcement organizations.
Among other things, the Foreign Assistance Act of 1961, as amended, requires the President to certify annually that major drug-producing and -transit countries are fully cooperating with the United States in their counternarcotics efforts. As part of this process, the United States has established specific objectives for evaluating the performance of these countries. In 1997, the United States set the following objectives for evaluating Mexico's counternarcotics cooperation as part of the 1998 certification process: (1) reducing the flow of drugs into the United States from Mexico, (2) disrupting and dismantling narco-trafficking organizations, (3) bringing fugitives to justice, (4) making progress in criminal justice and anticorruption reform, (5) improving money laundering and chemical diversion control, and (6) continuing improvement in cooperating with the United States. In February 1998, the President certified Mexico as fully cooperating with the United States.
Since our 1996 report, Mexico has undertaken actions intended to enhance its counternarcotics efforts and improve law enforcement and other capabilities. The results of these actions are yet to be realized because (1) many of them are in the early stages of implementation and (2) some are limited in scope. According to U.S. and Mexican officials, it may take several years or more before the impact of these actions can be determined. Some of the actions include (1) increasing counternarcotics cooperation with the United States; (2) initiating efforts to extradite Mexican criminals to the United States; (3) passing an organized crime law that enhanced the government's authority against money laundering and illegal use and diversion of precursor and essential chemicals; and (4) implementing measures aimed at reducing corruption, such as increasing the role of Mexico's military forces in law enforcement activities.
With respect to U.S.-Mexico counternarcotics cooperation, since we reported on these matters in 1996, additional activities have taken place. For example, the High-Level Contact Group on Drug Control, comprised of senior officials from both governments responsible for drug control, has met several times. Results of these meetings include the following: A U.S.-Mexico Binational Drug Threat Assessment was issued in May 1997, which addressed illegal drug demand and production, drug trafficking, money laundering, and other drug-related issues.
A joint U.S.-Mexico Declaration was issued in May 1997 that includes pledges from both governments to work toward reducing illegal drug demand, production, and distribution; improving interdiction capacity; and controlling essential and precursor chemicals, among other issues. On February 6, 1998, a joint U.S.-Mexico binational drug strategy was issued. Mexican executive and legislative actions include instituting extradition efforts, passing various laws to address illegal drug-related activities, and passing several anticorruption measures. The United States and Mexico have had a mutual extradition treaty since 1980. Although no Mexican national has ever been surrendered to the United States on drug-related charges, since 1996 Mexico has approved the extradition of 4 of 27 Mexican nationals charged with drug-related offenses. Two are currently serving criminal sentences in Mexico, and two are appealing their convictions in Mexico. The remaining drug-related extradition requests include 5 persons currently under prosecution in Mexico and 14 persons still at large. It is not clear whether any Mexican national will be surrendered on such charges before the end of 1998. Another example of increased cooperation is the November 1997 signing of a joint United States and Mexico “temporary extradition protocol.” This protocol allows suspected criminals who are charged in both countries to be temporarily surrendered for trial while evidence is current and witnesses are available. The protocol is not yet in effect because it requires legislative approval in the United States and Mexico, and it has not been submitted to either body. In November 1996, Mexico passed an organized crime law that provides authority for Mexican law enforcement organizations to employ modern techniques to combat crime. These include authority to use plea bargaining and confidential informants, establish a witness protection program, and conduct controlled deliveries and court-authorized wiretaps. The law also has provisions for asset seizures and forfeitures. U.S. embassy officials stated that the passage of the organized crime law represents a major advancement in Mexico’s law enforcement capabilities. According to U.S. and Mexican officials, the impact of the organized crime law is not likely to be fully evident for some time. For example, Mexican and U.S. officials told us that the process of conducting investigations is inherently lengthy and that the capabilities of many Mexican personnel who are implementing and enforcing the law are currently inadequate. Mexican agencies are investigating a number of drug-related cases. U.S. embassy officials stated that, although some guidelines and policies have been established, additional ones still need to be developed, including the use of wiretaps and the witness protection program. While this law provides the law enforcement community with the necessary tools to fight organized crime, including drug trafficking, ONDCP reported in September 1997 that the law still lacks some important elements needed to meet the 1988 United Nations (U.N.) Vienna convention and other international agreements. For example, according to ONDCP, the law lacks provisions allowing the seizure of assets of a suspected criminal who has either died or fled Mexico. Furthermore, according to U.S. 
and Mexican officials, Mexico also needs to develop a cadre of competent and trustworthy judges and prosecutors that law enforcement organizations can rely on to effectively carry out the provisions of the organized crime law. Several U.S. agencies are assisting Mexico in this area.
In May 1996, money laundering was made a criminal offense, with penalties of up to 22 years in prison. The law requires banks and other financial institutions to report transactions over $10,000 and to obtain and retain customer account information. Under the prior law, money laundering was a tax offense, there were no reporting requirements, and violators were only subject to a fine. However, U.S. and Mexican officials are concerned that the new law does not cover so-called "structuring"—intentionally making transactions just below the $10,000 reporting threshold. In addition, there is no reporting requirement on currency leaving the country. Between May and December 1997, the Mexican government initiated 27 money laundering cases. To date, one case has been prosecuted, and the remaining 26 cases are still under investigation. In the one case that was prosecuted, the charges were dismissed because a federal judge ruled that no link could be established between an illegal activity and the money. The Mexican government has appealed the judge's decision.
In May 1996, trafficking in drug precursor and essential chemicals was made a criminal offense. Although some chemicals that the United Nations recommends be controlled were not included in the law, Mexico passed additional legislation in December 1997 that included all chemicals, thus bringing Mexico into full compliance with U.N. and other international agreements. In addition, Mexico has taken further action to control chemicals by limiting the legal importation of precursor and essential chemicals to eight ports of entry and by imposing regulatory controls over the machinery used to manufacture drug tablets or capsules. The impact of the new chemical control law is not yet evident. Currently, the development of an administrative infrastructure for enforcing it is under way. Various U.S. agencies, including the Departments of Justice and State, have provided technical assistance and training to help Mexico carry out the law.
It is well established, and the President of Mexico acknowledges, that narcotics-related corruption is pervasive and entrenched within the criminal justice system, and he has made rooting it out a national priority. Beginning in 1995, the President of Mexico expanded the role of the Mexican military in counternarcotics activities. The Mexican military, in addition to eradicating marijuana and opium poppy, has also taken over some law enforcement functions. For example, airmobile special forces units have been used to search for drug kingpins and detain captured drug traffickers until they can be handed over to civilian law enforcement agencies. In September 1996, the President of Mexico publicly acknowledged that corruption is deeply rooted in the nation's institutions and general social conduct. He added that the creation of a new culture of respect for law must start with public officials and affirmed his administration's intent to gradually eliminate official corruption. To do so, the President began to initiate law enforcement reforms. First, the primary Mexican government agency involved in counternarcotics-related activities has been reorganized.
In 1996 the Attorney General’s office, commonly called the PGR, began a reorganization connected to a long-term effort to clean up and professionalize federal law enforcement agencies. As part of this action, the State Department reported that over 1,250 officials were dismissed for incompetence and/or corruption. U.S. and Mexican officials stated that about 200 of these officials have subsequently been rehired by the PGR because Mexico’s labor laws prevented the PGR from removing some of these personnel. Further, in February 1997, the Mexican military arrested General Jesus Gutierrez Rebollo, the head of the National Institute for Combat Against Drugs—the Mexican equivalent of the Drug Enforcement Administration—for corruption. In April 1997, the Attorney General dissolved the Institute and dismissed a number of its employees. A new organization, known as the Special Prosecutor for Crimes Against Health, was established to replace the Institute. This organization includes two special units: The Organized Crime Unit, with an authorized strength of 300, was established under the organized crime law to conduct investigations and prosecutions aimed at criminal organizations, including drug trafficking activities. The Bilateral Task Forces, with an authorized strength of 70, are responsible for investigating and dismantling the most significant drug-trafficking organizations along the U.S.-Mexican border. Finally, in 1997, the Attorney General instituted a screening process that is supposed to cover all PGR personnel including those who work for the special units. This process consists of personal background and financial checks, medical and psychological screening, urinalysis, and regular polygraph testing. However, U.S. embassy officials stated that the screening requirements do not apply to judges, most units of the military, and other key law enforcement organizations engaged in drug control activities. U.S. agencies are supporting this initiative by providing equipment, training, and technical assistance. Moreover, U.S. embassy personnel are concerned that Mexican personnel who failed the screening process are still working in the Special Prosecutor’s office and the special units. Although all of Mexico’s actions are positive steps to reducing drug-related activities, there are still many issues that need to be resolved. For example, U.S. and Mexican officials indicated that personnel shortages exist in the Special Prosecutor’s office and the special units; the special units face operational and support problems, including inadequate Mexican government funding for equipment, fuel, and salary supplements for personnel assigned to the units, and the lack of standard operating procedures; U.S. law enforcement agents assigned to the Bilateral Task Forces cannot carry arms in Mexico; and Mexico continues to have difficulty building competent law enforcement institutions because of low salaries and little job security. U.S.-provided assistance has enhanced the counternarcotics capabilities of Mexico’s military. However, the effectiveness and usefulness of some equipment provided or sold to Mexico is limited due to inadequate planning and coordination among U.S. agencies, particularly military agencies within DOD. In October 1995, the U.S. Secretary of Defense visited Mexico in an effort to strengthen military-to-military relationships between the two countries. As a result of this visit, the Mexican military agreed to accept U.S. counternarcotics assistance. 
Table 1 shows DOD’s counternarcotics assistance provided to the Mexican military during fiscal years 1996-97. All of the helicopters and the C-26 aircraft were delivered to the Mexican military during 1996 and 1997. According to DOD officials, Mexico has also received some logistics and training support; however, they could not provide us with the exact level of support given because this data was not readily available. DOD plans to provide about $13 million worth of counternarcotics assistance under section 1004 of the Defense Authorization Act of 1989 to Mexico’s military in fiscal year 1998. Furthermore, the Mexican military used its own funds to purchase two Knox-class frigates from the U.S. Navy through the Foreign Military Sales Program. These two frigates were valued at about $7 million and were delivered to Mexico in 1997. While some of the equipment has helped improve Mexico’s capabilities, some has been of limited usefulness. Additionally, inadequate logistics support to the Mexican military has hindered its efforts to reduce drug-related activities in Mexico. The following examples illustrate some of the problems. The U.S. embassy has reported that the UH-1H helicopters provided to Mexico to improve the interdiction capability of Mexican army units are of little utility above 5,000 feet, where significant drug-related activities, including opium poppy cultivation, are occurring. The average operational rates for the UH-1H helicopters have remained relatively low, averaging between 35 and 54 percent, because of inadequate logistics support such as delays in the delivery of spare parts. The four C-26 aircraft were provided to Mexico without the capability to perform the intended surveillance mission. U.S. embassy officials stated that the Mexican military has not decided how many of the aircraft will be modified to perform the surveillance mission, but modifying each aircraft selected for surveillance will cost at least $3 million. Regarding the two Knox-class frigates, when they were delivered in August 1997, the ships lacked the equipment needed to ensure the safety of the crew, thus rendering the ships inoperable. The U.S. Navy estimated that it will cost the Mexican Navy about $400,000 to procure this equipment and that it will be at least 2 years before the ships will be operational. Even though the U.S. Navy knew that the ships would not be operational when they were delivered, DOD began providing the Mexican Navy with about $1.3 million worth of training to 110 personnel related to the two Knox-class frigates. U.S. embassy officials stated that this training will be completed in March 1998. The Mexican Navy will reassign these personnel until the ships can be used. According to DOD officials, they approved the training because they were not informed by the U.S. Navy that the ships would not be operational. We believe that planning and coordination of U.S. counternarcotics assistance to Mexico could be improved. Thus, we believe that the Secretary of State, in close consultation with the Secretary of Defense and the National Security Council, should take steps to ensure that future assistance is, to the maximum extent possible, compatible with the priority requirements identified in U.S. counternarcotics programs and that adequate support resources are available to maximize the benefits of the assistance. Without measures of effectiveness, it is difficult for U.S. 
Without measures of effectiveness, it is difficult for U.S. decisionmakers to evaluate the progress that the United States and Mexico are making to reduce the flow of illegal drugs into the United States. We have previously noted the need for ONDCP to develop drug control plans that include performance measures to allow it to assess the effectiveness of antidrug programs. In February 1997, we recommended that ONDCP complete its long-term drug control plan, including quantifiable performance measures and multiyear funding needs linked to the goals and objectives of the international drug control strategy. Subsequently, in February 1998, ONDCP issued a national drug control strategy covering a 10-year period. In March 1998, ONDCP issued general performance measures, but they do not include targets and milestones for specific countries, such as Mexico. As I noted earlier, the United States and Mexico issued a joint U.S.-Mexico binational drug strategy in February 1998. Although the binational strategy is indicative of increased U.S.-Mexico cooperation, it does not contain critical performance measures and milestones for assessing performance. State Department officials stated that the bilateral process of establishing performance measures and milestones is incremental and will be addressed during 1998. ONDCP officials told us that they plan to issue specific targets and milestones for the binational strategy by the end of this year.
This concludes my prepared remarks. I would be happy to respond to any questions you may have.
Related GAO Products
Drug Control: Observations on Counternarcotics Activities in Mexico (GAO/T-NSIAD-96-239, Sept. 12, 1996).
Drug Control: Counternarcotics Efforts in Mexico (GAO/NSIAD-96-163, June 12, 1996).
Drug Control: Observations on Counternarcotics Efforts in Mexico (GAO/T-NSIAD-96-182, June 12, 1996).
Drug War: Observations on U.S. International Drug Control Efforts (GAO/T-NSIAD-95-194, Aug. 1, 1995).
Drug War: Observations on the U.S. International Drug Control Strategy (GAO/T-NSIAD-95-182, June 27, 1995).
Drug Control: Revised Drug Interdiction Approach Is Needed in Mexico (GAO/NSIAD-93-152, May 10, 1993).
Drug Control: U.S.-Mexico Opium Poppy and Marijuana Aerial Eradication Program (GAO/NSIAD-88-73, Jan. 11, 1988).
Gains Made in Controlling Illegal Drugs, Yet the Drug Trade Flourishes (GAO/GGD-80-8, Oct. 25, 1979).
Opium Eradication Efforts in Mexico: Cautious Optimism Advised (GAO/GGD-77-6, Feb. 18, 1977).
| Pursuant to a congressional request, GAO discussed its work on the counternarcotics efforts of the United States in Mexico, focusing on the: (1) nature of the drug threat from Mexico and results of efforts to address this threat; (2) planning and coordination of U.S. counternarcotics assistance to the Mexican military; and (3) need to establish performance measures to assess the effectiveness of U.S. and Mexican counternarcotics efforts. GAO noted that: (1) Mexico is the principle transit country for cocaine entering the United States and, despite U.S. and Mexican counternarcotics efforts, the flow of illegal drugs into the United States from Mexico has not significantly diminished; (2) no country poses a more immediate narcotics threat to the United States than Mexico, according to the Department of State; (3) the 2,000-mile U.S.-Mexican border and the daunting volume of legitimate cross-border traffic provide near-limitless opportunities for smuggling illicit drugs, weapons, and proceeds of crime, and for escape by fugitives; (4) Mexico, with U.S. assistance, has taken steps to improve its capacity to reduce the flow of illegal drugs into the United States; (5) among other things, the Mexican government has taken action that could potentially lead to the extradition of drug criminals to the United States and passed new laws on organized crime, money laundering, and chemical control; (6) it has also instituted reforms in law enforcement agencies and expanded the role of the military in counternarcotics activities to reduce corruption--the most significant impediment to successfully diminishing drug-related activities; (7) while Mexico's actions represent positive steps, it is too early to determine their impact, and challenges to their full implementation remain; (8) no Mexico national has actually been surrendered to the United States on drug charges, new laws are not fully implemented, and building competent judicial and law enforcement institutions continues to be a major challenge; (9) since fiscal year 1996, Department of Defense (DOD) has provided the Mexican military with $76 million worth of equipment, training, and spare parts; (10) the Mexican military has used this equipment to improve its counternarcotics efforts; (11) however, due, in part, to inadequate planning and coordination within DOD, the assistance provided has been of limited effectiveness and usefulness; (12) improved planning and coordination could improve Mexico's counternarcotics effectiveness; (13) although the Mexican government has agreed to a series of actions to improve its counternarcotics capacity, and the United States has begun to provide a larger level of assistance, at the present time there is no system in place to assess their effectiveness; and (14) even though the United States and Mexico have recently issued a binational drug control strategy, it does not include performance measures. |
To help federal agencies manage their respective travel programs and achieve travel cost savings, GSA issues and revises the FTR. According to the FTR website, GSA promulgates the FTR to: (1) interpret statutory and other policy requirements in a manner that balances the need to ensure that official travel is conducted responsibly with the need to minimize administrative costs, and (2) clearly communicate the resulting requirements to federal agencies and employees. Formal changes to the FTR are identified as amendments and published in the Federal Register in accordance with the rulemaking provision of the Administrative Procedure Act. GSA officials stated that, while the agency develops and promulgates the rules and amendments that comprise the FTR, it does not have enforcement authority for agencies' compliance with FTR requirements. In addition to FTR amendments, GSA also issues travel bulletins (nonbinding guidance) that GSA officials said can typically be issued within a shorter timeframe than final rules published in the Federal Register. According to GSA officials, the travel bulletins are generally issued to remind agencies of existing FTR requirements.
Administration actions have encouraged agencies to develop mechanisms by which federal agencies could reduce travel. For example, EO 13589, "Promoting Efficient Spending," called for agencies and their components to: 1) devise strategic alternatives to government travel, including local or technological alternatives, such as teleconferencing and video conferencing; 2) conduct business and host or sponsor conferences in a space controlled by the federal government, wherever practicable and cost-effective; and 3) designate a senior official responsible for developing and implementing policies and controls to ensure efficient spending on travel- and conference-related activities. Following the issuance of EO 13589, OMB issued a supporting memorandum on Promoting Efficient Spending to Support Agency Operations. To support the cost-saving goals of the Executive Order, the memorandum explained the role that travel plays in supporting agency missions and supporting local economies. At the same time, the memorandum required that each agency reduce its spending on travel costs and provided that specific travel policies be established or clarified to manage travel budgets more efficiently and to reduce reliance on travel. The reductions were time-limited through fiscal year 2016, but the memorandum explained that the intent was, among other things, to make the reductions in travel budgets sustainable.
In 2012, GSA formed the Government-wide Travel Advisory Committee (GTAC) to: 1) review existing travel policies, processes, and procedures; 2) ensure that the policies and procedures are accountable and transparent; and 3) help federal agencies achieve their missions effectively and efficiently at the lowest logical travel cost. In 2015, GTAC issued a report that provided advice and recommendations to GSA to, among other things, incorporate industry best practices. In 2015, GSA also established the Senior Travel Official Council to assist in the administration's efforts to promote efficient spending.
Data used by the agencies to prepare Travel Reporting Information Profile (TRIP) reports for GSA are maintained within each agency's travel system, which is part of the E-Gov Travel Service 2 (ETS2).
The TRIP report requires agencies to provide aggregate travel cost data (transportation, lodging, and meals and incidentals) for five travel categories: (1) employee emergency, travel related to an unexpected occurrence, event, injury, or illness that affects the employee personally or directly and requires immediate action or attention; (2) mission (operational), travel to a particular site to perform operational or managerial activities; (3) special agency mission, travel to carry out a special agency mission or perform a task outside the agency's normal course of day-to-day business activities that is unique or distinctive (these special missions are defined by the head of agency and are normally not programmed in the agency annual funding authorization); (4) nontraining conference, travel performed in connection with a prearranged meeting, retreat, convention, seminar, or symposium for consultation or exchange of information or discussion; and (5) training, travel in conjunction with educational activities to become proficient or qualified in one or more areas of responsibility. (A minimal illustrative sketch of aggregating costs by these categories appears at the end of this passage.) GSA encourages federal agencies to use ETS2 reporting capabilities as a means to track, monitor, and report on costs related to travel spending. ETS2 is a comprehensive travel services program that brings a reporting capability to the agencies' travel programs. In addition, GSA created ETS2 with the expectation that the services available under ETS2 would: 1) enable federal agencies to further consolidate travel services, platforms, and channels; 2) improve the leverage of government travel spending; 3) increase transparency for improved accountability; and 4) reduce waste. According to GSA officials, ETS2 provides the travel reporting capability that aligns with and supports OMB's Memorandum M-12-12. ETS2 can generate reports related to an agency's travel costs and other travel-related activities. For example, through ETS2 an agency can generate operational reports to monitor day-to-day travel services, as well as travel management and regulatory reports that can help an agency's managers make informed decisions. By 2013, GSA had awarded the development of ETS2 to two contractors, allowing agencies to design a travel cost data system that best meets their respective needs. Officials at each of the six selected agencies stated that, while their respective agencies pursued a wide range of cost-saving efforts to address the GSA cost-saving provisions, all of them had policies in place that addressed these provisions prior to GSA's issuance of either an amendment or a travel bulletin. Some of these cost-saving efforts involved updating internal policy statements, issuing internal guidance, configuring ETS2 to require justifications for making a policy exception, and providing in-person and web-based training. For example, Department of Defense (DOD) officials stated that for half of the travel bulletin provisions (10 of 20) DOD's related actions had been in effect since at least 1998. In one instance, DOD included a provision in the December 1998 Joint Travel Regulations (JTR) that limited reimbursement of employees buying rental car insurance. This included the collision damage waiver adjustment and theft protection. This aligned with the 2014 GSA Bulletin FTR 14-05's guidance recommending that travelers decline additional insurance when renting vehicles.
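The TRIP reporting structure described at the start of this passage lends itself to a simple illustration. The sketch below is hypothetical (it is not GSA's or either ETS2 vendor's actual schema); the field names, records, and dollar amounts are invented, and it only shows the idea of rolling voucher-level costs up into the five TRIP categories.

```python
# Hypothetical sketch of TRIP-style aggregation; not an actual GSA or ETS2 schema.
from collections import defaultdict

TRIP_CATEGORIES = {
    "employee_emergency",
    "mission_operational",
    "special_agency_mission",
    "nontraining_conference",
    "training",
}
COST_TYPES = ("transportation", "lodging", "meals_incidentals")

def aggregate_trip_report(vouchers):
    """Sum transportation, lodging, and M&IE costs for each TRIP category."""
    totals = defaultdict(lambda: dict.fromkeys(COST_TYPES, 0.0))
    for voucher in vouchers:
        category = voucher["category"]
        if category not in TRIP_CATEGORIES:
            raise ValueError(f"unknown TRIP category: {category}")
        for cost_type in COST_TYPES:
            totals[category][cost_type] += voucher.get(cost_type, 0.0)
    return dict(totals)

# Invented example vouchers
vouchers = [
    {"category": "training", "transportation": 412.50, "lodging": 580.00, "meals_incidentals": 236.00},
    {"category": "mission_operational", "transportation": 289.00, "lodging": 430.00, "meals_incidentals": 180.50},
    {"category": "training", "transportation": 150.00, "lodging": 0.00, "meals_incidentals": 59.00},
]
print(aggregate_trip_report(vouchers))
```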
Like DOD, the Department of Agriculture (USDA) had a travel system restriction prompting travelers to provide a justification for not using a City Pair Program (CPP) fare, as well as confirmation that the alternative airfare is greater than the "least logical airfare plus 75.00 dollars." (A minimal illustrative sketch of this kind of fare check appears at the end of this passage.) In addition, USDA's internal travel policies state that each agency and staff office is expected to use the method of travel most advantageous to the government, including lower-cost airfares. According to officials at the six selected agencies, they were already taking action to address the issues raised by FTR amendments and GSA travel bulletins. Nevertheless, because the bulletins reinforced existing policy, officials at these agencies took further actions to highlight that policy, either by developing new travel policy or by issuing a memorandum reminding staff of existing agency travel policy. For example, according to officials at the Department of State (State), while the agency had codified the requirement for using contract air carriers in its own Foreign Affairs Manual (FAM) at least as early as 2005, the more recent GSA travel bulletin on contract and noncontract airfares influenced State to reinforce the policy by issuing departmental notices and cables to all diplomatic and consular posts outlining the parameters for the use of commercial airfares. Similarly, DOD officials said that the agency addressed two cost-saving provisions within the same GSA travel bulletin by taking a new action to augment an existing policy. One provision addressed the issue of reviewing internal policies to ensure that the agency's use of CPP contract and non-contract air carriers resulted in overall cost savings. The other provision addressed agencies' management of their internal procedures to assess risk associated with using non-contract fares, which officials said they addressed through the same cost-saving effort. DOD officials stated that they had already taken a related action prior to the issuance of the bulletin. However, the travel bulletin resulted in DOD updating the JTR by strengthening the language related to using restricted fares and requiring the use of a Decision Support Tool to assist in determining if a restricted fare may be advantageous to the government. In other cases, FTR amendments or GSA travel bulletins prompted the six agencies to take new actions. In these instances, these agencies took action as a direct result of recently issued GSA provisions for which they had not previously pursued a related cost-saving effort. This usually happened through the development of a new agency-specific travel policy or the issuance of a memorandum to staff reminding them of existing agency travel policy. For example, one FTR amendment addressing the use of rental cars resulted in the Department of Justice (DOJ) issuing a memorandum advising travelers that pre-paid fueling options for rental vehicles are not authorized under the amendment. This was a cost-saving effort that the agency had not pursued prior to the amendment.
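As referenced above, USDA's fare prompt can be illustrated with a minimal sketch. The $75.00 threshold and the City Pair Program terminology come from the description above; the function, its parameters, and the example fares are hypothetical and are not USDA's actual system logic.

```python
# Hypothetical sketch of the USDA-style booking prompt described above; not actual USDA logic.
def booking_prompts(selected_fare, lowest_logical_fare, is_cpp_fare, threshold=75.00):
    """Return the prompts a traveler would see for a given fare selection."""
    prompts = []
    if not is_cpp_fare:
        prompts.append("justify not using a City Pair Program (CPP) fare")
        if selected_fare > lowest_logical_fare + threshold:
            prompts.append("confirm the fare exceeds the least logical airfare plus $75.00")
    return prompts

# Invented example: a $600.00 non-contract fare against a $480.00 lowest logical fare
print(booking_prompts(600.00, 480.00, is_cpp_fare=False))
```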
Officials at five of the six selected agencies described a few cases where their respective agencies took no specific policy action, but either: 1) advised employees to follow the FTR; 2) asked individual components to create unique policies that ensured FTR compliance; or 3) provided approving officials with the discretion to oversee employees' compliance with the FTR as appropriate, and to determine whether or how to adopt promising practices from the GSA travel bulletins. For example, according to Department of Homeland Security (DHS) officials, employees were reminded to follow the FTR. However, if any of the agency's components needed to clarify or address parts of the FTR not specifically covered by the agency's Financial Management Policy Manual (FMPM), DHS empowered them to do so. For example, the FMPM does not have a specific policy prohibiting reimbursement for purchasing pre-paid fueling options for rental cars, as GSA requires. However, DHS officials stated that employees were asked to "exercise the same care in incurring expenses that a prudent person would exercise if using his or her personal funds while on personal business." According to officials, components and travelers were still responsible for adjusting their travel actions to remain compliant with the FTR even when DHS policy could not be promptly updated. According to DHS officials, "as questions are raised by the travelers, the policy staff at each component provides the guidance necessary to maintain FTR compliance." In another example, officials at State refrained from taking a policy action to address two provisions in GSA Bulletin FTR 13-03 because the department wanted to provide approving officials with some discretion over travel decisions on individual trips and vouchers, such as determining if a rental car is a better option than public transportation for travelers, and whether travelers should be required to share rental cars and taxis while on official travel in groups and when public transportation is not a better option. The cost of transportation is among the many factors approving officials weigh when making these decisions. Given the wide diversity across global locations in safe, secure, and available local transportation options, State officials told us that at that time they did not dictate a centralized policy on employee sharing of rental cars and taxis, in order to leave this discretion to the traveler's approving official. The use of public transportation rather than a rental car likewise remains at the approving official's discretion. State subsequently updated its FAM on December 7, 2015, with specific provisions for rental cars on official travel. Officials said this allows approving officials the discretion to apply the prudent traveler rule, which in application "should clearly require employees to share rental cars while on official travel in groups." We also found that the selected agencies initiated travel cost-saving efforts that were in addition to provisions recommended by GSA. For example, a September 2015 Defense Travel Management Office (DTMO) report—DOD Travel Reform—said that DTMO tracked numerous cost-saving travel reform initiatives for policy simplification that it pursued outside of the actions taken related to GSA's bulletins and FTR amendments.
These initiatives include: 1) standardization of reimbursement rates for privately-owned vehicles into a single rate; 2) creation of a standard travel rate to ensure that per diem is very limited for trips in which it takes a day to travel to a temporary duty (TDY) travel location; and 3) expansion of the definition of incidental expenses to include miscellaneous expenses. In another example, the Department of Veterans Affairs (VA) officials stated that in addition to the web-based training material providing for reduced per diem for long-term TDY, VA had additional requirements that were beyond the scope of GSA’s bulletins and FTR amendments and that targeted additional cost savings. These requirements stated that travelers must stay in weekly or monthly rentals during extended assignments whenever possible and reduce their meals, incidentals, and expenses when the traveler is able to obtain lodging or meals at lower costs. The Senior Travel Official Council (STOC) brings travel officials from all federal agencies together to share information and best practices to further cost-saving efforts. GSA established the STOC in 2015 to identify consistencies and best practices in the areas of travel policy, programs, and procurement. According to GSA officials, STOC is designed to help agencies make better use of their travel cost data to make informed decisions about internal policymaking and replicate actions taken by other agencies that implemented successful cost-saving policies and practices. According to STOC meeting minutes, the council has taken some steps toward information-sharing efforts that could benefit all federal agencies. For example, in 2015, the council initiated a pilot program involving eight agencies to share promising practices in five areas: online booking, airfare savings, hotel reservations, car rentals, and SmartPay usage. Based on the pilot, STOC will come up with policies and processes that other agencies can choose and implement. In a December 2015 STOC meeting, officials from DOJ and the National Science Foundation provided information on actions they took prior to the pilot, and how those actions resulted in cost savings at their agencies. DOJ noted that their cost savings programs had top-down agency support and agency officials implemented a policy change that required online booking and the lowest logical airfare. By implementing this policy, DOJ officials claimed a savings of more than $9.2 million when using lowest logical airfare (non- refundable tickets) and an online booking rate of 68 percent in fiscal year 2015. According to GSA officials, the STOC plans to encourage agencies to pursue these policies beyond the pilot program. However, according to GSA officials and STOC meeting minutes, STOC members have yet to take full advantage of the STOC to network and learn about other agency-initiated policies that could lead to potential cost savings. According to GSA officials, the STOC members had not yet formally shared much information about other promising practices for tracking and monitoring savings that could be replicated to benefit other federal agencies. However, these officials said that the STOC is still in the early stages, and opportunities for agencies to share information have been limited. Such practices could help agencies to develop and implement cost-saving efforts, and quantify those efforts when possible. 
Without using the STOC meetings to engage in information sharing about these practices, agencies have a limited ability to learn from and apply other agencies' methods for using ETS2 data reports to track and monitor the impact and effectiveness of their policies on travel spending reductions. Only four cost-saving efforts at two of the selected agencies—DOJ and DOD—could be quantified. DOJ officials were able to quantify their estimated cost savings for implementing three cost-saving measures: 1) a savings of more than $15 million due to the use of non-contract, non-refundable fares from fiscal year 2015 through the first quarter of fiscal year 2016, 2) an increase in the use of video conferencing that saved an estimated $16.3 million during fiscal year 2010, and 3) a requirement for the use of online booking for travel reservations whenever possible that saved nearly $3.4 million in transaction fee savings for fiscal year 2015. DOD officials also reported quantifiable cost savings related to a GSA provision that promoted a per diem reduction for travel over 30 days. DOD adopted this provision by mandating a reduction to 75 percent of the locality per diem rate for TDY at a single location lasting between 31 and 180 days, and to 55 percent for TDY of 181 days or more; DOD reported that this change resulted in savings of over $56 million between November 2014 and December 2015. (A minimal illustrative sketch of this reduction appears at the end of this passage.) Although most cost-saving efforts at the selected agencies were not quantifiable, agency officials described how cost savings were likely achieved and what data limitations existed. Officials said that attempting to quantify many of the cost-saving efforts would be very labor intensive and would require documenting the cost of decisions that are not made at the individual travel voucher level. For example, at VA, a travel policy stipulated that in selecting a particular local transportation method, the agency should consider, among other things, the accessibility and availability of public transportation at the TDY location. Therefore, VA officials said they did not approve car rentals in areas where public transportation is accessible and available, such as Washington, D.C. While VA officials stated that this approach likely resulted in cost savings by having travelers avoid expensive hotels and city parking fees, they could not quantify these cost savings. In other instances, the selected agencies could not provide evidence of quantifiable cost savings at the aggregate level because travel data were not available from certain components within their agencies. For example, many sub-agencies within DHS kept track of and reported some cost savings through the elimination of non-mission critical travel, and by maximizing the use of conference calls and web-based training and meetings. However, DHS did not track such savings at the department level and could not quantify them for us. According to DHS officials, they were unable to determine or quantify cost savings at an aggregate level for most measures because DHS components had a wide variety of ways to avoid costs that were most appropriate for their unique missions, and the agency did not have a tracking mechanism to identify costs avoided. In addition, State officials said that there may be some limitations in their ability to collect and report data since travel at State is decentralized with each office authorizing, conducting, and collecting its own travel data.
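As referenced above, the DOD per diem reduction for long-term TDY can be expressed as simple arithmetic. The 75 percent and 55 percent factors and the 31-to-180-day and 181-plus-day bands come from the report text; the assumption of 100 percent for stays of 30 days or fewer, the function name, and the example figures are illustrative only and do not capture the JTR's full rules (for example, separate lodging and M&IE treatment).

```python
# Illustrative flat-rate reduction for long-term TDY per diem, as characterized in this
# report; not the JTR's complete rules (lodging vs. M&IE distinctions are not modeled).
def reduced_per_diem_rate(locality_rate, tdy_days):
    """Daily per diem under the sketch: 100% up to 30 days, 75% for 31-180, 55% for 181+."""
    if tdy_days <= 30:
        factor = 1.00   # assumed full locality rate for short TDY (not stated in the report)
    elif tdy_days <= 180:
        factor = 0.75   # 31 to 180 days at a single location
    else:
        factor = 0.55   # 181 days or more
    return round(locality_rate * factor, 2)

# Invented example: a $150.00 locality rate for a 90-day TDY yields $112.50 per day
print(reduced_per_diem_rate(150.00, 90))
```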
Generally, it was not possible for most of the selected agencies to associate any one description of cost savings with a specific GSA provision. Officials at the agencies we interviewed told us that a wide range of factors influenced cost-saving efforts, including, but not limited to, GSA provisions contained within FTR amendments or GSA travel bulletins. For example, in addition to responding to GSA provisions, DHS officials credited the Secretary's initiative to cut costs and improve overall operational efficiency as a reason for taking action on some of the same issues later recommended by GSA. Officials also cited other government-wide factors, including requirements under Executive Order 13589 and OMB Memorandum M-12-12, that influenced cost-saving efforts at the same time agencies responded to GSA's cost-saving provisions. According to officials at five of the six selected agencies, a number of limitations in the travel data system designed by GSA and maintained by the agencies affected their ability to identify cost savings related to implementation of cost-saving provisions in FTR amendments and GSA travel bulletins. GSA officials stated that while most agencies have the capability via ETS2 to track, monitor, and report on cost savings, this reporting capability is not being leveraged consistently across the federal agencies to manage their travel costs. However, because agencies can customize ETS2 to fit their particular needs, ETS2 may still not provide for common reporting of travel information across federal agencies. Further, officials at GSA confirmed that while ETS2 offers better tracking and monitoring of travel costs compared to its predecessor, it still cannot provide for a central means of collecting and reporting data that would be reliable for the purposes of comparison across agencies. Access to quality travel data is essential to performing key travel management tasks. The 2015 GTAC final report stated that access to quality data requires maintaining data from multiple sources in an integrated framework (whether a system, database, or data management tool) that would allow federal agencies to compile and maintain enterprise-level travel data sufficient to support business decisions, respond to government-wide data calls, leverage sourcing strategies, and comply with the Government Performance and Results Act of 1993. However, GTAC found that the federal government still had not started maintaining data from multiple sources in a single, centralized, and integrated framework, whether a system, database, or data management tool. The report's findings supported the GSA officials' statements. According to GSA officials, there is a need for more standardization in travel management reports to give GSA the ability to compare travel cost data between agencies that use either of the two ETS2 vendors, and report government-wide trends. Agencies' abilities to customize their reporting options without also meeting standard reporting requirements hinder GSA's ability to establish a common metric for tracking and monitoring federal travel spending. Agencies are also unable to fully assess the travel costs incurred by their staff and thus unable to fully identify areas for potential cost savings. ETS2 allows agencies the option to select between two vendors who can assist them with the tracking and monitoring of travel cost data.
Agencies can customize and configure their travel data systems to meet their travel needs in alignment with their policies and business requirements. According to GSA officials, because agencies can customize and configure their travel data systems, and because of the lack of government-wide standards on cost-saving metrics, it is difficult for GSA to facilitate peer-to-peer comparisons across the federal agencies. For example, officials said that while both ETS2 vendors have a standard report to help agencies determine their hotel attachment rates within the system, one vendor's report collects information on hotels booked within the travel data system, and the other vendor's report collects information on hotels booked outside of the travel data system. As a result, it is difficult for agencies and GSA to determine if travelers at all agencies are taking advantage of the reduced rates available when booking their lodging along with transportation. (A minimal illustrative sketch of reconciling the two report formats appears at the end of this passage.) Additionally, one of the goals of ETS2 is to help agencies achieve greater data transparency. The required TRIP reports provide travel spending information that was previously not tracked in ETS. This includes breaking out the travel and lodging costs into the five different categories of federal travel. Because of a standardized template used to generate the TRIP reports and the increased capacity of ETS2, agencies are now able to report travel spending in more consistent formats. While this indicates the potential for a greater level of tracking and monitoring of travel spending by agencies, GSA officials believe that this may not be achieved unless agencies adopt common reporting practices. According to GSA officials, in late 2015, they began to plan a shared services model to help agencies better manage their travel programs. This model would allow agencies to share a wide range of travel services with each other to reduce both administrative costs and burden to the government, and enable data-driven decision making. According to GSA officials, based on the draft business plan, the shared services model would be able to offer government-wide data collection, benchmarking, and reporting standards that agency managers can access and use to inform decisions. According to officials, under the shared services model, while the agencies would still be the decision makers regarding agency-specific travel policies, GSA would be able to advise them on which system configuration would allow the agencies to obtain the lowest travel costs relative to mission objectives. The six selected agencies that accounted for more than three-quarters of federal travel dollars (Agriculture, Defense, Homeland Security, Justice, State, and Veterans Affairs) pursued a variety of efforts aimed at reducing travel costs that generally aligned with GSA's amendments to the FTR and travel bulletins. However, these agencies generally lacked data to track these efforts. GSA's efforts to track and monitor travel costs across federal agencies are similarly limited by a lack of standardized data as reported by individual agencies. More standardized data reporting could help GSA advise agencies on how to limit travel costs while still achieving their agencies' missions. GSA created the STOC to identify efficiencies and discuss practices for achieving travel cost savings. However, STOC members have not yet taken full advantage of this opportunity. As the STOC moves forward, opportunities exist to help agencies share information.
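As referenced above, the vendor-report difference can be bridged by normalizing both reports into a common record before computing a metric such as a hotel attachment rate. The sketch below is an assumption-laden illustration: the field names, the normalization functions, and the attachment-rate definition are hypothetical and are not GSA's or either ETS2 vendor's actual report schema.

```python
# Hypothetical normalization of two differently structured vendor reports into one common
# record; field names and the attachment-rate definition are invented, not real ETS2 schemas.
def normalize_vendor_a(row):
    # Vendor A (hypothetical) reports hotels booked *within* its travel system.
    return {"overnight_trips": row["overnight_trips"],
            "hotels_booked_in_system": row["in_system_hotel_bookings"]}

def normalize_vendor_b(row):
    # Vendor B (hypothetical) reports hotels booked *outside* its travel system.
    return {"overnight_trips": row["overnight_trips"],
            "hotels_booked_in_system": row["overnight_trips"] - row["out_of_system_hotel_bookings"]}

def hotel_attachment_rate(record):
    """Share of overnight trips with lodging booked through the travel system."""
    if record["overnight_trips"] == 0:
        return 0.0
    return record["hotels_booked_in_system"] / record["overnight_trips"]

# Invented agency-level rows, one from each vendor, now comparable on the same metric
print(hotel_attachment_rate(normalize_vendor_a({"overnight_trips": 1200, "in_system_hotel_bookings": 900})))
print(hotel_attachment_rate(normalize_vendor_b({"overnight_trips": 800, "out_of_system_hotel_bookings": 240})))
```

The point of the sketch is not the particular metric but that some common definition has to be imposed on both vendors' outputs before a cross-agency comparison is meaningful.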
Additional attention to these issues by the STOC can help agencies develop, implement, and share their travel cost-saving efforts. Such attention could in turn help the STOC promote the more efficient use of travel funds across the federal government without imposing additional requirements on agencies. The Administrator of General Services, in consultation with the STOC, should develop a travel data management approach, including common reporting formats that would provide GSA with more consistent travel cost data, allowing GSA to compare travel costs across federal agencies. GSA could also include in this data management approach the planned implementation of the shared services model that would allow agencies to share a wide range of travel services with each other. This process could reduce both administrative costs and burden to the government and enable data-driven decision making. The Administrator of GSA, as chair of the STOC, should work with the STOC to identify promising opportunities and implement leading practices to help agencies leverage their travel resources and implement travel cost-saving efforts. We provided a draft of this report to the Administrator of GSA and the Secretaries of Agriculture, Defense, Homeland Security, Justice, State, and Veterans Affairs for review and comment. GSA agreed with both recommendations, as discussed below. The Departments of Defense, Justice, State, and Veterans Affairs provided technical comments, which we incorporated as appropriate. The Departments of Agriculture and Homeland Security had no comments on our report. In written comments received on June 30, 2016, GSA staff agreed with the two recommendations in this report and agreed to take the following actions to address them. In response to the first recommendation, GSA officials stated that they would conduct a test program to determine the opportunities and barriers of creating a reliable standardized data repository containing government-wide travel spending data. To address the second recommendation, GSA officials agreed to establish an STOC working group to implement a process of documenting promising travel management and cost-saving practices, which could be used by Senior Travel Officials at their respective agencies. We are sending copies of this report to the appropriate congressional committees and the aforementioned agencies. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

The FTR amendments and GSA travel bulletins cited in this report, and the cost-saving provisions they contain, are listed below.

FTR amendment, 76 Fed. Reg. 63844 (Oct. 14, 2011) (codified at 41 C.F.R. § 301-11): Agency must not reimburse real estate expenses or the lodging portion of per diem for the purchase or sale of a personal residence or recreational vehicle at the temporary duty (TDY) travel location. Agency must not reimburse the lodging portion of per diem to travelers who lodge at their personal residences while on TDY.

FTR amendment, Removal of Conference Lodging Allowance Provisions, 78 Fed. Reg. 65210 (Oct. 31, 2013) (codified at 41 C.F.R. §§ 301-11 and 301-70): Agency must no longer use the conference lodging allowance reimbursement option for employees on TDY; this refers to the allowance for travelers to exceed the lodging rate by up to 25 percent. If per diem lodging rates are unavailable at the conference location, travelers should construct a cost comparison to decide whether to find lodging within per diem away from the conference location or to reimburse actual expenses for lodging at the conference location that does not have a per diem rate.

FTR amendment, 80 Fed. Reg. 27259 (May 13, 2015) (codified at 41 C.F.R. §§ 300-3, 301-10, and 301-70): Travelers must use the least expensive compact car available unless an exception is approved. Travelers will not be reimbursed for purchasing pre-paid fuel for rental cars and should refuel prior to returning the vehicle to the rental car company. Travelers will not be reimbursed for fees associated with rental car loyalty points or transfer of points charged by car companies.

GSA Bulletin FTR 13-03 (Dec. 21, 2012): Agency should justify that employee travel is necessary to accomplish the mission. Agency should consider technological alternatives to travel. Agency should consider all viable lowest-cost transportation options, such as selecting a non-contract airfare. Agency should have controls in place to collect all refunds for unused or partially used airline tickets. Agency should encourage employees to use public transportation as the first option for local transportation when on TDY. Agency should increase employee sharing of rental cars and taxis. Agency should encourage employees to evaluate all lodging options that are within per diem. Agency should evaluate reduced per diem for TDY assignments that last more than 30 days and a Temporary Change of Station for TDY assignments that last more than 180 days.

GSA Bulletin FTR 13-07 (June 4, 2013): Agency should review internal policies to ensure that use of City Pair Program contract and non-contract air carriers results in overall cost savings. Agency may authorize use of non-contract airfares in three scenarios, including when a non-contract fare, if used, would result in a lower total trip cost to the government. Agency must consider (1) all direct costs, including per diem and actual transportation cost, and (2) indirect costs, including overtime and lost work time, when authorizing a method of transportation. Agency must assess risk associated with using non-contract airfares, which includes ensuring that the traveler reasonably anticipates using the ticket. Agency should configure the E-Gov Travel Service (ETS) to reflect an airfare selection policy that is designed to achieve the lowest total trip cost. Agency must book all travel airfare through appropriate ETS booking channels regardless of fare type.

GSA Bulletin FTR 14-05 (Jan. 16, 2014): Agency should review agency policy to verify that it is abundantly clear that all rental cars must be authorized only when in the best interest of the government. Agency should ensure travelers book rental car reservations through ETS where available or arrange car rentals through the agency's Travel Management Center (TMC); no other methods may be used. Agency should ensure travelers are familiar with the Defense Travel Management Office (DTMO) U.S. Government Rental Car Agreement and encourage them to rent cars from participating vendors. Agency should educate travelers to decline additional insurance, such as collision damage waiver or theft insurance, for which travelers generally may not be reimbursed.

GSA Bulletin FTR 14-08 (May 13, 2014): Agencies are strongly encouraged to notify GSA's Office of Government-wide Policy of the name and contact information of the employee selected to be responsible on an agency-wide basis (i.e., the "Senior Travel Official") for ensuring efficient travel spending. Agencies should consider including certain major responsibilities as part of the Senior Travel Official position, such as researching best practices and recommending actions to improve the efficiency and effectiveness of travel programs, and directing and managing agency travel programs to obtain economy and efficiency.

In addition to the contact named above, Tara Carter (Assistant Director) and Joseph Santiago (analyst-in-charge) supervised the development of this report. Jehan Chase, Kelvin Dawson, Keith Logan, Michael O'Neill, Laurel Plume, Silvia Porres-Hernandez, Steven Putansu, Wesley Sholtes, and Stewart Small made key contributions to this report. | Federal agencies rely on travel to achieve a broad range of missions. GSA helps agencies develop travel policy by providing guidance to agencies, including issuing and revising the FTR. The administration and GSA have encouraged agencies to take steps to adopt cost-saving efforts and promote efficient travel spending. House Report 112-136 included a provision for GAO to report on whether FTR revisions resulted in measurable reductions in travel costs. This report: 1) describes selected agencies' actions taken to address FTR revisions; 2) determines the extent to which FTR revisions led to cost savings; and 3) determines any cost savings achieved during fiscal years 2012 to 2015. GAO reviewed information from six selected federal agencies with the largest amount of travel spending in fiscal year 2015. GAO also reviewed how these agencies responded to GSA's FTR amendments and travel bulletins to achieve cost savings. The Departments of Agriculture, Defense, Homeland Security, Justice, State, and Veterans Affairs, the six federal agencies with the largest travel spending in fiscal year 2015, pursued a variety of cost-saving efforts that generally aligned with regulations and guidance issued in either Federal Travel Regulation (FTR) amendments or General Services Administration (GSA) travel bulletins from fiscal year 2011 to fiscal year 2015. GSA administers and revises the FTR, which interprets statutory and other policy requirements to ensure that official travel is conducted responsibly while minimizing administrative costs. Although GSA does not have the authority to enforce the FTR, it issues FTR amendments and travel bulletins to help federal agencies manage their respective travel programs and achieve travel cost savings through the provisions contained in the amendments and travel bulletins. GSA FTR amendments and travel bulletins issued between fiscal years 2011 and 2015 contained a total of 27 cost-saving provisions. Agency officials at each of the six selected agencies stated that their respective agencies either had policies in place that already addressed the cost-saving provisions; developed new travel policies or issued guidance that reinforced the provisions or updated existing policies related to the provisions; or advised employees to follow the FTR without implementing an agency-specific policy. The six agencies reported that GSA's review of the FTR to revise obsolete and outdated policies influenced their actions and resulted in cost savings. However, most of these savings could not be quantified. Only four cost-saving efforts at two agencies—the Departments of Defense and Justice—could be quantified. These agencies reported that a wide range of factors influenced their cost-saving efforts.
In addition to FTR-compliance efforts, these agencies reported that administration actions on reducing travel costs, cutting waste, and promoting efficient spending influenced their approaches to managing travel costs. Agency officials also reported that broader efforts to improve operational efficiency, and efforts to responsibly use resources, also influenced their agency-specific policies and practices to promote efficient travel spending. According to GSA and officials from the six selected agencies, data limitations existed both within the selected agencies in terms of their ability to quantify travel-related cost savings, and government-wide in terms of comparing and aggregating travel data across agencies. Without standardized reporting practices, the federal government lacks common metrics for identifying, comparing and evaluating travel spending across federal agencies. The Senior Travel Official Council (STOC) was formed in 2015 to identify efficiencies and discuss best practices related to travel cost savings. According to its charter, the STOC allows agencies to work toward more consistent reporting of travel data and share information on cost-saving efforts. Although the STOC has taken some initial action to bring agencies together, additional efforts to facilitate agencies' information sharing and identification of promising practices could further enhance these efforts to encourage and achieve travel cost-saving across the federal government. GAO recommends that the Administrator of GSA should work with the STOC to: 1) develop a travel data management approach that would provide GSA with more consistent travel cost data; and 2) as chair of the STOC, identify and implement promising practices to help agencies leverage travel resources and achieve cost savings. |
According to DHS’s 2014 Quadrennial Homeland Security Review (QHSR), biological threats and hazards—ranging from bioterrorism to naturally occurring pandemics—are a top homeland security risk. The QHSR acknowledges that numerous departments and agencies at the federal, state, local, tribal, and territorial levels, as well as the private sector, contribute to the national effort to address biological threats and hazards. As such, according to the QHSR, DHS aims to focus on those activities and responsibilities assigned to it through statute or presidential directive. Among the identified activities and responsibilities is one that is specific to biosurveillance—biosurveillance integration and detection— and others that can help to support efficient and effective biosurveillance action, such as information sharing and analysis, threat and risk awareness, and technical forensic analysis to support attribution. The Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act) established the National Biosurveillance Integration Center (NBIC) within DHS. NBIC was specifically tasked with integrating and analyzing information from human health, animal, plant, food, and environmental monitoring systems across the federal government and supporting the interagency biosurveillance community. As defined in the July 2012 NBIC Strategic Plan, integration involves combining biosurveillance information from different sources and domains (e.g., human, animal, and plant health; food and environmental safety and security; and homeland security) to provide partners and stakeholders with a synthesized view of the information, and what it could mean. Primary goals of integration include creating a common picture or understanding of potential and ongoing biological events and providing insights that cannot be gleaned in isolation. The 9/11 Commission Act outlines certain requirements for NBIC. Drawing upon these requirements as well as the NBIC Strategic Plan, we identified three main roles that NBIC, as a federal-level biosurveillance integrator, must carry out to achieve the duties and outcomes described by NBIC’s authorizing legislation. Senior NBIC officials agreed that these three roles—analyzer, coordinator, and innovator—are consistent with the center’s responsibilities. These roles are not mutually exclusive and can reinforce one other. For example, NBIC’s efforts as an Innovator might result in the development of data that could enhance its role as an Analyzer by providing the center with another dataset to review. The biosurveillance integrators’ roles we identified: Analyzer: Use technological tools and subject matter expertise to develop shared situational awareness by creating meaningful new insights from disparate datasets and information that could not be gleaned in isolation. Coordinator: Bring together multi-disciplinary partners across interagency organizations to enhance understanding of new or potential biological events, such as through the collaborative development of products and services. Innovator: Facilitate the development of new tools, technology, and approaches to address gaps in biosurveillance integration. According to Homeland Security Presidential Directive 10 (HSPD-10): Biodefense for the 21st Century, a national bioawareness capability providing early warning, detection, or recognition of a biological weapon attack is an essential component of biodefense. 
To contribute to this national capability, in 2003, DHS created the BioWatch program to provide early warning, detection, or recognition of a biological attack. The BioWatch program uses routine laboratory testing designed to detect an aerosolized biological attack for five specific biological agents considered high risk for use as biological weapons. When DHS was established in 2002, a perceived urgency to deploy useful—even if immature—technologies in the face of potentially catastrophic consequences catalyzed the rapid deployment of many technologies. DHS completed the initial deployment of BioWatch quickly—within 80 days of the President’s announcement of the BioWatch program in his 2003 State of the Union Address. In 2005, DHS expanded BioWatch to an additional 10 jurisdictions, for a total of more than 30. The expanded deployment—referred to as Generation 2 (Gen- 2)—also included the addition of indoor monitoring capabilities in three high-threat jurisdictions and provided additional capacity for events of national significance, such as major sporting events and political conventions. In 2015, we reported that the BioWatch program collaborates with more than 30 BioWatch jurisdictions throughout the nation to operate approximately 600 Gen-2 aerosol collectors. These units rely on a vacuum-based collection system that draws air through a filter. These filters are manually collected and transported to state and local public health laboratories for analysis. Using this manual process, a result can be generated from 12 to 36 hours after an agent is initially captured by the aerosol collection unit. To reduce detection time, DHS began to develop an autonomous detection capability in 2003 for the BioWatch program—known as Generation 3 (Gen-3). Envisioned as a laboratory-in-a-box, the autonomous detection system would automatically collect air samples, conduct analysis to detect the presence of biothreat agents every 4 to 6 hours, and communicate the results to public health officials via an electronic network without manual intervention. By automating the analysis, DHS anticipated that detection time could be reduced to 6 hours or less, making the technology more appropriate for monitoring indoor high-occupancy facilities such as transportation nodes and enabling a more rapid response to an attack. DHS also anticipated a reduction in operational costs by eliminating the program’s daily manual sample retrieval and laboratory analysis. However, as we reported in 2015, the Gen-3 acquisition was canceled in April 2014, after testing difficulties and after an analysis of alternatives was interpreted by DHS as showing that any advantages of an autonomous system over the current manual system were insufficient to justify the cost of a full technology switch. In December 2009, we reported that NBIC was not fully equipped to carry out its mission because it lacked key resources—data and personnel— from its partner agencies, which may have been at least partially the result of collaboration challenges it faced. For example, some partners reported that they did not trust NBIC to use their information and resources appropriately, while others were not convinced of the value that working with NBIC provided because NBIC’s mission was not clearly articulated. 
To help NBIC enhance and sustain collaboration, including the provision of data, personnel, and other resources, we recommended in 2009 that NBIC develop a strategy for addressing barriers to collaboration and develop accountability mechanisms to monitor these efforts. In August 2012, NBIC issued the NBIC Strategic Plan, which is intended to provide NBIC's strategic vision, clarify the center's mission and purpose, articulate the value that NBIC seeks to provide to its partners, and lay the groundwork for setting interagency roles, responsibilities, and procedures. Further, in November 2014, NBIC completed its first biannual NBIC Federal Stakeholder Survey, which NBIC uses to assess the usefulness of its products and activities and to determine what improvements should be made on the basis of those results. We believe DHS's actions addressed the recommendations in our December 2009 report. In September 2015, we reported that NBIC had actions and activities underway to fulfill all three of the roles we identified as essential to its ability to carry out its mission—analyzer, coordinator, and innovator. For example, to fulfill its analyzer role, NBIC compiled information to create and circulate a variety of products to support disease outbreak monitoring on a daily, weekly, or periodic basis. Similarly, in its coordinator role, NBIC had put in place a variety of procedures and protocols to convene partners on a routine basis or in response to specific emerging events. Finally, in its innovator role, NBIC conducted gap analyses, funded pilot projects aimed at developing new biosurveillance tools and technology (such as examining the use of social media data to identify health trends), sought new sources of data and information, and made efforts to enhance its internal IT system. Although NBIC had made efforts to collaborate with interagency partners to create and issue a strategic plan that would clarify its mission, and had various efforts underway to fulfill its three roles, we reported a variety of challenges that remained when we surveyed NBIC's interagency partners for our 2015 report. Notably, many of these partners continued to express uncertainty about the value NBIC provided. Specifically, 10 of 19 partners stated that NBIC's products and activities enhance their agencies' ability to carry out their biosurveillance roles and responsibilities to little or no extent, 4 responded to a moderate extent, and 5 responded that they did not have a basis to judge. Generally, partners that responded to little or no extent noted that NBIC products and activities do not, for example, identify trends and patterns or describe potential impacts of a biological event. For instance, one official stated that NBIC's products and activities do not "connect the dots" between dissimilar information, provide novel synthesis of information, or recommend possible courses of action. Moreover, most of the federal partners with key roles in biosurveillance (8 of 11) stated that NBIC's products help their agencies identify biological events to little or no extent, generally because they already obtain such information directly from other federal partners more quickly. We also found in 2015, as in 2009, that a variety of challenges limited the extent to which federal agencies shared data and personnel with NBIC, as envisioned by the 9/11 Commission Act.
First, data that NBIC could use to identify and characterize a biological event of national concern using statistical and analytical tools, as called for in the 9/11 Commission Act, are limited. Also, apart from searches of global news reports and other publicly available reports generated by National Biosurveillance Integration System (NBIS) partners, NBIC has been unable to secure streams of raw data from multiple domains across the biosurveillance enterprise that would lend themselves to near-real-time quantitative analysis that could reveal unusual patterns and trends. Moreover, we found that few federal partners (5 of 19) reported that they share the data they do have with NBIC; those that did not cited legal and regulatory restrictions, among other reasons. Some agencies are reluctant to share their data with NBIC because they are unsure how the information will be used. For example, one official explained that the agency does not share some data with NBIC because sharing such information too broadly might have substantial implications for agricultural trade or public perception of safety. Officials from another agency noted that there is sometimes reticence to share information and data with components of DHS because, given the department's roles in law enforcement and national security, the information might be shared outside of the health security community in a way that lacks appropriate context and perspective. Finally, other agencies stated that they are unable to share data for regulatory or legal reasons, or because appropriately protecting the data would take too long. Similarly, although NBIC would like to obtain liaisons from each of its federal partners, only 3 of 19 partners provided NBIC with dedicated liaisons. Officials from one agency with key biosurveillance responsibilities stated that it is difficult to provide personnel to NBIC on a full- or part-time basis because of resource constraints. Further, officials from another agency noted that the lack of clarity about NBIC's value to its partners is a barrier to providing the center with detailees. We also reported in September 2015 that NBIC faces challenges prioritizing developmental efforts to identify and address needs for new biosurveillance tools. For example, partners noted limitations in NBIC's ability to address gaps, such as limited resources and the difficulty of prioritizing the center's innovation efforts because its partners have diverse needs. NBIC officials stated that the center is working to improve its products and its ability to contextualize the information it collects from open sources, and has sought partner input to do so. For example, beginning in late June 2015, partly on the basis of feedback the center received from its November 2014 Federal Stakeholder Survey, NBIC modified its daily Monitoring List to include an up-front summary that identifies the status of ongoing biological events as worsening, improving, unchanged, or undetermined. Further, NBIC officials noted that the center is also working to better integrate forecasts and projections into its products and activities by collaborating with others and developing a common interagency vision for specific federal capabilities and practical next steps leading to the application of reliable infectious disease forecasting models in decision-making processes. Nevertheless, a persistent challenge NBIC faces is skepticism on the part of some of the NBIS partners regarding the value of the federal biosurveillance mission as well as NBIC's role in that mission.
In our 2009 report, most of the NBIS partners we interviewed at that time expressed uncertainty about the value of participating in the NBIS or confusion about the purpose of NBIC’s mission. In September 2015, the NBIS partners and other major stakeholders in the biosurveillance community acknowledged—and we agreed—that no single problem limits NBIC’s mission to integrate biosurveillance data. Rather, over the years, several long-standing problems have combined to inhibit the achievement of this mission as envisioned in the 9/11 Commission Act. We identified options in our 2015 report for policy or structural changes that could help better fulfill the biosurveillance integration mission, which are summarized below. We identified these options and their benefits and limitations, on the basis of the roles of a federal-level biosurveillance integrator we identified in the 9/11 Commission Act, NBIC’s strategic plan, and the perspectives of the NBIS partners obtained during our structured interviews. These options are not exhaustive, and some options could be implemented together or in part. Since 2003, DHS has focused on acquiring an autonomous detection system to replace the current BioWatch Gen-2, but has faced challenges in clearly justifying the BioWatch program’s need and ability to reliably address that need. In September 2012, we found that DHS approved the Gen-3 acquisition in October 2009 without fully developing critical knowledge that would help ensure sound investment decision making, pursuit of optimal solutions, and reliable performance, cost, and schedule information. Specifically, we found that DHS did not engage the early phases of its Acquisition Life-cycle Framework, which is designed to help ensure that the mission need driving the acquisition warrants investment of limited resources and that an analysis of alternatives (AoA) systematically identifies possible alternative solutions that could satisfy the identified need. BioWatch officials stated that they were aware that the Mission Needs Statement prepared in October 2009 did not reflect a systematic effort to justify a capability need, but stated that the department directed them to proceed because there was already departmental consensus around the solution. However, we found that the AoA prepared for the Gen-3 acquisition did not reflect a systematic decision-making process. As with the Mission Needs Statement, program officials told us that they were advised that a comprehensive AoA would not be necessary because there was already departmental consensus that autonomous detection was the optimal solution. Because the Gen-3 AoA did not evaluate a complete solution set, consider complete information on cost and benefits, and include a cost-benefit analysis, we concluded that it did not provide information on which to base trade-off decisions. To help ensure DHS based its acquisition decisions on reliable performance, cost, and schedule information developed in accordance with guidance and good practices, in our September 2012 report, we recommended that before continuing the Gen-3 acquisition, DHS reevaluate the mission need and possible alternatives based on cost- benefit and risk information. DHS concurred with the recommendation and in 2012, DHS directed the BioWatch program to complete an updated AoA. In April 2014, DHS canceled the acquisition of Gen-3 because the AoA did not confirm an overwhelming benefit to justify the cost of a full technology switch to Gen-3. 
Having canceled the Gen-3 acquisition, DHS continues to rely on the Gen-2 system for early detection of an aerosolized biological attack. However, we found DHS lacks reliable information about BioWatch Gen- 2’s technical capabilities to detect a biological attack, in part, because in the 12 years since BioWatch’s initial deployment, DHS has not developed technical performance requirements for Gen-2. We reported in 2015 that BioWatch has been criticized because it was deployed quickly in 2003 to address a perceived urgent need, but without sufficient testing, validation, and evaluation of its technical capabilities. In 2015, we reported that DHS officials said that the system can detect catastrophic attacks, which they define as attacks large enough to cause 10,000 casualties. DHS has commissioned tests of Gen-2’s technical performance characteristics, but DHS has not developed performance requirements that would enable it to interpret the test results and draw conclusions about the system’s ability to detect attacks. According to DHS guidance and standard practice in testing and evaluation of defense systems, in order to assess Gen-2’s capability to detect a biological attack, DHS would have to link test results to its conclusions about the deployed detectors’ ability to detect attacks in BioWatch operational environments. This would ordinarily be done by developing and validating technical performance requirements based on operational objectives, but DHS has not developed such requirements for Gen-2. In the absence of technical performance requirements, DHS officials said their assertion that the system can detect catastrophic attacks is supported by modeling and simulation studies. However, we found none of these studies were designed to incorporate test results from the Gen-2 system and comprehensively assess the system against the stated operational objective. The modeling and simulation studies were designed for purposes other than to directly and comprehensively assess Gen-2’s operational capabilities. For example, one set of modeling and simulation studies, conducted by Sandia National Laboratories (Sandia) in collaboration with other national laboratories, did not incorporate information about the actual locations of Gen-2 collector units, because they were designed to model hypothetical BioWatch deployments in which collectors were placed in optimal locations. Sandia also analyzed ranges of hypothetical system sensitivities rather than incorporating the test results on the performance characteristics of Gen-2. Therefore, these studies drew no conclusions about the actual capabilities of the deployed Gen-2 system. DHS officials also described modeling and simulation work that used a measure of operational capability that does not directly support conclusions about the BioWatch objective of detecting attacks large enough to cause 10,000 casualties. Additionally, we found that because none of the modeling and simulation work was designed to interpret Gen-2 test results and comprehensively assess the capabilities of the Gen-2 system, none of these studies has provided a full accounting of statistical and other uncertainties—meaning decision makers have no means of understanding the precision or confidence in what is known about system capabilities. 
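One simple way to see why a full accounting of statistical uncertainty matters is to put a confidence interval around a detection probability estimated from a limited number of trials. The sketch below is an illustration only, not DHS's or the national laboratories' methodology; the trial counts are hypothetical, and a Wilson score interval is just one of several reasonable interval choices.

```python
# Illustration only, not DHS's methodology: a Wilson score interval for a detection
# probability estimated from a limited number of trials. Trial counts are hypothetical.
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    """Approximate 95 percent Wilson score interval for a binomial proportion."""
    if trials <= 0:
        raise ValueError("trials must be positive")
    p_hat = successes / trials
    denom = 1 + z ** 2 / trials
    center = (p_hat + z ** 2 / (2 * trials)) / denom
    half_width = (z / denom) * sqrt(p_hat * (1 - p_hat) / trials + z ** 2 / (4 * trials ** 2))
    return center - half_width, center + half_width

# Hypothetical: 27 detections in 30 trials at one agent concentration gives a point
# estimate of 0.90 but an interval of roughly 0.74 to 0.97, a wide range for a decision maker.
low, high = wilson_interval(27, 30)
print(f"{low:.2f} to {high:.2f}")
```

Tying intervals like this to explicitly stated performance requirements is what would let decision makers judge whether observed performance is adequate for a defined operational objective.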
Because it is not possible to test the BioWatch system directly by releasing live biothreat agents into the air in operational environments, limitations of the tests described earlier limit the applicability of the results and underscore the need for a full accounting of statistical and other uncertainties, without which decision makers lack a full understanding of the Gen-2 system’s capability to detect attacks of defined types and sizes. At the time DHS canceled the Gen-3 acquisition, it also announced that S&T will explore development and maturation of an effective and affordable automated aerosol biodetection capability, or other operational enhancements, that meet the operational requirements of the BioWatch system. As such, DHS officials told us they are considering potential improvements or upgrades to the Gen-2 system. However, because DHS lacks reliable information about Gen-2’s technical capabilities, decision makers are not assured of having sufficient information to ensure future investments are actually addressing a capability gap not met by the current system. Also, because DHS lacks targets for the current system’s performance characteristics, including limits of detection, that would enable conclusions about the system’s ability to detect attacks of defined types and sizes with specified probabilities, it cannot ensure it has complete information to make decisions about upgrades or enhancements. In our September 2015 report, to help ensure that biosurveillance-related funding is directed to programs that can demonstrate their intended capabilities, and to help ensure sufficient information is known about the current Gen-2 system to make informed cost-benefit decisions about possible upgrades and enhancements to the system, we recommended that DHS not pursue upgrades or enhancements to the current BioWatch system until it establishes technical performance requirements necessary for a biodetection system to meet a clearly defined operational objective for the BioWatch program; assesses the Gen-2 system against these performance requirements; and produces a full accounting of statistical and other uncertainties and limitations in what is known about the system’s capability to meet its operational objectives. DHS concurred and is taking steps to address the recommendation. As DHS faces decisions about investing in the future of the BioWatch program, there are lessons to be learned from the program’s recent attempt to acquire an autonomous detection system, Gen-3. Our recent work on BioWatch also evaluated DHS’s efforts to test the Gen-3 technology from 2010 through 2011 against best practices for developmental testing. In our 2015 report, we recommended that DHS incorporate the best practices we identified to help enable DHS to mitigate risk in future acquisitions, such as upgrades or enhancements to Gen-2. DHS concurred and stated its updated acquisition guidance largely addresses these best practices. Chairman McSally, Ranking Member Payne, and Members of the subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have. For questions about this statement, please contact Chris Currie at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Individuals making key contributions to this statement include Kathryn Godfrey (Assistant Director), Russ Burnett, Tracey King, Susanna Kuebler, Jan Montgomery, Tim Persons, and Sushil Sharma. Key contributors for the previous work that this testimony is based on are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The potential threat of a naturally occurring pandemic or a terrorist attack with a biological weapon of mass destruction underscores the importance of a national biosurveillance capability—that is, the ability to detect biological events of national significance to provide early warning and information to guide public health and emergency response. The Implementing Recommendations of the 9/11 Commission Act of 2007 addresses this capability, in part by creating NBIC. The center was tasked with integrating information from human health, animal, plant, food, and environmental monitoring systems across the federal government, to improve the likelihood of identifying a biological event at an earlier stage. Similarly, DHS's BioWatch program aims to provide early indication of an aerosolized biological weapon attack. GAO has published a series of reports on biosurveillance efforts spanning more than a decade. This statement describes progress and challenges GAO has reported in DHS's implementation of NBIC and BioWatch and considerations for the future of biosurveillance efforts at DHS. This testimony is based on previous GAO reports issued from December 2009 through September 2015 related to biosurveillance. To conduct our prior work, we reviewed relevant presidential directives, laws, policies, and strategic plans; and interviewed federal, state, and industry officials, among others. We also analyzed key program documents, including test plans, test results, and modeling studies. Since 2009, GAO has reported on progress and challenges with two of the Department of Homeland Security's (DHS) biosurveillance efforts—the National Biosurveillance Integration Center (NBIC) and the BioWatch program (designed to provide early detection of an aerosolized biological attack). In December 2009, GAO reported that NBIC was not fully equipped to carry out its mission because it lacked key resources—data and personnel—from its partner agencies, which may have been at least partially the result of collaboration challenges it faced. For example, some partners reported that they did not trust NBIC to use their information and resources appropriately, while others were not convinced of the value that working with NBIC provided because NBIC's mission was not clearly articulated. GAO recommended that NBIC develop a strategy for addressing barriers to collaboration and develop accountability mechanisms to monitor these efforts. DHS agreed, and in August 2012, NBIC issued the NBIC Strategic Plan, which is intended to provide NBIC's strategic vision, clarify the center's mission and purpose, and articulate the value that NBIC seeks to provide to its partners, among other things. 
In September 2015, GAO reported that despite NBIC's efforts to collaborate with interagency partners to create and issue a strategic plan that would clarify its mission and the various efforts to fulfill its three roles—analyzer, coordinator, and innovator—a variety of challenges remained when GAO surveyed NBIC's interagency partners in 2015. Notably, many of these partners continued to express uncertainty about the value NBIC provided. GAO identified options for policy or structural changes that could help NBIC better fulfill its biosurveillance integration mission, such as changes to NBIC's roles. Since 2012, GAO has reported that DHS has faced challenges in clearly justifying the need for the BioWatch program and its ability to reliably address that need (to detect attacks). In September 2012, GAO found that DHS approved a next-generation BioWatch acquisition in October 2009 without fully developing knowledge that would help ensure sound investment decision making and pursuit of optimal solutions. GAO recommended that before continuing the acquisition, DHS reevaluate the mission need and possible alternatives based on cost-benefit and risk information. DHS concurred and in April 2014, canceled the acquisition because an alternatives analysis did not confirm an overwhelming benefit to justify the cost. Having canceled the next generation acquisition, DHS continues to rely on the currently deployed BioWatch system for early detection of an aerosolized biological attack. However, in 2015, GAO found that DHS lacks reliable information about the current system's technical capabilities to detect a biological attack, in part because in the 12 years since BioWatch's initial deployment, DHS has not developed technical performance requirements for the system. GAO reported in September 2015 that DHS commissioned tests of the current system's technical performance characteristics, but without performance requirements, DHS cannot interpret the test results and draw conclusions about the system's ability to detect attacks. DHS is considering upgrades to the current system, but GAO recommended that DHS not pursue upgrades until it establishes technical performance requirements to meet a clearly defined operational objective and assesses the system against these performance requirements. DHS concurred and is working to address the recommendation. |
To respond to the Gulf Coast devastation, the federal government has committed a historically high level of resources—over $110 billion—through an array of grants, loan subsidies, and tax relief and incentives. The bulk of this assistance was provided between September 2005 and June 2006 through four emergency supplemental appropriations. A substantial portion of this assistance was directed to emergency assistance and meeting short-term needs arising from these hurricanes, such as relocation assistance, emergency housing, immediate levee repair, and debris removal efforts. Consequently, a relatively small portion of federal assistance is available for longer-term rebuilding activities such as the restoration of the region's housing and infrastructure. Later in this statement, I will discuss in greater detail the two programs that the federal government has used so far to provide assistance to the Gulf Coast for longer-term rebuilding. It is useful to view the federal assistance provided to the Gulf Coast within the context of the overall costs of the damages incurred by the region and the resources necessary to rebuild. Although there are no definitive or authoritative estimates of these costs, the various estimates of aspects of these costs offer a sense of their magnitude. For example, early damage estimates from the Congressional Budget Office (CBO) put capital losses from Hurricanes Katrina and Rita at a range of $70 billion to $130 billion, while another estimate put losses solely from Hurricane Katrina—including capital losses—at over $150 billion. Further, the state of Louisiana has estimated that the economic impact on the state alone could reach $200 billion. While the exact costs of damages and rebuilding the Gulf Coast may never be known, they will likely surpass those from the three other costliest disasters in recent history—Hurricane Andrew, the September 2001 terrorist attacks, and the 1994 Northridge earthquake. These estimates raise important questions regarding additional assistance that will be needed to help the Gulf Coast rebuild in the future—including how the assistance will be provided and by whom. The federal government has so far used two key programs—FEMA's Public Assistance and the Department of Housing and Urban Development's (HUD) CDBG programs—to provide long-term rebuilding assistance to the Gulf Coast states. These two programs follow different funding models. Public Assistance provides funding on a project-by-project basis, involving an assessment of specific proposals to determine eligibility, while CDBG—a block grant—affords broad discretion and flexibility to states and localities. FEMA's Disaster Relief Fund (DRF) supports a range of grant programs in providing federal assistance to state and local governments, nongovernment organizations, and individuals when a disaster occurs. One of its largest programs—Public Assistance—provides assistance primarily to state and local governments to repair and rebuild damaged public infrastructure and includes activities such as removing debris, repairing roads, and reconstructing government buildings and utilities. Pursuant to the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), this assistance is limited to either a fixed-dollar amount or a percentage of costs for restoring damaged facilities. Specifically, applicants submit requests for work, which are considered for eligibility and subsequent funding.
FEMA obligates funds for approved projects, providing specific amounts to complete discrete work segments on projects, while state and local governments pay the remainder based on the state's cost-share agreement with FEMA. As of March 16, 2007, FEMA had obligated about $4.6 billion to Louisiana and about $2 billion to Mississippi through its Public Assistance program. HUD's Community Development Block Grant program—so far, the largest federal provider of long-term rebuilding assistance—received $16.7 billion in supplemental appropriations to help the Gulf Coast states rebuild damaged housing and other infrastructure. As shown in figure 1, Louisiana and Mississippi were allocated the largest shares of the CDBG appropriations, with $10.4 billion allocated to Louisiana and another $5.5 billion allocated to Mississippi. Florida, Alabama, and Texas received the remaining share of CDBG funds. These formula-based grants afford states and local governments a great deal of discretion in designing neighborhood revitalization, housing rehabilitation, and economic development activities. In some instances, Congress has provided even greater flexibility when allocating additional CDBG funds to affected communities and states to help them recover from presidentially declared disasters, such as the Gulf Coast hurricanes. The Federal Coordinator for Gulf Coast Rebuilding has said that the CDBG program allows state leaders "who are closest to the issues" to make decisions regarding how the money should be spent. To receive CDBG funds, each state was required by HUD to submit an action plan describing how the funds would be used, including how the funds would address long-term "recovery and restoration of infrastructure." This process afforded the states broad discretion in deciding how to allocate their funding and for what purposes. To coordinate and oversee the state's rebuilding efforts, Louisiana created the Louisiana Recovery Authority (LRA) within the state's executive branch. As part of its responsibility, the LRA was also charged with establishing spending priorities and plans for the state's share of CDBG funds, subject to the approval of Louisiana's state legislature. Mississippi developed its spending plans through the Mississippi Development Authority (MDA)—the state's lead economic and community development agency within its executive branch—and the Governor's Office of Recovery and Renewal. In contrast to Louisiana, Mississippi's state legislature was not involved in the approval process for these state funding decisions. Consistent with HUD requirements, both Louisiana and Mississippi published their action plans to solicit public input within their state regarding the planned use of CDBG funds. As shown in figure 2, each state allocated the majority of its share of CDBG funding to housing priorities. The remaining funds were allocated primarily to economic development and infrastructure priorities. With the vast number of homes that sustained damage in Louisiana and Mississippi, each state opted to direct the vast majority of its housing allocation to homeowners, although each state tailored its program to address the particular conditions in its state. A portion of these allocations also was directed to other housing programs such as rental housing and public housing, as well as to projects that will alleviate costs associated with housing, such as utility and insurance costs.
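The difference between the two funding models described above is largely arithmetic: Public Assistance splits each approved project's cost between FEMA and the state or locality according to a cost-share agreement, while CDBG distributes lump-sum allocations that states then program themselves. The Python sketch below illustrates both calculations. The 90 percent federal share and the $12 million project cost are hypothetical placeholders (actual shares are set by each state's agreement with FEMA under the Stafford Act), while the CDBG figures are the allocations cited above.

```python
# Hypothetical Public Assistance project: federal share vs. state/local remainder.
# The 90% federal share is illustrative only; actual shares are set in each
# state's cost-share agreement with FEMA.
project_cost = 12_000_000          # hypothetical eligible project cost ($)
federal_share_rate = 0.90          # illustrative cost-share rate
federal_obligation = project_cost * federal_share_rate
state_local_share = project_cost - federal_obligation
print(f"Federal: ${federal_obligation:,.0f}  State/local: ${state_local_share:,.0f}")

# CDBG supplemental allocations cited in the statement (billions of dollars).
cdbg_total = 16.7
allocations = {"Louisiana": 10.4, "Mississippi": 5.5}
allocations["Other states (FL, AL, TX)"] = round(cdbg_total - sum(allocations.values()), 1)
for state, amount in allocations.items():
    print(f"{state}: ${amount}B ({amount / cdbg_total:.0%} of the supplemental CDBG funds)")
```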
Louisiana and Mississippi homeowner assistance programs are similar in that each is designed to compensate homeowners whose homes were damaged or destroyed by the storms. In each program, the amount of compensation that homeowners receive depends on the value of their homes before the storms and the amount of damage that was not covered by insurance or other forms of assistance. However, these programs differ in their premise and eligibility requirements. Louisiana witnessed a significant population loss in the wake of the Gulf Coast hurricanes, with many residents living in other states and debating whether to return to Louisiana. The LRA, in consultation with state and federal agencies, developed a program to restore the housing infrastructure in Louisiana, using CDBG funds from supplemental appropriations, as described earlier. Referred to as the Road Home, this program is designed to encourage homeowners to return to Louisiana and begin rebuilding. Under the program, homeowners who decide to stay in the state and rebuild in Louisiana are eligible for the full amount of grant assistance—up to $150,000—while those leaving the state will receive a lesser share. Accordingly, aside from the elderly, residents who choose to sell their homes and leave the state will have their grant awards reduced by 40 percent. Residents who do not have insurance will have their grant awards reduced by 30 percent. Further, to receive compensation, homeowners must comply with applicable code and zoning requirements and FEMA advisory base flood elevations when rebuilding and agree to use their home as a primary residence at some point during a 3-year period after closing. As of March 28, 2007, the Road Home program had received 119,945 applications, of which 60,675 had been verified and an award amount had been calculated. Applicants were then asked to decide how they wanted to proceed (for example, whether to rebuild or sell). As of that date, 25,597 applicants notified the program of their decision. Of those, the program awarded payments to 4,808 homeowners with an average award amount of $74,250. In Mississippi, Katrina’s storm surge destroyed tens of thousands of homes, many of which were located outside FEMA’s designated flood plain and not covered by flood insurance. Mississippi developed a two- phase program to target homeowners who suffered losses due to the storm surge. Accordingly, Phase I of the program is designed to compensate homeowners whose properties were located outside the floodplain and were otherwise fully insured. Eligible for up to $150,000 in compensation, these homeowners are not subject to a requirement to rebuild. Phase II of the program, on the other hand, is designed to award grants to uninsured and underinsured homeowners with incomes at or below 120 percent of the Area Median Income (AMI). Eligible for up to $100,000 in grant awards, these homeowners must demonstrate that they meet current building codes and standards as a condition to receiving their grants. While they are required to rebuild in south Mississippi, they are not required to stay in their homes once they have been rebuilt. In addition, homeowners who do not have insurance will have their grant reduced by 30 percent, although this penalty does not apply to the “special needs” populations as defined by the state (i.e., elderly, disabled, and low income). As of March 28, 2007, Mississippi had received 18,465 applications for Phase I of its program, of which 14,974 were determined eligible for consideration. 
Of those, Mississippi awarded payments to 11,894 homeowners with an average award amount of $69,669. Mississippi has yet to complete processing applications for any of the more than 10,000 uninsured and underinsured homeowners in Phase II of the program. It is clear that Louisiana’s and Mississippi’s homeowner assistance programs are proceeding at different paces. While we did not assess the causes for these differences, we have begun work as requested by the Senate Homeland Security and Governmental Affairs Committee to examine particular aspects of the CDBG program that may provide important insights into these issues. Restoring the region’s housing and infrastructure is taking place in the context of broader planning and coordination activities; in Louisiana and Mississippi, state and local governments are engaged in both short- and long-term planning efforts. The federal government—specifically, the Coordinator of Federal Support for the Recovery and Rebuilding of the Gulf Coast Region—is responsible for coordinating the activities of the numerous federal departments and agencies involved in rebuilding as well as supporting rebuilding efforts at the state and local level. Based on our preliminary work, I would like to describe some of these activities being undertaken in Louisiana and Mississippi as well as the activities of the federal government. What will be rebuilt in many areas of Louisiana remains uncertain, as a number of planning efforts at the state and local levels are still evolving. At the state level, the LRA has coordinated a statewide rebuilding planning effort that included retaining professional planners and moving towards a comprehensive rebuilding plan. To facilitate this effort, the LRA endorsed Louisiana Speaks—a multifaceted process for helping the LRA develop a comprehensive rebuilding plan for Southern Louisiana and for providing rebuilding planning resources to homeowners, businesses, communities, and parishes. For example, Louisiana Speaks developed and distributed a pattern book for homeowners, architects, and permitting officials about how to redesign and rebuild commercial and residential buildings. Through this process, local design workshops—called charrettes—have been developed to guide neighborhood planning efforts in the impacted areas, while teams of professional planners, FEMA officials, and LRA officials and representatives work with affected local parishes to develop long-term parish recovery plans. Through extensive public input, Louisiana Speaks also seeks to develop a regional plan for Southern Louisiana, focusing on a number of critical challenges for the state’s redevelopment. The regional plan will evaluate economic, environmental, and social issues that affect Southern Louisiana and explore alternative ways that growth and development can be accommodated in the context of varying environmental, economic, and cultural changes. The state of Louisiana will then use the regional plan to help direct rebuilding policy and Louisiana’s long-term spending over the next 30 years. Given the central importance of the city to Louisiana’s overall economy, I would like to highlight planning efforts in New Orleans. After several attempts to develop a rebuilding plan for New Orleans—including the Bring New Orleans Back Commission, efforts initiated by the city council, Urban Land Institute, and others—in August 2006, New Orleans embarked on a comprehensive rebuilding planning process, which continues to date. 
Referred to as the Unified New Orleans Plan (UNOP), this effort was designed as a grassroots approach to planning to incorporate the vision of neighborhoods and districts into multiple district-level plans and one citywide plan that establishes goals and priorities for rebuilding the city. In particular, the citywide plan will include priority programs and projects for repairing and rebuilding the city over a 5- to 10-year period and will help to inform critical funding and resource allocation decisions by state and federal agencies. The citywide plan is currently under review by the New Orleans Planning Commission. Mississippi created an overall plan to serve as a framework for subsequent planning efforts in affected areas of the state. More specifically, in September 2005—within days of the hurricanes’ landfall—Governor Barbour created the Governor’s Commission on Recovery, Rebuilding and Renewal to identify rebuilding and redevelopment options for the state. Comprised of over 20 committees, the Commission held numerous public forums across multiple counties in an effort to solicit input and public participation from residents throughout the state. In December 2005, the commission’s work culminated in a final report containing 238 policy recommendations aimed at addressing a range of rebuilding issues and concerns across the state, from infrastructure and economic development to human services and finance. The report also addressed potential financing mechanisms identifying state, local, private, and federal sources. Further, the recommendations identified parties responsible for implementing the recommendations, including the creation of new state and regional entities to oversee selected recommendations. In addition, Governor Barbour created the Office of Recovery and Renewal to oversee and coordinate implementation of these recommendations. Also charged with identifying funding for rebuilding projects, the office continues to work with public and private entities as well as state and local governments. Local governments in south Mississippi are also engaged in rebuilding planning activities. For example, modeled after the Governor’s Commission on Renewal and Recovery, the city of Biloxi established a volunteer steering committee to develop a rebuilding plan for the city. Biloxi’s final rebuilding plan resulted in 162 recommendations to address core issues affecting the city, such as infrastructure, economic development, human services, and finance. In addition, the steering committee commissioned a separate rebuilding plan for East Biloxi—a low-lying area that had been heavily damaged by Hurricane Katrina—that included 27 recommendations for addressing this area of the city. A number of other impacted communities in south Mississippi have undertaken planning initiatives as well. In light of the magnitude of the Gulf Coast hurricanes, the administration recognized the need to provide a mechanism to coordinate with—and support rebuilding activities at—the federal, state, and local levels. More specifically, in November 2005, the President issued executive orders establishing two new entities to help provide a governmentwide response to federal rebuilding efforts. The first of these orders created the position of Coordinator of Federal Support for the Recovery and Rebuilding of the Gulf Coast Region within the Department of Homeland Security. 
Accordingly, the Federal Coordinator is responsible for developing principles and goals, leading the development of federal recovery activities, and monitoring the implementation of designated federal support. The Coordinator also serves as the administration's focal point for managing information flow, requests for actions, and discussions with Congress, state, and local governments, the private sector, and community leaders. Our discussions with state and local officials in Louisiana revealed a largely positive disposition towards the Federal Coordinator and his role in support of the Gulf Coast. During our field work, for example, Louisiana state and local officials said the Coordinator had played an integral role in helping to identify and negotiate an appropriate level of CDBG funding for the state. The second executive order established a Gulf Coast Recovery and Rebuilding Council within the Executive Office of the President for a period of 3 years. Chaired by the Assistant to the President for Economic Policy, the council includes most members of the Cabinet and is charged with examining issues related to the furtherance of the President's policy on recovery and rebuilding of the Gulf Coast. Rebuilding efforts in the Gulf Coast are at a critical turning point—a time when decisions now being made in community rooms, city halls, and state houses will have a significant impact on the complexion and future of the Gulf Coast. As states and localities begin to assume responsibility for developing plans for rebuilding, there are difficult policy decisions Congress will need to make about the federal government's contribution to the rebuilding effort and the role it might play over the long term in an era of competing priorities. Based on the preliminary work I have discussed today, the Subcommittee may wish to consider the following questions as it continues to carry out its critical oversight function in reviewing Gulf Coast rebuilding efforts: How much will it cost to rebuild the Gulf Coast and how much of this cost should the federal government bear? How effective are current funding delivery mechanisms—such as Public Assistance and CDBG—and should they be modified or supplemented by other mechanisms? How can the federal government further partner with state and local governments and the nonprofit and private sectors to leverage the public investment in rebuilding? Madam Chair and Members of the Subcommittee, this concludes my statement. I would be happy to respond to any questions you or other members of the Subcommittee may have at this time. For information about this testimony, please contact Stanley J. Czerwinski, Director, Strategic Issues, at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Charlesetta Bailey, Dean Campbell, Roshni Davé, Peter Del Toro, Laura Kunz, Brenda Rabinowitz, Michael Springer, and Diana Zinkl. Hurricane Katrina: Allocation and Use of $2 Billion for Medicaid and Other Health Care Needs. GAO-07-67. February 28, 2007. Disaster Assistance: Better Planning Needed for Housing Victims of Catastrophic Disasters. GAO-07-88. February 28, 2007. Small Business Administration: Additional Steps Needed to Enhance Agency Preparedness for Future Disasters. GAO-07-114. February 14, 2007.
Small Business Administration: Response to the Gulf Coast Hurricanes Highlights Need for Enhanced Disaster Preparedness. GAO-07-484T. February 14, 2007. Hurricanes Katrina and Rita: Federal Actions Could Enhance Preparedness of Certain State-Administered Federal Support Programs. GAO-07-219. February 7, 2007. Hurricanes Katrina and Rita Disaster Relief: Prevention Is the Key to Minimizing Fraud, Waste, and Abuse in Recovery Effort. GAO-07-418T. January 29, 2007. Hurricane Katrina: Status of Hospital Inpatient and Emergency Departments in the Greater New Orleans Area. GAO-06-1003. September 29, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation's Preparedness, Response, and Recovery System. GAO-06-618. September 6, 2006. Disaster Relief: Governmentwide Framework Needed to Collect and Consolidate Information to Report on Billions in Federal Funding for the 2005 Gulf Coast Hurricanes. GAO-06-834. September 6, 2006. Coast Guard: Observations on the Preparation, Response, and Recovery Missions Related to Hurricane Katrina. GAO-06-903. July 31, 2006. Hurricane Katrina: Improving Federal Contracting Practices in Disaster Recovery Operations. GAO-06-714T. May 4, 2006. Hurricane Katrina: Planning for and Management of Federal Disaster Recovery Contracts. GAO-06-622T. April 10, 2006. Hurricane Katrina: Status of the Health Care System in New Orleans and Difficult Decisions Related to Efforts to Rebuild It Approximately 6 Months after Hurricane Katrina. GAO-06-576R. March 28, 2006. Hurricane Katrina: GAO's Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. March 8, 2006. Hurricanes Katrina and Rita: Preliminary Observations on Contracting for Response and Recovery Efforts. GAO-06-246T. November 8, 2005. Hurricanes Katrina and Rita: Contracting for Response and Recovery Efforts. GAO-06-235T. November 2, 2005. Hurricane Katrina: Providing Oversight of the Nation's Preparedness, Response, and Recovery Activities. GAO-05-1053T. September 28, 2005. Biscuit Fire Recovery Project: Analysis of Project Development, Salvage Sales, and Other Activities. GAO-06-967. September 18, 2006. September 11: Overview of Federal Disaster Assistance to the New York City Area. GAO-04-72. October 31, 2003. Disaster Assistance: Information on FEMA's Post 9/11 Public Assistance to the New York City Area. GAO-03-926. August 29, 2003. Small Business Administration: Response to September 11 Victims and Performance Measures for Disaster Lending. GAO-03-385. January 29, 2003. September 11: Small Business Assistance Provided in Lower Manhattan in Response to the Terrorist Attacks. GAO-03-88. November 1, 2002. Los Angeles Earthquake: Opinions of Officials on Federal Impediments to Rebuilding. GAO/RCED-94-193. June 17, 1994. Hurricane Iniki Expenditures. GAO/RCED-94-132R. April 18, 1994. Time-Critical Aid: Disaster Reconstruction Assistance—A Better Delivery System Is Needed. GAO/NSIAD-87-1. October 16, 1986. Guidelines for Rescuing Large Failing Firms and Municipalities. GAO/GGD-84-34. March 29, 1984. Rebuilding Iraq: More Comprehensive National Strategy Needed to Help Achieve U.S. Goals. GAO-06-788. July 11, 2006. Foreign Assistance: USAID Completed Many Caribbean Disaster Recovery Activities, but Several Challenges Hampered Efforts. GAO-06-645. May 26, 2006. Foreign Assistance: USAID Has Begun Tsunami Reconstruction in Indonesia and Sri Lanka, but Key Projects May Exceed Initial Cost and Schedule Estimates.
GAO-06-488. April 14, 2006. Foreign Assistance: USAID’s Earthquake Recovery Program in El Salvador Has Made Progress, but Key Activities Are Behind Schedule. GAO-03-656. May 15, 2003. Foreign Assistance: Disaster Recovery Program Addressed Intended Purposes, but USAID Needs Greater Flexibility to Improve Its Response Capability. GAO-02-787. July 24, 2002. Foreign Assistance: Implementing Disaster Recovery Assistance in Latin America. GAO-01-541T. March 21, 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The size and scope of the devastation caused by the 2005 Gulf Coast hurricanes presents unprecedented rebuilding challenges. Today, more than a year and a half since the hurricanes made landfall, rebuilding efforts are at a critical turning point. The Gulf Coast must face the daunting challenge of rebuilding its communities and neighborhoods--some from the ground up. This testimony (1) places the federal assistance provided to date in the context of the resources likely needed to rebuild the Gulf Coast, (2) discusses key federal programs currently being used to provide rebuilding assistance, with an emphasis on the Department of Housing and Urban Development's (HUD) Community Development Block Grant (CDBG) program, (3) describes Louisiana's and Mississippi's approach to using CDBG funds, and (4) provides observations on planning activities in Louisiana and Mississippi and the federal government's role in coordinating rebuilding efforts. GAO visited the Gulf Coast region, reviewed state and local documents, and interviewed federal, state, and local officials. While the federal government has provided billions of dollars in assistance to the Gulf Coast, a substantial portion was directed to short-term needs, leaving a smaller portion for longer-term rebuilding. It may be useful to view this assistance in the context of the costs of damages incurred by the region and the resources necessary to rebuild. Some damage estimates have put capital losses at a range of $70 billion to over $150 billion, while the State of Louisiana estimated that the economic impact on its state alone could reach $200 billion. Such estimates raise important questions regarding additional assistance that will be needed to help the Gulf Coast rebuild in the future. To date, the federal government has provided long-term rebuilding assistance to the Gulf Coast through 2 key programs, which follow different funding models. The Federal Emergency Management Agency's public assistance program provides public infrastructure funding for specific projects that meet program eligibility requirements. HUD's CDBG program, on the other hand, provides funding for neighborhood revitalization and housing rehabilitation activities, affording states broad discretion and flexibility. To date, the affected states have received $16.7 billion in CDBG funding from supplemental appropriations--so far, the largest share of funding targeted to rebuilding. With the vast number of homes that sustained damage in Louisiana and Mississippi, each state allocated the bulk of its CDBG funds to homeowner assistance. 
Louisiana developed an assistance program to encourage homeowners to return to Louisiana and begin rebuilding while Mississippi developed a program to target homeowners who suffered losses due to Katrina's storm surge that were not covered by insurance. As of March 28, 2007, Louisiana has awarded 4,808 grants to homeowners with an average award amount of $74,250. Mississippi has awarded 11,894 grants with an average award amount of $69,669. Restoring the region's housing and infrastructure is taking place in the context of broader planning and coordination activities. In Louisiana and Mississippi, state and local governments are engaged in both short-and long-term planning efforts. Further, the President established a position within the Department of Homeland Security to coordinate and support rebuilding activities at the federal, state, and local levels. As states and localities begin to develop plans for rebuilding, there are difficult policy decisions Congress will need to make about the federal government's contribution to the rebuilding effort and the role it might play over the long-term in an era of competing priorities. Based on our work, we raise a number of questions the Subcommittee may wish to consider in its oversight of Gulf Coast rebuilding. Such questions relate to the costs for rebuilding the Gulf Coast--including the federal government's share, the effectiveness of current funding delivery mechanisms, and the federal government's efforts to leverage the public investment in rebuilding. |
DOT provides billions of dollars to states and other grantees annually to improve the nation's highway and transit infrastructure and safety. Most of this federal funding is provided by FHWA, FTA, and NHTSA. FHWA provides the vast majority of federal surface transportation funds—about $40 billion each year—to states to design, construct, and maintain the nation's roadway and bridge infrastructure through the federal-aid highway program. This program includes the nation's National Highway System, which consists of approximately 220,000 miles of the nearly 1 million miles of roadways eligible for federal aid, including the 47,000-mile Interstate Highway System. Historically, most surface transportation funds are distributed through annual apportionments established by statutory formulas that take into account a number of factors, including the estimated share of taxes that highway users in each state contribute to the Highway Trust Fund. We have previously reported that these formulas have only an indirect relationship to infrastructure needs, and many have no relationship to outcomes or grantees' performance. MAP-21 addresses several of our past recommendations by articulating national goals and adopting a performance-based approach to funding surface transportation projects. The act established seven national surface transportation goals for the federal-aid highway program: Safety—to achieve a significant reduction in traffic fatalities and serious injuries on all public roads. Infrastructure condition—to maintain the highway infrastructure asset system in a state of good repair. Congestion reduction—to achieve a significant reduction in congestion on the National Highway System. System reliability—to improve the efficiency of the surface transportation system. Freight movement and economic vitality—to improve the National Highway Freight Network, strengthen the ability of rural communities to access national and international trade markets, and support regional economic development. Environmental sustainability—to enhance the performance of the transportation system while protecting and enhancing the natural environment. Reduced project delivery delays—to reduce project costs, promote jobs and the economy, and expedite the movement of people and goods by accelerating project completion through eliminating delays in the project development and delivery process, including reducing regulatory burdens and improving agencies' work practices. The act and its implementing regulations set forth a three-stage process for the performance-based surface transportation transformation across multiple modes: 1. Rulemaking: DOT must establish performance measures; 2. Set and track performance targets: states, MPOs, and other grantees must set annual or multi-year targets based on these performance measures, and states report their progress in meeting such targets to DOT; and 3. Evaluate performance: DOT must evaluate whether states have met or made significant progress toward their targets. States have one year from the effective date of the performance measure final rules to establish their own performance targets for those measures. MPOs then have 180 days after the respective state's DOT sets its targets to establish their own targets or agree to support the state's target. If states do not make significant progress toward meeting some targets, they must report action they will undertake to achieve their targets.
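The statutory clock described above is simple date arithmetic, sketched below in Python. The rule effective date used here matches the May 20, 2017 effective date cited later in this statement for the final rules; the date on which a state sets its targets is hypothetical.

```python
from datetime import date, timedelta

# Illustrative deadline arithmetic for the target-setting stage described above.
rule_effective = date(2017, 5, 20)                               # effective date of a final rule
state_deadline = rule_effective.replace(year=rule_effective.year + 1)  # one year later
state_targets_set = date(2018, 3, 1)                             # hypothetical date a state DOT sets targets
mpo_deadline = state_targets_set + timedelta(days=180)           # MPOs have 180 days after that

print(f"State target-setting deadline: {state_deadline}")
print(f"MPO deadline (180 days after the state sets targets): {mpo_deadline}")
```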
Generally, the targets established will cover a 4-year reporting period and will be reported biennially. MAP-21 also mandated that states begin reporting their progress toward targets to DOT no later than October 2016. In addition, the legislation directed DOT to report to Congress in October 2017 on the effectiveness of performance-based planning as a tool for guiding transportation investments and the effectiveness of the performance-based planning process of each state. The performance-based approach remains essentially unchanged in the FAST Act—the most recent surface transportation reauthorization measure—enacted in December 2015. In administering approximately $40 billion a year to grantees for the federal-aid highway program, FHWA manages a variety of grant programs that provide funding for the development of infrastructure projects and, jointly with FTA, administers the implementation of the statewide and metropolitan planning requirements carried out by states and MPOs, discussed below. FHWA oversees the federal-aid highway program primarily through its 52 division offices located in each state, the District of Columbia, and Puerto Rico. To help lead the agency's efforts to implement a performance-based approach, FHWA established the Office of Transportation Performance Management in 2013 to help oversee the rulemaking process and the agency's implementation efforts. Regulatory agencies, such as DOT, have authority and responsibility for developing and issuing regulations. The basic rulemaking process is spelled out in the Administrative Procedure Act. Among other things, the act establishes procedures and broadly applicable federal requirements for informal rulemaking. In addition, the act generally requires agencies to publish a notice of proposed rulemaking in the Federal Register. After giving interested persons an opportunity to comment on the proposed rule by providing "written data, views, or arguments," and considering those comments, the agency may then publish the final rule. In our prior work, we found that rulemakings can range from 1 to nearly 14 years to complete, depending on a number of factors. More specifically, we found that a rulemaking takes an average of 4 years to complete, based on a sample of 16 major or other significant rulemakings from four federal agencies, including DOT. TPM will be carried out as part of the transportation planning process prescribed by statute for state DOTs and MPOs. This process addresses both urbanized and nonmetropolitan areas of the state and includes planning and coordination for both highway and transit needs jointly administered by FTA and FHWA. Under this process, states engage in continuing cross-agency coordination and planning that requires periodic updates of specific plans. For example, states develop a variety of plans for overall transportation programs, including a long-range statewide transportation plan and a state transportation improvement program. In urbanized areas, MPOs are also required to produce a long-range transportation plan, referred to as a metropolitan transportation plan, and a transportation improvement program. While MAP-21 left the basic framework of the transportation-planning process largely untouched, the act introduced critical changes to the process by requiring grantees such as states, MPOs, and transit agencies to move toward more performance-based highway and transit programs, and linking investment priorities to the achievement of performance targets.
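One way to picture what states will track and report under this framework is a simple record per measure holding a baseline and the 2-year and 4-year targets, plus a check of whether a reported value counts as progress. This is a minimal Python sketch; the measure name, the numbers, and the progress test (meeting the target or improving on the baseline) are illustrative simplifications, not FHWA's formal significant-progress determination.

```python
from dataclasses import dataclass

@dataclass
class MeasureTarget:
    """One performance measure tracked over a 4-year performance period."""
    measure: str
    baseline: float          # condition at the start of the period
    midpoint_target: float   # 2-year target
    full_target: float       # 4-year target
    higher_is_better: bool   # e.g., True for percent of pavement in good condition

def made_progress(t: MeasureTarget, reported: float, target: float) -> bool:
    # Simplified illustration: progress is credited if the reported value meets
    # the target or has improved relative to the baseline. FHWA's actual
    # significant-progress determination is more detailed.
    if t.higher_is_better:
        return reported >= target or reported > t.baseline
    return reported <= target or reported < t.baseline

# Hypothetical pavement-condition entry reported at the biennial midpoint.
pavement = MeasureTarget("% Interstate pavement in good condition", 52.0, 54.0, 56.0, True)
print(made_progress(pavement, reported=55.1, target=pavement.midpoint_target))  # True
```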
Under the new TPM approach, states and MPOs must coordinate on the target-setting and transportation-planning process, and states are required to integrate performance targets into both existing planning processes and plans. TPM also seeks to ensure that the transportation-planning process provides a performance-based approach to decision-making in support of national goals. FHWA has recently completed the first of the three-stage TPM implementation process—rulemaking, setting and tracking performance targets, and evaluating performance—issuing all six interrelated rules that establish both the performance measures focused on the national transportation goals and the overarching performance-management framework (see fig.1). The agency issued the first of the six rules in March 2016. In January 2017, the last two rules were issued and became effective in May 2017, with part of one rule delayed indefinitely. Completing the rulemaking process was a substantial undertaking for FHWA, as none of the rules had been completed when we last reported on this topic in 2015. With the completion of the six final rules, the framework for a performance-based approach to highway transportation management is now in place. FHWA is responsible for six rules related to the condition and performance of the federal highway system, one of which is administered jointly with FTA (see table 1). These six rules are interrelated because some establish specific performance measures while others relate to the transportation-planning processes or specific federal programs related to the performance measures. For example, three rules established the national performance measures (17 measures in total) to be used by state DOTs and MPOs, including rules focused on specific infrastructure condition measures, while other rules outline how the performance-based approach will be carried out through the transportation-planning process that occurs between states and MPOs. The 17 performance measures focus on key indicators of (1) safety, (2) pavement and bridge condition, and (3) performance of the surface transportation system (reliability, congestion, freight movement, and air quality). Generally, these performance measures correspond to national federal-aid highway program goals and performance areas set forth in MAP-21. For example, one performance measure—the percentage of pavements on the Interstate System in good condition—corresponds to the “pavement condition on the Interstate System” performance area and to the national goal to “maintain the highway infrastructure asset system in a state of good repair.” See appendix I for additional information on the 17 performance measures and the corresponding goals and performance areas. Many of these measures are fundamentally new to states while a few other measures are similar to information some states already have experience analyzing and tracking. For example, in 2015 we found that only 20 to 40 percent of states had adopted performance management practices in the areas of congestion, air quality and freight. In contrast, states have been working to develop safety performance measures in highway safety grant programs since 2008. The final System Performance measure rule establishes six measures based on MAP-21 requirements to assess the performance of the Interstate and non- Interstate National Highway System, traffic congestion, freight movement, and emissions. 
For example, two of these measures are used to carry out the National Highway Performance Program, including a measure of the percent of person-miles traveled on reliable Interstate system roadways. The Pavement and Bridge rule establishes six measures and contains minimum thresholds for pavement and bridge conditions, and if a state falls below those thresholds it must spend a portion of its annual federal funding for improvements. While DOT began the rulemaking process shortly after MAP-21 was enacted, it has taken several years for the rules to be developed and move through the process, resulting in the recent issuance of the final rules. For example, DOT completed the first two rules in March 2016 and issued the last two of the six rules in January 2017. The last two rules took effect on May 20, 2017. As we noted in 2015, MAP-21 established a deadline for promulgating a rulemaking for five of the rules; however, no such deadline existed for one of the rules. At that time, we found that DOT had missed five performance-related rulemaking deadlines established in MAP-21; however, given the extent of regulatory changes required and the length and complexity of the rulemaking process, these deadlines may have been ambitious. For example, in response to the proposed System Performance rule, DOT received more than 8,800 public comments, which the agency then had to review and respond to before finalizing the rule. In their comments on the System Performance, Pavement and Bridge, and Planning rules, as well as in interviews we conducted with selected states, states identified the complexity of the data and the capabilities needed to use that data to set performance targets and measure progress under the System Performance rule as the most pressing challenge in implementing the performance-based approach. In contrast, states anticipated few challenges implementing the Pavement and Bridge rule and some challenges related to the Planning rule. According to selected states and others, anticipated data challenges in implementing the System Performance rule include the size of the data set required, the process of combining data sources, and other issues that may result in states facing significant additional costs and difficulties. In the final rule, FHWA took steps to respond to these anticipated challenges, which were raised in public comments, by revising some performance measures and eliminating others. Nonetheless, some stakeholders we spoke to believe that FHWA did not fully address some significant concerns. FHWA officials stated that revisions made in the final rule, along with new data improvements and planned technical assistance, adequately address data concerns expressed by the stakeholders. In comments in response to the proposed System Performance rule and in our interviews with officials in selected states, states reported that the anticipated costs and difficulties of collecting, analyzing, and reporting data under the rule were the major challenge affecting TPM implementation. The System Performance rule includes six performance measures related to highway system performance, freight movement, and congestion. Twenty-nine of the 36 state DOTs that commented on the proposed rule and 8 of the 10 states we met with cited the System Performance rule's reliance on an FHWA-prescribed data set as the major source of the anticipated challenges.
FHWA provides this data set, the National Performance Management Research Data Set (NPMRDS), to states and MPOs, which will then use it to calculate and set targets and measure progress for four of the six performance measures established in the rule. FHWA selected this data set to calculate the metrics for travel time and speed-based measures to promote consistency and coverage at a national level. Specifically, anticipated challenges related to the dataset included: (1) its size, (2) the process necessary to combine disparate traffic volume and speed data, and (3) the quality of the data. Size of the data set—According to AASHTO representatives and academic experts from two institutions we met with, the sheer size of the data set and server space required will affect their ability to analyze the data and will be a challenge for many states. AASHTO, for example, said reporting on the measures will require states to develop computerized applications to analyze millions, in some cases hundreds of millions of data points, principally in order to identify areas where traffic is congested. Officials from one state’s DOT said their staff does not have the training or experience to conduct the data analysis necessary to report congestion bottlenecks and congestion with a data set as large as NPMRDS. Five of the 10 state DOTs we interviewed raised similar concerns about their ability to analyze the large dataset. Process of combining data sources—Four of the six measures in the System Performance rule require estimates of delay or estimates of the reliability of person miles traveled or truck travel times. Data will need to be combined from two different data sources before these performance measures can be calculated, and most state officials and academic experts we spoke to agree that this initial step will be challenging. The two data sources include traffic volume data from the Highway Performance Monitoring System—which states have been using for many years to monitor highway conditions—and the NPMRDS data containing highway traffic speeds. For example, one state said that combining the two sets of data—a process typically called “conflation”—requires manual and highly technical adjustments to the underlying data. This process is particularly challenging because it requires advanced technical expertise. Some state DOTs also emphasized that they have limited experience using the data, which will necessitate significant additional work to perform the required calculations. Data quality—Some states expressed concerns with the quality and completeness of the NPMRDS data. For example, Michigan DOT officials expressed concerns about the amount of missing travel time data. In the Detroit area in particular, more than 50 percent of the data are missing for some routes. These officials noted they do not believe sound investment decisions will be made if they are based on incomplete data. Officials we met with in most states, many comments by states on the proposed rule, and the national MPO association stated that they anticipate added costs and burdens related to implementing the System Performance rule, given the technical capabilities and expertise necessary to mitigate these data issues and use the NPMRDS data set. In comments on this rule, AASHTO noted that overall, the measures and calculation methods are considered to be overly complex even by those states that have been viewed as leaders in developing performance measures and related tools, such as Missouri, Texas, and Washington. 
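To make the data challenge concrete, the following Python sketch walks through the kind of processing states described: joining segment-level travel-time records (the NPMRDS role) to traffic-volume and length data (the HPMS role), then flagging unreliable segments and weighting the result. The handful of records, the 80th-to-50th-percentile ratio, and the 1.5 threshold are illustrative simplifications of the rule's travel time reliability calculation; real NPMRDS files contain millions of short time-interval records rather than the few values shown here.

```python
import statistics

# Hypothetical travel times (seconds) keyed by roadway segment ID,
# standing in for NPMRDS-style speed/travel-time records.
travel_times = {
    "seg_001": [110, 112, 115, 118, 200, 210, 109, 111, 240, 113],
    "seg_002": [95, 96, 94, 97, 95, 98, 96, 95, 94, 97],
}

# Hypothetical volume data standing in for HPMS-style traffic counts and
# segment lengths, which must be joined ("conflated") to the travel-time
# segments before any person-miles weighting is possible.
volumes = {
    "seg_001": {"aadt": 85_000, "miles": 1.2},
    "seg_002": {"aadt": 40_000, "miles": 0.8},
}

def reliability_ratio(times):
    """Illustrative reliability ratio: 80th percentile over median travel time."""
    ordered = sorted(times)
    p80 = ordered[int(0.8 * (len(ordered) - 1))]
    p50 = statistics.median(ordered)
    return p80 / p50

reliable_miles = 0.0
total_miles = 0.0
for seg, times in travel_times.items():
    vol = volumes[seg]                           # the join step states said is hard at scale
    weighted_miles = vol["aadt"] * vol["miles"]  # crude stand-in for person-miles traveled
    total_miles += weighted_miles
    if reliability_ratio(times) < 1.5:           # illustrative reliability threshold
        reliable_miles += weighted_miles

print(f"Share of weighted miles on reliable segments: {reliable_miles / total_miles:.0%}")
```

Even in this toy form, the join between the speed and volume sources has to happen before any measure can be computed, and that is the step states and MPOs said is most demanding at NPMRDS scale.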
Similarly, a national MPO association told us that the vast majority of MPOs do not have the capacity to conduct their own analysis of regional system performance and may need to rely on the state to do the required analyses. For example, 28 of the 36 state DOTs that commented on the proposed rule and 7 out of our 10 selected states anticipate technical, human resource, or funding challenges associated with the data requirements. Several states specifically noted they would need outside expertise, such as academic experts or contractors, or additional resources such as training, hardware, or software for staff to address these data challenges, which can be costly responses for states facing limited resources and budget constraints. For example, officials from one state DOT that is considered to be very experienced with transportation performance measurement commented that to implement the rule, state DOTs and MPOs must find additional employees and funding to complete the extensive workload needed to collaborate with other agencies, as the rule requires for the transportation planning process. Officials from another experienced state DOT also commented that the use of NPMRDS requires data-management and statistical-evaluation skills that are not commonly found within DOT and MPO staffs. Thus, these officials noted using these data for analytical and reporting requirements triggers the need to invest in capabilities outside of their organizations, such as working with universities to provide the expertise needed to provide ongoing data analysis and data management of the NPMRDS or contracting for expensive proprietary systems that could handle the computational- and analytical-reporting challenges. In contrast to the System Performance rule, states that commented on the final rules and the selected states we met with generally did not anticipate major challenges in implementing the Pavement and Bridge final rule. According to AASHTO, the impact on states’ implementing the Pavement and Bridge rule will be less significant and onerous because most states have been measuring pavement and bridge conditions for years. FHWA has also previously reported that most states are already actively involved in preparing to implement legal requirements in these more mature program areas. According to FHWA, states have historically expended a large amount of funds and resources collecting and analyzing pavement and bridge data, and they are more experienced with the analyses necessary to calculate these measures than they are for the new System Performance rule. Because the TPM rules are interrelated, elements of the final Planning rule also affect implementation of the System Performance rule. Given this effect, states and MPOs expressed some general concerns in their comments on the final rules and in interviews with how state and local planning activities will be carried out. Under the state transportation planning process, setting and agreeing upon performance targets for the 17 performance measures necessitates coordination between a state and MPOs in the state. Once states set their performance targets, generally MPOs will have 180 days to decide whether to set their own targets based on conditions and priorities in their respective planning areas, or to agree to the state-level targets. Thirteen of 36 state DOTs and the MPO national organization expressed concern with the level of coordination required to set and agree on targets. 
Three state DOTs commented on the Planning rule that setting performance targets will be a significant challenge for interstate MPOs that have membership in multiple states. This is due to each state’s having different resources and different transportation policies, goals, and priorities. According to FHWA, commenters also said that it is not uncommon for states, MPOs, and operators of public transportation to have different priorities that may conflict with each other, potentially leading to conflicts when setting performance targets and attempting to implement them. In response to these comments, FHWA and FTA emphasize in the final Planning rule the importance of state DOTs and MPOs engaging in early and ongoing interagency coordination during their performance-based planning and programming. Given that states and MPOs are just beginning the target- setting process for the Pavement and Bridge and System Performance rules, the extent to which engaging in early and ongoing coordination throughout the planning process will help resolve situations where states and MPOs may have differing priorities driving their preferred performance targets is not yet clear. In response to comments received on the proposed System Performance rule, FHWA stated that it had taken steps to reduce the complexity of the requirements, including eliminating, revising and simplifying measures, and reducing the burden of compliance. FHWA eliminated some measures and merged others in the final rule. For example, four of the six measures in the final rule require the use of data derived from NPMRDS, compared to seven in the proposed rule. FHWA also took steps to simplify the required data processing and calculation of the metrics, including changes to specific data requirements such as how travel time intervals are calculated, which agency officials say effectively reduces the amount of data analysis necessary. The agency also stated its commitment to working with state DOTs and MPOs to establish a joint effort to acquire services and tools that will help with activities such as data processing, analysis, measure calculation and reporting. According to transportation industry association representatives and some states we met with, the changes made in the final System Performance rule do not address most of the expected data challenges associated with the process of combining data sources, the size of the data, and states lack of experience with the data. Specifically, AASHTO representatives told us that while the final rule made some technical changes to specific data requirements, overall, significant data challenges—related to both the size of the data set and the process of combining the speed and volume data sets—remain. AASHTO representatives also said that the new measures in the final rule are not responsive at all to their comments and although FHWA made changes to the rule—such as changing the weighting of the travel time measures from system miles to person-miles traveled, which is an adjustment to the measure itself—overall, the final rule is ultimately more complicated. After the final System Performance rule was issued, officials from two state DOTs we met with said that the final rule did not substantively change the concerns identified in the original comments and that many of the challenges in implementing the rules remain. 
In response to these concerns, FHWA officials said that they made significant alterations to the measures in the final rule in direct response to the extensive and substantive comments received on the proposed rule. The officials also stated that the revisions made in the final rule, along with new NPMRDS improvements, planned joint efforts with the states, and technical assistance, adequately address data concerns expressed by the stakeholders. For example, FHWA officials stated that the changes made reduce the size of the data set by approximately two-thirds. In addition, after issuing the final System Performance rule, FHWA executed a new contract in April 2017 containing updates to the NPMRDS data that officials said would also help address anticipated data challenges. According to agency officials, the new NPMRDS data will substantially reduce the workload and complexity for state DOTs and MPOs. This new data set will help simplify the initial step of combining the two data sources necessary to begin calculating four of the System Performance measures, reducing the technical burden on the states. FHWA officials stated that data provided under this new contract will be available for states' use in July 2017. Recognizing that some states have limited experience with the type of data analysis that will be necessary to calculate the performance measures, officials stated that FHWA is committed to providing technical assistance to help alleviate burdens associated with states' lack of experience with the data.

To guide the transition to national transportation performance management, FHWA has engaged in extensive outreach with states and MPOs throughout the initial, rulemaking stage of TPM implementation and continues to develop a range of tools, trainings, and workshops to help prepare states and MPOs for key activities related to the second stage—setting and tracking performance targets. The agency has also begun to determine its internal roles and responsibilities for both the target-setting and evaluation stages of the transition and has taken steps to meet its congressional reporting requirements in the interim. However, the agency has not clearly defined its overarching goals for the transition, has not developed an implementation plan showing how the various efforts under way relate to each other and when they will be completed, and has not clearly communicated its approach to states and MPOs. Without these elements, states and MPOs may struggle to fulfill their performance management responsibilities, and FHWA may find it challenging to effectively use its resources to assist states with implementation challenges, to evaluate their performance, and ultimately to use that information, as the final performance measure rules state, to "communicate a national performance story" and assess the impacts of federal funding.

As we found in 2015, during the initial rulemaking stage of TPM implementation, FHWA engaged in extensive communication and outreach with states and MPOs. FHWA is currently developing tools and training to advance the second (setting and tracking performance targets) and third (evaluating performance) stages of TPM implementation. These efforts build upon the agency's earlier communication and outreach efforts. While the agency continues to use the webinars, workshops, and trainings employed during the rulemaking stage, DOT has changed the substantive focus of these efforts to address the second implementation stage.
For example, the focus has shifted to helping states and MPOs understand and implement specific performance measure requirements, data-management approaches, and target-setting practices. FHWA's efforts include the following:

Webinars: Beginning in 2012, FHWA used webinars to communicate information related to the rulemaking process. More recently, the agency has begun hosting webinars focused on data use and target setting. For example, FHWA conducted a webinar in July 2016 on the basics of target setting that defined key terms, explained the five steps in the target-setting process, and provided a preview of a new course on effective target setting. In addition, a target-setting webinar on the final safety rule in September 2016 included an overview of FHWA's recommended process for coordination on safety targets.

Workshops: In June 2015, FHWA began facilitating peer-to-peer workshops, including a 2016 session on data management, a topic that will play a critical role in TPM implementation as states use data to set appropriate targets and track progress toward them. In addition, FHWA has conducted workshops on safety target setting and coordination and has begun to conduct workshops that focus on how to implement technical aspects of the Pavement and Bridge and System Performance rules now that they are in effect.

Web-based assessment tools: In December 2016, FHWA released a new web resource and guidebook that state and local transportation agencies can use to assess their transportation performance management approach and identify steps they might take to implement or improve that approach. The web resource includes a self-assessment tool states and MPOs can use to determine their level of maturity in TPM areas such as target setting, performance-based planning, and data usability and analysis. It also provides additional resources states and MPOs can use to strengthen their approach where needed in these areas.

Trainings: FHWA and the National Highway Institute are developing 8 training courses covering topics such as the target-setting process for specific TPM elements, but many of these are new or were delayed due to the regulatory review. For example, the "Steps to Effective Target Setting for TPM" and the "Performance-based Programming and Planning" trainings are newly available and being delivered in summer 2017. According to FHWA, trainings pertaining to the rules that took effect in May 2017—"TPM for Pavement" and "TPM for Bridges"—will be available in the fall of 2017. As of July 2017, 5 of the 8 courses were available.

FHWA also has efforts under way to guide its internal TPM agency activities. These efforts are designed to organize its staff, to ensure the agency is prepared to make its statutorily mandated report to Congress on the status of TPM efforts by October 2017, and to lay the groundwork for receiving and displaying information from states and MPOs when the time comes for them to report their targets and for DOT to evaluate states' performance.

Developing internal guidance: FHWA issued internal guidance describing the roles and levels of responsibility that headquarters, division office, and resource center staff will assume, including assuring data quality and influencing target setting. This document, issued in February 2015, outlined preliminary responsibilities, and FHWA officials acknowledged that more work remains to be done to solidify these roles as TPM evolves.
Preparing to meet mandatory reporting requirements: Despite delays in the rulemaking process, FHWA has taken steps to assist states in meeting their statutory reporting requirements. For example, to assist states in reporting progress toward achieving targets to DOT by October 1, 2016, as mandated, the agency issued guidance to states on how to submit preliminary information to meet requirements in the absence of the finalized Pavement and Bridge and System Performance rules. FHWA officials said that they have also developed two web portals for states to use to report on progress for the Highway Safety Improvement Program performance requirements and other measure areas.

Planning to assess state readiness: FHWA also planned and developed a draft survey that would both provide the agency with information on the status of TPM practice across the states and provide specific examples that would enable FHWA to meet its 2017 requirement to report to Congress. Officials said they also planned to use the survey results to identify further assistance and guidance needs. The officials also told us that they want to administer a similar survey in 2 years to assess states' ability to meet TPM requirements and identify the major challenges that have arisen. While the survey was initially planned for the end of 2016, it was delayed during the review process and remained under review as of June 2017. Agency officials were unsure when the survey would ultimately be administered, but said they had identified alternate sources of information to use to assess states' progress in order to meet congressional reporting requirements. For example, FHWA has been collecting information from its division offices on how states and MPOs have been working on TPM-related issues.

Planning web-based performance reporting tools: FHWA is also in the preliminary stages of planning a web portal that states and MPOs can use to submit their performance reports to the agency and a website that the agency plans to use to publicly display state and MPO targets and actual performance data. Both the Pavement and Bridge and System Performance final rules state that the website will "likely include infographics, tables, charts, and descriptions of the performance data that state DOTs report to FHWA." Agency officials said the site will show both targets and actual performance numbers and will eventually be expanded to a DOT-wide site incorporating performance figures from other modes of transportation. The officials said that they plan to share prototypes of the web portal and website for states' comments and review during fall 2017 and to have the design completed by October 2018.

The new performance-based approach to transportation is a transformational shift, affecting billions of dollars in federal transportation funds, including almost $40 billion in FHWA grant funding to states, and holding states and grantees accountable for results, in many cases for the first time. In 2003, we identified key practices from major private- and public-sector organizational mergers, acquisitions, and transformations that federal agencies could implement to successfully manage a transformation. We found that several key practices can help federal agencies implement needed transformational changes, such as transforming to a more results-oriented culture. Some of these key practices are applicable to FHWA's approach to implementing TPM.
In particular, establishing a coherent mission and integrated strategic goals that are clear to both agency employees and grantees from the outset is critical to guide a successful transformation and involves determining strategies and resources to effectively accomplish the goals of the transformation. Developing, communicating, and constantly reinforcing the mission and strategic goals of the transformation gives employees a sense of what the organization intends to accomplish and helps them determine what they must do differently to help the organization achieve success. In 2003, we also identified setting implementation goals and a timeline as a key practice, given that transformations are lengthy processes that can take years to complete, and noted the importance of focusing on a set of principles and priorities at the outset of the transformation. In addition, we noted that identifying the processes that will be used to achieve results should focus attention clearly on critical stages of the transformation and determine essential activities that must be completed by certain dates. Furthermore, our past work on organizational transformations found that early, effective, and ongoing communication is essential to implementing a successful transformation, as it engages employees and stakeholders and builds an understanding of the purpose of planned changes. Principles of internal control in the federal government likewise provide that management should internally and externally communicate the quality information necessary to achieve the agency's objectives.

While FHWA has developed tools and trainings to advance both the rulemakings and implementation, the agency has not pulled these efforts together into a comprehensive plan to guide its work through the final two stages of implementation. Specifically, DOT has not:

Articulated a mission and overarching goals to guide the transition: While national goals for the transportation system were established in MAP-21, FHWA has not coherently articulated the mission of TPM or specific strategic goals related to each stage of the TPM transition in a comprehensive manner. For example, the agency has not identified strategic implementation goals describing what the transformation is intended to accomplish and how the various efforts the agency has under way will help accomplish those goals. Each of the final performance measure rules states that TPM will provide FHWA the ability to "better communicate a national performance story" and to assess the impacts of federal funding. MAP-21 also provided that performance-based planning and programming will improve project decision-making, thereby ensuring efficient investment of federal funds. However, FHWA has not identified specific processes or essential activities that will be used to achieve these results. For example, the agency has not explained the process it will use to analyze states' performance data to assess the impacts of federal funding on the surface transportation system. Furthermore, FHWA has not described how the individual tools and trainings it has been developing contribute to a comprehensive approach to achieving the national goals and planning outcomes. If FHWA does not provide a sense of what it intends to accomplish or determine what must be done differently to help achieve successful TPM implementation, states may not see how TPM is intended to help them improve their transportation planning and decision-making processes.
Developed an implementation plan and timelines: FHWA has not developed and made public a comprehensive plan describing specific deliverables or actions the agency intends to take to address its ultimate goals, identified how these planned activities will help move the agency and its grantees through the two remaining stages of the transition, or established timelines for when these activities will be carried out. For example, FHWA did not make clear at the outset of the transition how it would prepare grantees or focus on a set of principles and priorities for implementation, a key practice for successful transformations. As a result, states have not had a plan that lays out the various efforts that FHWA is undertaking and plans to undertake to help address anticipated challenges. While FHWA has begun to develop individual plans to outline implementation efforts for some specific rules, the agency has not developed an overarching plan that would describe the range of efforts across the six rules or explicitly stated what actions it plans to take to facilitate the implementation of these rules. For example, officials from one state DOT told us that they were unsure how the transition would work in practical terms because all the presentations FHWA provided concerning future plans had been very general. While DOT will begin assessing states' performance in the third implementation stage, the agency has not described how it will evaluate performance nationwide. For example, while FHWA officials told us they are developing a website to display state targets and performance data, they have not outlined specific plans for analyzing, displaying, or using the data in a way that will provide practical insights into the condition of the nation's surface transportation system or better communicate a national performance story. Furthermore, the agency has not developed a timeline identifying when it will initiate and conclude specific activities, making it difficult for states and MPOs to know when the tools, training, and guidance FHWA is developing to help them meet the new requirements will be available. For example, officials from one state DOT told us they struggled to begin complying with new requirements after rules were finalized because they had to wait months for specific subject-matter guidance to be released.

Communicated its approach: FHWA has not communicated its overall approach to TPM to states and MPOs—the grantees that are responsible for carrying out several of the performance management requirements. While the majority of state DOT officials we met with were generally appreciative of the communication the agency provided during the rulemaking stage and found the various webinars to be valuable in conveying the status of agency efforts in the first stage of implementation, the agency's broader plans for full TPM implementation were unclear to them. For example, officials we spoke with from 9 out of the 10 states said they were not aware of FHWA's plans for TPM beyond the rulemaking process. As such, states and MPOs may not have a sense of the purpose behind the significant changes they will need to make to support this transformation of the surface transportation system or know how to support the agency's objectives of communicating a national performance story and assessing the impacts of federal funding.
For example, officials from one state DOT said they were not aware of any FHWA documentation that would constitute a concrete plan for developing a framework that would provide an in-depth look at the nation's infrastructure. FHWA officials stated they have not developed an implementation plan with goals and time frames because they have been focused on completing the rulemakings. Agency officials stated that any efforts to formalize an implementation plan would need to come after all the rules took effect—which happened in May 2017—as well as after completion of the previously discussed survey effort. Agency officials also acknowledged that an implementation plan would be helpful but challenging to develop. In the absence of a timely implementation plan, DOT may not be ideally positioned to coordinate its technical assistance activities or address implementation challenges anticipated by states—such as the data-related challenges discussed earlier. An implementation plan could also help the agency make the best use of the performance information states will begin reporting to improve transportation decision-making, communicate a national performance story, and assess the impacts of federal funding. If grantees do not have a clear understanding of the specific implementation goals and processes that their TPM efforts will contribute to or how FHWA plans to use their performance information, it may be difficult for them to support implementation.

As our past work has demonstrated, it is increasingly important to improve the effectiveness of surface transportation programs by establishing links to performance and measuring progress toward clear national goals in order to maximize the use of available resources. TPM represents a fundamental transformation in the way the agency will hold grant recipients responsible for achieving congressionally established goals and measure overall program performance. While the first stage of implementation, the rulemaking process, took nearly 5 years to complete, FHWA is now at a critical juncture, preparing for the second and third implementation stages—target setting and performance evaluation. States and other grantees anticipate significant challenges associated with implementation of these new requirements, including specific challenges related to the complexity of the data and the processes necessary to calculate and track certain performance measures. It remains to be seen whether and to what extent the revisions made to the final rule and other actions by FHWA adequately address the concerns expressed by stakeholders. As the states begin the process necessary to agree upon and set performance targets for each of the required TPM performance measures, they could benefit from a plan outlining the agency's overarching approach to TPM and explaining how the various actions the agency has taken during the initial stage of implementation are related to one another and will help prepare states for implementation. Without a formal implementation plan guiding and coordinating FHWA's efforts, FHWA may struggle to articulate the goals and purpose of the transformation, to tell a national performance story, and to identify and address the activities best suited to help states and MPOs overcome anticipated implementation challenges or to identify future efforts that would help achieve specific TPM implementation goals.
With the issuance of the interrelated final rules and with states beginning the target-setting process, there is an opportunity for FHWA at this critical juncture to articulate an overarching approach to TPM, including specific steps the agency plans to take to operationalize the transformation, time frames for key activities and efforts, and specific actions the agency will take to help states and other grantees address anticipated challenges.

To better position DOT and FHWA to effectively guide the transformation of federal surface transportation programs to a more performance-oriented approach, and to help states and MPOs overcome anticipated challenges, we recommend that the Secretary of Transportation direct the Administrator of FHWA to take the following two actions: 1. Develop a formal TPM implementation plan to include overarching implementation goals, specific actions FHWA plans to take to help states and MPOs successfully implement TPM, and corresponding timelines. 2. Publicly communicate this plan and approach to build a shared understanding of the goals and purpose of the transformation with its grantees.

We provided a draft of this report to DOT for review and comment. In its comments, reproduced in appendix II, the agency concurred with our recommendation. DOT also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Transportation and the Administrator of the Federal Highway Administration. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix table: national goals and performance measures associated with the System Performance rule.
National goals: Congestion Reduction: To achieve a significant reduction in congestion on the National Highway System. System Reliability: To improve the efficiency of the surface transportation system. Freight Movement and Economic Vitality: To improve the National Highway Freight Network, strengthen the ability of rural communities to access national and international trade markets, and support regional economic development. Environmental Sustainability: To enhance the performance of the transportation system while protecting and enhancing the natural environment.
Performance areas: Performance of the Interstate System; Performance of the non-Interstate NHS; Freight movement on the Interstate System; Traffic congestion.
Performance measures (as recoverable from the table): 3. Truck travel time reliability index; 4. Annual hours of peak-hour excessive delay; 5. Percent of non-single occupancy vehicle; 6. Total emission reductions.
Table note: ...and Bridge Condition for the National Highway Performance Program (Pavement and Bridge); and Assessing Performance of the National Highway System, Freight Movement on the Interstate System, and Congestion Mitigation and Air Quality Improvement (System Performance). These rules were created in response to the Moving Ahead for Progress in the 21st Century Act (MAP-21), which required the Secretary of Transportation to establish measures states could use to assess their performance in the 12 areas listed in this table. These performance measures are intended to create a framework to support the seven national goals established by MAP-21. Part of the System Performance final rule pertaining to the greenhouse gas measure was delayed indefinitely.
82 Fed. Reg. 22879 (May 19, 2017). FHWA plans to publish another notice of proposed rulemaking regarding the greenhouse gas measure at a later date. In addition to the individual named above, Steve Cohen (Assistant Director); Maria Wallace (Analyst in Charge); Delwen Jones; Alex Lawrence; Crystal Wesco; Sarah Wilson; and Elizabeth Wood made key contributions to this report. | Since 2008, GAO has highlighted the need to demonstrate the outcomes of the billions of dollars the Department of Transportation (DOT) provides to states and other grantees for surface transportation programs. MAP-21 included provisions for DOT and its grantees to move toward a performance-based approach, transforming federal surface transportation programs by holding states and other grantees accountable for results, in many cases for the first time. GAO was asked to review DOT's implementation of TPM. This report focuses on FHWA, which administers the largest grant program of the three DOT agencies involved, and (1) examines the progress made in developing rules to establish a national performance-based approach and (2) evaluates how FHWA is guiding the transition to TPM, among other objectives. GAO reviewed proposed and final rules and information on rulemaking activities and interviewed FHWA officials, national transportation organizations, and a non-generalizable sample of 10 state departments of transportation, among others. States were selected based on factors such as geographic distribution, population, and urban and rural characteristics. The Federal Highway Administration (FHWA) recently issued the last of six interrelated rules to implement a new performance-based approach to federal surface transportation grant programs. Three of the six rules establish 17 total performance measures in the areas of safety, pavement and bridge conditions, and performance of the surface transportation system (congestion, freight movement, reliability, and air quality). Rulemaking was the first of three stages to implement the transportation performance management (TPM) approach, which was required by the Moving Ahead for Progress in the 21st Century Act (MAP-21) in 2012 (see fig.). The agency issued the first of the six rules in March 2016. In January 2017, the last two final rules were issued and became effective in May 2017, with part of one rule related to greenhouse gas emissions delayed indefinitely. FHWA is developing tools and training to help implement TPM but has not yet developed a comprehensive plan to guide implementation. The agency has begun to help states and regional metropolitan planning organizations (MPO) prepare to set targets and begin tracking performance as part of the second implementation stage. However, the agency has not clearly articulated strategic goals for the TPM transition, developed an implementation plan showing how the various efforts under way relate to each other and when they will be completed, or clearly communicated the approach to states and MPOs. For example, the agency has not identified strategic goals describing what each stage of the transformation is intended to accomplish and how the various efforts under way will help accomplish those goals. While FHWA has stated in its rulemaking that it wants to “communicate a national performance story,” it has not identified specific processes or activities that will be essential to achieving these results. 
Developing, communicating, and reinforcing the goals of a transformation, as well as specific activities and timelines for achieving those goals, are among the key practices that GAO has identified for major organizational transformations. An implementation plan could help the agency better articulate the goals of the transformation, make the best use of the performance information states will begin reporting, and help assess the effects of federal funding to improve investment decision-making. FHWA should develop a TPM implementation plan that includes goals and specific actions and timelines, and publicly communicate that plan. DOT concurred with the recommendation. The agency also provided technical comments, which were incorporated as appropriate. |
TEA-21 authorized a total of $36 billion in "guaranteed" funding for a variety of transit programs, including financial assistance to states and localities to develop, operate, and maintain transit systems. Under one of these programs, New Starts, FTA identifies and funds worthy fixed-guideway transit projects, including heavy, light, and commuter rail; ferry; and certain bus projects (such as bus rapid transit). We have recognized the New Starts program as a model that the federal government could use for approving other transportation projects. FTA generally funds New Starts projects through full funding grant agreements (FFGAs). An FFGA establishes the terms and conditions for federal participation in a project, including the maximum amount of federal funds available for the project, as well as the project's scope, schedule, and cost. By statute, the federal funding share for a New Starts project cannot exceed 80 percent of its net cost. To obtain an FFGA, projects must go through an extensive process, from regional multimodal transportation planning to preliminary engineering to final design and construction. (See fig. 1.)

As required by TEA-21, New Starts projects must emerge from a regional, multimodal transportation planning process. The first two phases of the New Starts process—systems planning and alternatives analysis—address this requirement. The systems planning phase identifies the transportation needs of a region, while the alternatives analysis phase provides information on the benefits, costs, and impacts of different corridor-level options, such as rail lines or bus routes. The alternatives analysis phase results in the selection of a locally preferred alternative—which is intended to be the New Starts project that FTA evaluates for funding. After a locally preferred alternative is selected, project sponsors submit a request to FTA for entry into the preliminary engineering phase. Following completion of preliminary engineering, the project may be approved by FTA to advance into final design, after which the project may be approved by FTA for an FFGA and proceed to construction. FTA oversees the management of projects from the preliminary engineering phase through construction and evaluates the projects for advancement into each phase of the process, as well as annually for the New Starts report to Congress.

To determine whether a project should receive federal funds, FTA's New Starts evaluation process assigns ratings on the basis of a variety of financial and project justification criteria and determines an overall rating. These criteria are identified in TEA-21 and reflect a broad range of benefits and effects of the proposed projects, such as capital and operating finance plans, mobility improvements, and cost-effectiveness. As figure 2 shows, FTA has developed a series of measures for the project justification criteria. FTA assigns proposed projects a rating of "high," "medium-high," "medium," "low-medium," or "low" for each criterion. The individual criterion ratings are combined into the summary financial and project justification ratings. However, FTA does not weight each individual criterion equally when calculating the summary financial and project justification ratings. For the summary project justification rating, FTA uses primarily two criteria—cost-effectiveness and land use. Each of these criteria accounts for 50 percent of the summary project justification rating.
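As a rough illustration of how two equally weighted criterion ratings could roll up into a summary project justification rating, the sketch below maps the five rating levels onto a numeric scale, averages the cost-effectiveness and land use ratings, and converts the result back to a level. The 50/50 weighting reflects the description above, but the numeric scale and the rounding rule are assumptions made for this sketch, not FTA's published scoring method.

```python
# Hypothetical numeric scale for the five rating levels; the 50/50
# weighting of cost-effectiveness and land use follows the report, but the
# scale and the rounding-down rule are illustrative assumptions only.
LEVELS = {"low": 1, "low-medium": 2, "medium": 3, "medium-high": 4, "high": 5}
NAMES = {value: name for name, value in LEVELS.items()}

def summary_project_justification(cost_effectiveness: str, land_use: str) -> str:
    """Average the two equally weighted criterion ratings and map back to a level."""
    score = 0.5 * LEVELS[cost_effectiveness] + 0.5 * LEVELS[land_use]
    return NAMES[int(score)]  # round down between levels (an illustrative choice)

# Under this illustrative scale, a "low-medium" cost-effectiveness rating
# combined with a "high" land use rating yields a "medium" summary rating,
# the pattern described for several fiscal year 2005 projects later in
# this report.
print(summary_project_justification("low-medium", "high"))  # medium
print(summary_project_justification("low", "medium"))       # low-medium
```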
Although FTA considers the full range of criteria, according to an FTA official, the other criteria do not produce meaningful distinctions among projects and, therefore, are not given an official weight in the ratings process. FTA plans to consider revisions to the measures for the other criteria after the authorizing legislation is passed. On the basis of the summary project justification and financial ratings, FTA develops the overall project rating. (Table 1 describes the criteria FTA uses to assign overall project ratings.) The exceptions to the evaluation process are statutorily “exempt” projects, which are those that request less than $25 million in New Starts funding. These projects are not required to submit project justification information and do not receive ratings. Last year, we reported that FTA implemented two changes to the New Starts process for the fiscal year 2004 cycle. (These changes are shaded in fig. 2.) First, FTA changed the calculation of the cost-effectiveness and mobility improvements criteria by adopting the Transportation System User Benefits (TSUB) measure. This measure replaced the “cost per new rider” measure that had been used in past ratings cycles. According to FTA, the new measure reflects an important goal of any major transportation investment—reducing the amount of travel time that people incur for taking a trip (i.e., the cost of mobility). In contrast to the “cost per new rider” measure, the new measure considers travel time savings to both new and existing transit system riders. Second, in response to appropriations committee reports, FTA instituted a preference policy favoring projects that seek a federal New Starts share of no more than 60 percent of the total project cost. Under this preference policy, FTA gives projects seeking a federal share of New Starts funding greater than 60 percent a “low” financial rating, which further results in a “not recommended” overall project rating. As required by statute, FTA uses the evaluation and ratings process, along with its consideration of the stage of development of New Starts projects, to decide which projects to recommend to Congress for funding. Although many projects receive an overall rating of “recommended” or “highly recommended,” only a few are proposed for FFGAs in a given fiscal year. FTA proposes “recommended” or “highly recommended” projects for FFGAs when it believes that the projects will be able to meet certain conditions during the fiscal year that the proposals are made. These conditions include the following: The local contribution to funding for the project must be made available for distribution. The project must be in the final design phase and have progressed to the point where uncertainties about costs, benefits, and impacts (e.g., environmental or financial) are minimized. The project must meet FTA’s tests for readiness and technical capacity, which confirm that there are no cost, project scope, or local financial commitment issues remaining. Of the 38 projects evaluated for the fiscal year 2005 cycle, 29 were rated and 9 were statutorily exempt from the rating process because they requested less than $25 million in New Starts funding. While the project ratings for the fiscal year 2005 cycle reflect a general improvement over the previous year, ratings are not as high as those achieved for the fiscal year 2003 cycle. FTA proposed 7 projects for funding for the fiscal year 2005 cycle, including 5 projects for FFGAs. 
The remaining 2 projects were considered to be “meritorious and worthy of funding” and FTA proposed a total of $50 million for these projects—substantially more than amounts proposed for similar projects in prior years. FTA did not, however, clearly explain to project sponsors how it decides which projects will be recommended for funding outside of FFGAs or what they must do to qualify for such a recommendation. FTA implemented two changes to its evaluation and ratings process for the fiscal year 2004 cycle: implementation of a new cost-effectiveness measure and adoption of the 60 percent federal New Starts share preference policy that contributed to lower ratings. Although many of those projects were able to overcome challenges with the new measure for the current cycle, ratings reflected that some projects were still unable to generate reliable local travel forecasts. Also, while the majority of the projects evaluated during the current cycle requested a federal New Starts share of less than 60 percent, some project sponsors raised concerns about FTA’s preference policy, including the challenges associated with securing the local funding share. Project ratings are generally higher for the fiscal year 2005 cycle than for the fiscal year 2004 cycle but are still lower than ratings for fiscal year 2003. Of the 38 projects FTA evaluated for the fiscal year 2005 cycle, 29 were rated, and 9 were statutorily exempt from the ratings process because project sponsors requested less than $25 million in New Starts funding. Figure 3 shows that the percentage of projects that received ratings of “recommended” or “highly recommended” rose from 44 percent for the fiscal year 2004 cycle to 59 percent for the fiscal year 2005 cycle. FTA attributes the increase in “recommended” projects over last year’s total to improved submissions, notably improved financial plans, and a better understanding of and increased comfort with the estimation of project benefits among project sponsors. In addition, FTA rated 7 projects as “not recommended” and designated 5 projects as “not rated.” According to FTA, most of the projects that received a rating of “not recommended” submitted poor financial plans—that is, plans that FTA considered overly optimistic in their assumptions about costs and revenue growth, or demonstrated no commitment of funds. For the projects that received a rating of “not rated,” either FTA had significant concerns with the travel forecasts submitted by the project sponsor or the project sponsor did not provide all of the information necessary for a complete submission. (See app. II for a full listing of ratings for projects evaluated for the fiscal year 2005 cycle.) FTA proposed 7 projects for funding for the fiscal year 2005 cycle. FTA proposed 5 of the 7 projects for FFGAs, including Cleveland, Euclid Corridor Transportation Project; Las Vegas, Resort Corridor Fixed Guideway; New York, Long Island Rail Road East Side Access; Phoenix, Central Phoenix/East Valley Light Rail Transit (LRT) Corridor; and Pittsburgh, North Shore LRT Connector. These projects are expected to be ready for FFGAs by the end of fiscal year 2005. The total costs of these 5 projects are estimated to be $7.6 billion. The total federal New Starts share is expected to be $3.7 billion. In addition, FTA considered 2 other projects in final design to be meritorious and recommended a total of $50 million for these projects in fiscal year 2005. 
FTA proposed $30 million for the Charlotte South Corridor LRT Project and $20 million for the Raleigh Regional Rail Project—substantially more than amounts proposed for similar projects in prior years. According to the fiscal year 2005 New Starts report, these meritorious projects are "located in areas that are highly congested or rapidly growing, and that have demonstrated a high level of local financial commitment and strong support from local citizens, businesses, and elected officials." However, the report does not clearly explain to project sponsors how FTA decides which projects will be recommended for funding outside of FFGAs or what they must do to qualify for such a recommendation. FTA officials explained that the 2 projects considered to be meritorious this cycle are closer to being ready for an FFGA than the other projects evaluated; however, FTA did not believe the 2 projects would be ready for an FFGA in fiscal year 2005. FTA officials also told us that decisions to recommend funding for projects outside of FFGAs are made on an annual basis and are dependent on the readiness of the projects and the availability of funds after funding for existing or new FFGAs is allocated. This explanation, however, is not included in FTA's New Starts report or other published guidance. FTA has funded similar projects in the past. For example, for the fiscal year 2003 cycle, FTA considered 5 projects in preliminary engineering to be meritorious. At that time, FTA had proposed $4 million for 4 of the 5 projects and $15 million for the remaining project. FTA reported in its annual New Starts report that the 5 projects "may be ready to progress through final design and construction by the end of fiscal year 2003." However, by the fiscal year 2005 cycle, only 1 of the projects had an FFGA. The remaining 4 projects were either being proposed for an FFGA for the fiscal year 2005 cycle (3) or still in preliminary engineering (1). Therefore, FTA's recommendation of funding for projects considered to be meritorious has not guaranteed that a project will advance to final design and construction as quickly as anticipated.

Project sponsors continue to experience challenges calculating cost-effectiveness. Last year, we reported that many project sponsors experienced difficulties that prevented them from producing accurate local travel forecasts to calculate the TSUB measure, resulting in 11 projects designated as "not rated" for cost-effectiveness. Since that time, the sponsors for 8 of those 11 projects were able to submit sufficient information to receive a rating for cost-effectiveness, suggesting that they were able to overcome the travel forecasting problem that they had experienced during the first year of the measure's implementation. However, 6 additional project sponsors were unable to generate reliable local travel forecasts and thus could not calculate a valid TSUB value for the fiscal year 2005 cycle, resulting in a total of 9 of the 29 projects designated as "not rated" for cost-effectiveness. According to FTA, the major problem in implementing the measure this cycle stemmed from problems with the underlying local travel forecasting models, not FTA's software or the TSUB measure. For example, FTA noted that 22 of the 29 projects rated this year required some involvement by FTA to improve the accuracy of their travel forecasts.
Last year, we recommended that FTA issue additional guidance describing its expectations regarding the local travel forecasting models and the specific types of data FTA requires to calculate the measure. FTA concurred with this recommendation, provided additional guidance in its updated reporting instructions, issued in June 2003, and has continued to provide technical assistance to project sponsors.

Despite the difficulties encountered in implementing TSUB, FTA and most of the project sponsors we interviewed believe that this new measure is an improvement over the "cost per new rider" measure because it takes into account a broader set of benefits to transit riders. These benefits include reductions in walk times, wait times, ride times, and numbers of transfers, all of which produce perceived savings in travel time or "travel time benefits" for new riders as well as existing transit riders. By contrast, the "cost per new rider" measure recognized benefits only for new transit riders and did not measure benefits to existing transit riders. Although the majority of project sponsors we interviewed believe the new measure is an improvement over the old one, many raised concerns about the implementation of TSUB, including the approach for calculating TSUB and the weight FTA applies to the cost-effectiveness criterion. For example, they were concerned that the measure did not capture all benefits that accrue to the transportation corridor, notably for highway users; that the amount of time provided to incorporate changes to their local travel forecasting software was insufficient; and that the weight FTA applies to the cost-effectiveness criterion is disproportionate to that applied to other criteria. Specifically, many project sponsors were unclear about the basis for a 45-minute cap on travel time savings included in the calculation of TSUB. According to an FTA official, this cap allows FTA to limit travel time savings to less than 45 minutes, which FTA believes is appropriate when examining the benefits of each project. FTA's experience has been that time savings in excess of 45 minutes are usually due to problems with the local travel forecasting model. However, FTA has allowed for exceptions to the cap in the past if well justified by local project sponsors.

FTA assigns a significant weight to the cost-effectiveness criterion in comparison with other criteria used to calculate the project justification rating. According to the New Starts report, cost-effectiveness accounts for 50 percent of the project justification rating. Land use accounts for the other 50 percent. Thus, although cost-effectiveness accounts for 50 percent of the project justification rating, a "low" cost-effectiveness rating can be offset by a "high" land use rating. This appears to be the case for the majority of projects proposed for funding for the fiscal year 2005 cycle. As table 2 shows, five of the seven projects proposed for funding received a "low-medium" cost-effectiveness rating. However, the projects' land use ratings raised their summary project justification ratings to "medium," which allowed them to receive an overall "recommended" rating.

In fiscal year 2004, FTA instituted a policy favoring projects that seek a federal New Starts share of no more than 60 percent of the total project cost. According to FTA, this preference policy responded to language contained in a conference report, prepared in November 2001, by the House Appropriations Committee.
The report states “the conferees direct FTA not to sign any new FFGAs after September 30, 2002, that have a maximum federal share of higher than 60 percent.” Similar language has been included in all subsequent appropriations committee reports. Further, FTA officials told us that this policy would allow more projects to receive funding by spreading limited resources among them and ensure that local governments whose regions stand to receive substantial benefits for the project play a major role in funding such projects. However, when FTA implemented the 60 percent policy, it did not amend its regulations to support the change in policy or its current procedures. As a result, we noted last year that FTA did not provide an opportunity for public comment on the impact of the preference policy. We further advised that explicitly stating criteria and procedures in regulations would ensure that project sponsors were fully aware of the preference policy. Accordingly, last year we recommended that FTA amend its regulations governing the New Starts share for projects to reflect its current policy. FTA disagreed with our recommendation, noting it was not required to issue regulations because the policy was not legally binding. Moreover, according to FTA officials, the preference policy is explained in both the fiscal year 2004 and 2005 New Starts reports and in its June 2003 reporting instructions. Although FTA’s preference policy, as expressed in the recent New Starts report, favors projects that request a federal New Starts share of no more than 60 percent, FTA is encouraging project sponsors to request an even lower federal New Starts share. Specifically, some project sponsors have stated that FTA encourages project sponsors to propose a federal New Starts share of no more than 50 percent—which is consistent with the administration’s reauthorization proposal. This push for a lower New Starts share is reflected in FTA’s rating process. As table 3 indicates, the lower the amount of New Starts funding requested, the higher the New Starts share rating. According to the New Starts report, the non-New Starts share rating accounts for 20 percent of a project’s financial rating. The project sponsors we contacted expressed concerns about the preference policy. Although the majority of the projects evaluated during the current cycle requested a federal New Starts share of less than 60 percent, many of the project sponsors we interviewed indicated that they had proposed a share that was in line with FTA’s policy in order to remain competitive. More than half of those interviewed told us they faced difficulties in advancing New Starts projects under such a policy. For example, some project sponsors told us that transit projects have a difficult time competing with highway projects in the local planning process because highway projects typically require a 20 percent local match, whereas New Starts projects require a match of at least 40 percent. Other project sponsors described the limited resources available at the local level to advance New Starts projects. A number of project sponsors also expressed concerns about FTA’s efforts to lower the federal New Starts share to 50 percent. For example, one project sponsor indicated that their project would have to drop out of the process, others indicated that the projects would have to be redesigned, and one project sponsor indicated that requesting a lower federal New Starts share would weaken the project’s financial plan. 
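To illustrate the share arithmetic behind the preference policy discussed above, the sketch below computes a project's federal New Starts share from its requested funding and total cost and checks it against the 60 percent preference threshold and the 80 percent statutory maximum. The dollar figures are hypothetical, and the sketch does not reproduce the actual share-rating bands summarized in table 3 of this report.

```python
def new_starts_share(requested_millions: float, total_cost_millions: float) -> float:
    """Federal New Starts share as a percentage of total project cost."""
    return 100.0 * requested_millions / total_cost_millions

def check_share(requested_millions: float, total_cost_millions: float) -> str:
    share = new_starts_share(requested_millions, total_cost_millions)
    if share > 80:
        return f"{share:.0f} percent exceeds the statutory 80 percent maximum"
    if share > 60:
        return f"{share:.0f} percent exceeds FTA's 60 percent preference threshold"
    return f"{share:.0f} percent is within FTA's 60 percent preference threshold"

# Hypothetical project: $300 million in New Starts funding requested
# against a $500 million total cost, leaving a 40 percent local match.
print(check_share(300, 500))  # 60 percent is within FTA's 60 percent preference threshold
```

The 40 percent local match in the hypothetical example above is the point of comparison sponsors raised: a highway project typically requires only a 20 percent local match.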
According to the fiscal year 2005 New Starts report, projects that request more than a 60 percent federal New Starts share are not recommended to Congress for FFGAs. Specifically, the fiscal year 2005 New Starts report states that “projects seeking a federal New Starts share over 60 percent of total costs are given a ‘low’ rating for local financial commitment, regardless of the ratings received for the capital plan and operating plan. This ‘low’ rating further results in a ‘not recommended’ overall project rating.” Projects receiving an overall “not recommended” rating are not proposed for an FFGA. An FTA official told us that for the fiscal year 2005 cycle, no project received an overall “not recommended” rating solely due to this policy preference. The enabling legislation for this program states that federal grants are to be for 80 percent of the net project cost, unless the grant recipient requests a lower grant percentage. TEA-21 required FTA to consider the strength of the local financial commitment, including the extent to which the project will have a federal New Starts share of less than 80 percent. In our view, FTA’s policy to favor projects with a lower federal share is permissible as long as projects are not required to request less than an 80 percent federal New Starts share in order to be considered for recommendation for an FFGA. FTA’s description of the preference policy in its fiscal year 2005 New Starts report suggests that this policy is absolute in that projects proposing more than a 60 percent federal New Starts share will not be recommended for an FFGA. However, FTA has assured us that this is a general preference and it may make exceptions to this policy. FTA has agreed to clarify in its upcoming reporting instructions that this is a general preference policy, thus allowing for the possibility of exceptions. FTA instituted two new requirements for New Starts projects for the fiscal year 2005 cycle that were independent of the rating process. First, FTA required project sponsors to submit a supplemental document—a “make the case” document—that articulates the benefits of the proposed New Starts project. Project sponsors are expected to “make the case” by describing why the project is needed and why it is the best alternative available to meet these needs. According to an FTA official, the “make the case” document is intended to help FTA interpret the data produced by the local travel forecasting models. For example, the supplemental document could be used to explain unusual results produced by the local travel forecasts. In addition, an FTA official stated that the document would aid FTA in preparing the profile summaries of projects for the annual New Starts reports. FTA officials note, however, that many of the “make the case” submissions for the current cycle did not meet their expectations. For example, some of the submissions provided only a justification of the need for a corridor improvement; others consisted solely of a summary of financial and political commitment. An FTA official acknowledged that FTA could have done a better job in defining the purpose of the document and stated that FTA plans to provide more guidance in the near future. The second new requirement instituted for the fiscal year 2005 cycle is a risk assessment. The risk assessments are intended to identify the issues that could affect schedule or cost, as well as the probability that they will do so. 
It is also used as a project management tool by the project sponsor and as an oversight tool by FTA. FTA's project management oversight contractors have been conducting the assessments, focusing on the projects that are closest to receiving an FFGA. As of May 2004, FTA had completed risk assessments for four projects. Eventually, FTA intends to conduct risk assessments on projects in earlier phases of development. FTA officials plan to issue more guidance on this new requirement. In addition, FTA continues to share and exchange information with project sponsors through FTA-sponsored roundtables and New Starts workshops.

The administration's fiscal year 2005 budget proposal requests $1.5 billion for the New Starts program, a $225 million increase over the amount appropriated for fiscal year 2004. Proposed legislation to reauthorize federal surface transportation programs in the House and Senate would expand the New Starts program to include a wider variety of transit projects as well as streamline the New Starts evaluation process for projects requesting less than $75 million in New Starts funding, among other things. Project sponsors had mixed reactions to these proposals and called for clear definitions.

In its budget proposal for fiscal year 2005, the administration requests $1.5 billion for the construction of new transit systems and the expansion of existing systems through the New Starts program—an increase of $225 million, or 15 percent, over the amount appropriated for fiscal year 2004. Figure 4 illustrates the specific allocations FTA has proposed for fiscal year 2005, including the following: $931 million (61 percent) would be allocated among 26 projects under construction with existing FFGAs, $295 million (19 percent) would be allocated among the 5 projects proposed for new FFGAs, $151 million (10 percent) would be allocated among other projects in final design and preliminary engineering that do not have existing FFGAs, and $50 million (3 percent) would be allocated for 2 projects considered to be meritorious by FTA.

FTA has limited commitment authority remaining—about $200 million—through June 2004. According to FTA officials, the commitment authority for fiscal year 2005 and beyond will be addressed in the next surface transportation authorization legislation. FTA officials told us that neither the amount of commitment authority remaining nor the delay in reauthorizing TEA-21 affected the number of projects proposed for an FFGA for the fiscal year 2005 cycle. However, FTA officials noted that FTA will not be able to execute all 5 proposed FFGAs until additional commitment authority is provided through congressional authorization.

Congress is currently considering legislation that would reauthorize all surface transportation programs, including the New Starts program. Both the Senate and House bills contain a number of provisions and initiatives for the New Starts program. Some of the key provisions of these bills would (1) streamline the evaluation process for projects under $75 million, (2) expand the definition of eligible projects, (3) change the ratings categories, and (4) maintain the maximum federal New Starts share at 80 percent. The project sponsors we interviewed had mixed reactions to these provisions. In addition, most of the sponsors called for clear definitions for any changes to the New Starts process.

Streamline the New Starts evaluation process for projects under $75 million.
The Senate proposal would allow the Secretary of Transportation discretion to develop a streamlined evaluation process for projects requesting less than $75 million in New Starts funds. This provision would eliminate the “exempt” classification for projects requesting less than $25 million in New Starts funding and would allow FTA to analyze and rate all projects through a streamlined process. The House proposal would establish a “Small Starts” program for projects requesting between $25 million and $75 million in New Starts funding, and these projects would be evaluated through a streamlined ratings process. In addition, the House proposal would maintain the exempt classification allowing projects requesting less than $25 million in New Starts funding to be exempt from the evaluation and ratings process. Most project sponsors we interviewed were supportive of implementing a streamlined evaluation process for less expensive projects. Some stated that a less robust evaluation process for less expensive projects makes sense, and others said it would allow cities to consider a range of potential projects without having to develop an expensive project. However, some said that clear definitions and criteria would be necessary in implementing the streamlined evaluation process. Expand the definition of eligible projects. Currently, TEA-21 limits New Starts funding to fixed-guideway projects. Both the House and Senate reauthorization proposals would allow certain nonfixed-guideway transit projects (e.g., bus rapid transit operating in nonexclusive lanes) to be eligible for New Starts funding, opening the program up to projects that currently are ineligible. Specifically, the Senate proposal would allow nonfixed-guideway projects requesting less than $75 million to be eligible for New Starts funding. The House proposal would expand New Starts funding eligibility to include nonfixed-guideway projects with a majority of fixed-guideway components seeking between $25 million and $75 million, as part of its “Small Starts” initiative. The majority of project sponsors we interviewed supported this initiative, some noting that broadening the program to include nonfixed-guideway projects would open more transit possibilities for localities. However, some project sponsors did express concerns about the already high demand on New Starts funding and noted that nonfixed-guideway projects could receive funds through other federal programs or capital funds. As a result, a number of project sponsors that support expanding the program said increased and/or separate funding and a clear definition of eligible projects are needed. Others were reluctant to support the expansion of New Starts to include nonfixed-guideways projects, citing a lack of funds and the importance of maintaining the fixed-guideway focus of New Starts. Change the rating categories. Under TEA-21, FTA assigns summary ratings of “highly recommended,” “recommended,” and “not recommended” to projects requesting New Starts funding. The Senate reauthorization proposal would revise the current rating system and implement five levels of ratings: “high,” “medium-high,” “medium,” “medium-low,” and “low.” The House proposal would maintain the current ratings system. Project sponsors were unsure of the impact of the proposed changes, and a few requested clearly defined criteria for each new rating category. Some sponsors told us they were not concerned with the ratings scale as long as it was clearly defined. 
Other project sponsors said they did not care what the new rating categories were called—they just want to know what rating is needed to secure an FFGA. In addition, two sponsors said the new system could be more easily conveyed to local officials. Maintain a maximum New Starts share at 80 percent. Currently, TEA-21 allows a maximum New Starts share of 80 percent for individual projects. Both the House and the Senate versions of the TEA-21 reauthorization proposals would maintain the maximum New Starts share at 80 percent, in contrast to the administration's reauthorization proposal, which would lower the maximum New Starts share to 50 percent. Furthermore, the House bill specifically prohibits FTA from requiring a nonfederal share that is more than 20 percent of the project's cost. Currently, FTA is encouraging project sponsors to request no more than 50 percent in New Starts funding for their projects. As noted earlier, some project sponsors we interviewed were concerned about the potential impact of reducing the New Starts share to 50 percent, including the effect of this change on the balance between highway and transit project funding. None of the 26 projects with existing FFGAs has received funds as scheduled—that is, the amount of funding appropriated was less than the amount scheduled in the FFGA. FFGAs are multiyear contractual agreements between FTA and project sponsors for a specified amount of funding, which are subject to the annual appropriations process. The full amount of funding is committed to the projects over a set period, and the FFGA contains a schedule of annual federal payments to fulfill FTA's commitment. According to FTA, all completed New Starts projects received the total FFGA amount but not necessarily according to the original FFGA schedule. FTA will continue to request funds to be appropriated to meet the amounts authorized in existing FFGAs. As of March 2004, the 26 projects had received a total of $294 million, or 5 percent, less than the amount authorized by the projects' FFGAs. The amount and timing of the differences in funding varied for each project, but all 26 projects with FFGAs received less than the scheduled amount at some point. As of March 2004, 7 had received over 10 percent less than the scheduled amount, 2 had received between 5 and 10 percent less than scheduled, and 17 projects had received up to 5 percent less than scheduled. The timing of the differences in funding also varied. Some projects experienced substantial differences between appropriated and scheduled amounts at the beginning or near the end of their FFGA, but it was more common for projects to experience funding differences throughout. (See app. III for the total amount of differences for each project with an existing FFGA.) Several factors contributed to projects receiving less New Starts funding than scheduled. The amount of funding authorized by an FFGA is subject to the annual appropriations process and, therefore, differences may arise because of congressional decision making. In addition, projects receive less than the amounts authorized by the FFGAs because FTA retains 1 percent of the funding provided each year to cover the cost of its project management oversight. According to FTA officials, each year FTA requests funding to cover the project management oversight costs, but these funds are typically not appropriated.
FTA may also request that less New Starts funding be appropriated to a project than scheduled by the FFGA if it is concerned about a specific project's progress; however, FTA officials said they rarely recommend less funding than is scheduled. Faced with these variances in funding, project officials we interviewed have developed methods to mitigate the impact of receiving less than the scheduled annual amount for their project. Some project officials entered into partnerships with the state and/or local government. For example, one transit agency arranged for the state to contribute more funds early on in the project and, as a result, the funding schedule did not adversely affect the project. Other projects implemented interim funding mechanisms to cover any FFGA variances, including issuing bonds or loans to generate necessary funds. None of the 5 project officials we contacted had to change the scope or schedule of their project solely due to funding variances. However, officials from these projects said that interim financing ultimately increased the cost of the project. For example, the Portland Interstate MAX project incurred approximately $3 million in borrowing costs, which equaled 4 percent of the total local commitment to the project. To receive an FFGA, projects must go through a lengthy evaluation process by FTA—from planning to preliminary engineering to final design. The steps for advancing through the evaluation process and securing an FFGA are well documented in FTA's New Starts reports and other published guidance. Documentation of these steps is important to ensure a common understanding among project sponsors and to increase the transparency of the process. As with other programs, transparency in the New Starts process is critical to ensuring that project sponsors view the process as fair and objective. Although the process for securing an FFGA is well-defined, FTA's identification of meritorious projects—and the subsequent proposed funding of these projects—is not. While FTA officials were able to provide us additional insight regarding funding recommendations for these projects, FTA's New Starts reports and other published guidance do not clearly explain the basis for these recommendations. In particular, the rationale for the funding recommendation for these two projects in FTA's New Starts Report for Fiscal Year 2005 is very broad and lacks necessary detail. FTA does not explain how it decides which projects will be recommended for funding outside of FFGAs and what project sponsors must do to qualify for such a recommendation. In addition, FTA did not justify the level of funding proposed for these projects for fiscal year 2005—which is a substantial increase compared to amounts proposed for similar projects in the past. Consequently, it is difficult to understand why these two meritorious projects are more worthy of funding than other projects in the pipeline. The implementation of FTA's policy favoring projects requesting a federal New Starts share of less than 60 percent also continues to create challenges for some project sponsors and raises concerns. According to FTA, its policy is in response to language contained in appropriations committee reports and will result in more projects receiving funding by spreading limited resources among them and ensuring that local governments whose regions would benefit from the project play a major role in funding such projects.
However, many of the project sponsors we interviewed experienced challenges in trying to secure a larger local match to comply with FTA's preference policy. Several project sponsors also stated that FTA's push for a lower federal New Starts share would likely affect their decision to advance future transit projects. Therefore, it is important for FTA to understand how this policy affects local decision making with regard to proposing and funding New Starts projects, as well as whether the policy is maximizing New Starts funds and local participation. To ensure that FTA's New Starts evaluation process and policies are objective and transparent and comply with federal statute, we recommend that the Secretary of Transportation direct the Administrator, FTA, to take the following two actions: (1) clearly explain, in FTA's annual New Starts report and other published New Starts guidance, the basis on which FTA decides which projects will be recommended for funding outside of FFGAs, such as projects considered to be meritorious, and what project sponsors must do to qualify for such a recommendation; and (2) examine the impact of FTA's preference policy on projects currently in the evaluation process, as well as projects in the early planning stages, and whether the policy results in maximizing New Starts funds and local participation. We provided a draft of this report to the Department of Transportation for review and comment. Officials from the Department and FTA, including the Associate Administrator for Planning and the Environment, indicated that they generally agreed with the report and its recommendations. According to FTA officials, the FTA New Starts program has been recognized as well-managed, with consistent, proven results and accomplishments. They recognized, however, the need to further improve program guidance and operation. Specifically, FTA agreed to more clearly explain, in its guidance to project sponsors, the basis on which it decides which projects will be recommended for funding outside of FFGAs. In addition, in the draft of this report, we recommended that FTA revise its guidance to clarify that exceptions to the preference policy are permissible. In discussions about the draft, FTA agreed to clarify the preference policy by clearly stating that it is a general, rather than an absolute, preference in its upcoming reporting instructions and other appropriate sources, making a recommendation to take such action unnecessary. Therefore, we eliminated our recommendation on this matter in our final report. Finally, FTA officials also provided technical clarifications, which we incorporated as appropriate. We are sending copies of this report to congressional committees with responsibilities for transit issues; the Secretary of Transportation; the Administrator, Federal Transit Administration; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at [email protected]. GAO contacts and key contributors to this report are listed in appendix IV.
To address our objectives, we reviewed the administration's fiscal year 2005 budget request and legislative reauthorization proposals, the Federal Transit Administration's (FTA) annual New Starts reports, records on funding authorized and appropriated to projects with existing full funding grant agreements (FFGAs), and federal statutes pertaining to the New Starts program. In addition, we interviewed FTA officials and representatives from the American Public Transportation Association and attended FTA's New Starts roundtable with project sponsors in April 2004. We also conducted semistructured interviews with project sponsors from 15 projects in preliminary engineering or final design to gain their perspectives on recent and proposed changes to the New Starts program and 5 projects with FFGAs to discuss their experiences in dealing with shortfalls in federal funding for their New Starts projects. (See table 4 for a listing of all projects contacted.) The results of these interviews are not generalizable to all project sponsors; however, we used multiple criteria in selecting the projects to ensure we evaluated a diverse group of projects. Specifically, to select the 15 projects in preliminary engineering or final design, we considered projects' overall ratings for the fiscal year 2005 cycle, mobility and cost-effectiveness ratings for fiscal years 2003 to 2005, percentage of New Starts funding requested for fiscal years 2003 through 2005, total cost, and location. We obtained this information from FTA's annual New Starts reports for fiscal years 2003 through 2005. In selecting the 5 projects with FFGAs, we considered the size and timing of any differences between the amount of funding authorized in projects' FFGAs and the amount of funding appropriated to the projects for fiscal years 1998 through 2004. We obtained information on the amount of funding authorized and appropriated to projects with existing FFGAs from FTA. To ensure the reliability of information presented in this report, we interviewed FTA officials about FTA's policies and procedures for compiling the annual New Starts reports, including FTA's data collection and verification practices for New Starts information. We also reviewed documentation for the database FTA uses to compile, analyze, and store data for New Starts projects. In addition, during our semistructured interviews with project sponsors, we asked about the accuracy of the information about their projects presented in the annual New Starts reports. Finally, we tested the reliability of FTA's records of the amount of funding authorized and appropriated to projects with existing FFGAs by comparing a nonprobability sample of the records with FFGAs. We concluded that the FTA information presented is sufficiently reliable for the purposes of this report. Projects with existing full funding grant agreements (FFGAs) listed include Baltimore–Central Light Rail Transit (LRT) Double Tracking; Chicago–North Central Corridor Commuter Rail; Chicago–Union Pacific West Line Extension; Fort Lauderdale–South Florida Commuter Rail Upgrades; Northern New Jersey–Hudson Bergen Minimal Operating Segment (MOS1); and Northern New Jersey–Hudson Bergen (MOS2).
(App. III presents, for each project with an existing FFGA, the total amount authorized and scheduled in the full funding grant agreement and the difference between the scheduled and appropriated amounts.) In addition to the individuals named above, other key contributors to this report were Chris Bonham, Jay Cherlow, Elizabeth Eisenstadt, Rita Grieco, Kara Finnegan Irving, Elizabeth McNally, Sara Ann Moessbauer, and Stacey Thompson. The Transportation Equity Act for the 21st Century (TEA-21) and subsequent legislation authorized about $8.3 billion in guaranteed funding for the Federal Transit Administration's (FTA) New Starts program, which funds fixed guideway transit projects, such as rail and trolley projects, through FFGAs. GAO assessed the New Starts process for the fiscal year 2005 cycle. GAO identified (1) the number of projects that were evaluated, rated, and proposed for new FFGAs and how recent changes to the process were reflected in ratings; (2) the proposed funding commitments in the administration's budget request and legislative reauthorization proposals; and (3) the extent to which amounts appropriated since 1998 fulfilled FFGAs. For the fiscal year 2005 cycle, FTA evaluated 38 projects, rated 29 projects, and proposed 7 projects for funding. FTA recommended 5 of the 7 projects for full funding grant agreements (FFGAs). FTA considered the remaining 2 projects to be meritorious and recommended a total of $50 million for these projects in fiscal year 2005. However, FTA does not clearly explain how it decides which projects will be recommended for funding outside of FFGAs or what project sponsors must do to qualify for such a recommendation.
Last year, in response to language contained in appropriations committee reports, FTA instituted a policy favoring projects that seek a federal New Starts share of no more than 60 percent of the total project cost--even though the law allows projects to seek up to 80 percent--in its recommendation for FFGAs. According to FTA officials, this policy allows more projects to receive funding and ensures that local governments play a major role in funding such projects. FTA describes the 60 percent policy as a general preference; however, FTA's fiscal year 2005 New Starts report suggests that this policy is absolute in that projects proposing more than a 60 percent federal New Starts share will not be recommended for an FFGA. Therefore, FTA agreed to describe the policy as a general preference in future reporting instructions, thus allowing for the possibility of exceptions. Although most of the projects evaluated during the current cycle proposed a federal New Starts share of less than 60 percent of total project costs, some project sponsors GAO interviewed raised concerns about the difficulties of securing the local funding share. However, the overall impact of this policy on projects is unknown. The administration's fiscal year 2005 budget proposal requests $1.5 billion for the New Starts program, a $225 million increase over the amount appropriated for the fiscal year 2004 cycle. Congress is currently considering legislative reauthorization proposals, which contain a number of provisions and initiatives for the New Starts program, including streamlining the New Starts evaluation process for projects requesting less than $75 million in New Starts funds, expanding the definition of eligible projects, changing the ratings categories, and maintaining the maximum federal New Starts share at 80 percent of total project cost. Project sponsors GAO interviewed had varying views on these provisions, but most said that clear definitions would be needed for any proposed changes to the New Starts process. None of the 26 projects with existing FFGAs has received funds as scheduled--the amount of funding appropriated was less than the amount authorized and scheduled by the FFGA. According to FTA, all completed projects have received the total amount authorized in the FFGAs, but not necessarily according to the original FFGA schedule. As of March 2004, the 26 projects have received a total of $294 million, or 5 percent, less than the amount scheduled by the projects' FFGAs. The amount and timing of differences varied for each project. Project sponsors GAO interviewed have developed methods to mitigate the impact of receiving less than the scheduled annual amount for their project, but these methods can generate additional costs.
In April 2003, we reported that watch lists were maintained by numerous federal agencies and that the agencies did not have a consistent and uniform approach to sharing information on individuals with possible links to terrorism. Our report recommended that the Secretary of the Department of Homeland Security (DHS), in collaboration with the heads of departments and agencies that have and use watch lists, lead an effort to consolidate and standardize the federal government’s watch list structures and policies. Subsequently, pursuant to Homeland Security Presidential Directive 6 (HSPD-6), dated September 16, 2003, the Attorney General established the Terrorist Screening Center (TSC) to consolidate the government’s approach to terrorism screening and provide for the appropriate and lawful use of terrorist information in screening processes. TSC’s consolidated watch list is the U.S. government’s master repository for all known or appropriately suspected international and domestic terrorist records used for watch list-related screening. TSC records contain sensitive but unclassified information on terrorist identities—such as name and date of birth—that can be shared with screening agencies, whereas the classified derogatory information that supports the watch list records is maintained in other law enforcement and intelligence agency databases. Records for inclusion on the consolidated watch list are nominated to TSC from the following two sources: Identifying information on individuals with ties to international terrorism is provided to TSC through the National Counterterrorism Center (NCTC), which is managed by the Office of the Director of National Intelligence. Identifying information on individuals with ties to purely domestic terrorism is provided to TSC by the FBI. HSPD-6 required the Attorney General—in coordination with the Secretary of State, the Secretary of Homeland Security, and the Director of Central Intelligence—to implement appropriate procedures and safeguards with respect to all terrorist information related to U.S. persons (i.e., U.S. citizens and lawful permanent residents) that is provided to NCTC (formerly the Terrorist Threat Integration Center). According to TSC, agencies within the intelligence community that collect and maintain terrorist information and nominate individuals for inclusion on TSC’s consolidated watch list are to do so in accordance with Executive Order 12333. With respect to U.S. persons, this order addresses the nature or type of information that may be collected and the allowable methods for collecting such information. It provides that agencies within the intelligence community are authorized to collect, retain, or disseminate information concerning U.S. persons only in accordance with procedures established by the head of the agency concerned and approved by the Attorney General, consistent with the authorities set out earlier in the order. The order further provides that agencies within the intelligence community are to use the least intrusive collection techniques feasible when such collection is conducted within the United States or when directed against U.S. persons abroad. Also, according to TSC officials, the center requires annual training for all personnel concerning the Privacy Act of 1974 to ensure that information collected on U.S. persons is handled in accordance with applicable law. 
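The division of information described above (unclassified identity data held in TSC records, nominations arriving through NCTC or the FBI, and the supporting classified derogatory information held elsewhere) can be pictured with a small data sketch. The following Python fragment is illustrative only; the structure and field names are our assumptions, not TSC's actual record layout.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: field names and structure are assumptions,
# not TSC's actual record schema.
@dataclass
class WatchListRecord:
    record_id: str
    full_name: str                 # sensitive but unclassified identity data
    date_of_birth: Optional[str]   # may be missing for some records
    nominating_source: str         # "NCTC" (international) or "FBI" (domestic)
    derogatory_info_ref: str       # pointer to classified holdings maintained
                                   # by NCTC or the FBI, not stored here

record = WatchListRecord(
    record_id="R-000001",
    full_name="DOE, JOHN",
    date_of_birth="1970-01-01",
    nominating_source="NCTC",
    derogatory_info_ref="classified-case-reference",
)
print(record.nominating_source)  # -> "NCTC"
```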
To facilitate operational or mission-related screening, TSC sends applicable records from its terrorist watch list to screening agency systems for use in efforts to deter or detect the movements of known or suspected terrorists. For instance, applicable TSC records are provided to the Transportation Security Administration (TSA) for use by airlines in prescreening passengers; to a U.S. Customs and Border Protection (CBP) system for use in screening travelers entering the United States; to a Department of State system for use in screening visa applicants; and to an FBI system for use by state and local law enforcement agencies pursuant to arrests, detentions, and other criminal justice purposes. When an individual makes an airline reservation, arrives at a U.S. port of entry, applies for a U.S. visa, or is stopped by state or local police within the United States, the frontline screening agency or airline conducts a name-based search of the individual against applicable terrorist watch list records. In general, when the computerized name-matching system of an airline or screening agency generates a "hit" (a potential name match) against a watch list record, the airline or agency is to review each potential match. Any obvious mismatches (negative matches) are to be resolved by the airline or agency, if possible, as discussed in our September 2006 report. However, clearly positive or exact matches and matches that are inconclusive (uncertain or difficult to verify) generally are to be referred to the applicable screening agency's intelligence or operations center and TSC for closer examination. Specifically, airlines are to contact TSA's Office of Intelligence; CBP officers at U.S. ports of entry are to contact CBP's National Targeting Center; and Department of State consular officers who process visa applications are to submit a request for a security advisory opinion to Department of State headquarters. The intelligence or operations center is to refer exact matches and inconclusive matches to TSC. State and local law enforcement officials generally are to refer exact matches and inconclusive matches directly to TSC. In turn, TSC is to check its databases and other sources—including classified databases maintained by NCTC and the FBI—and confirm whether the individual is a positive, negative, or inconclusive match to the watch list record. TSC is to refer positive and inconclusive matches to the FBI's Counterterrorism Division to provide an opportunity for a counterterrorism response. Deciding what law enforcement or screening agency action to take, if any, can involve collaboration among the frontline screening agency, NCTC or other intelligence community members, and the FBI or other investigative agencies. If the encounter arises in the context of an application for a visa or admission into the United States, the screening agency's adjudicating official determines whether the circumstances trigger a statutory basis for inadmissibility. Generally, NCTC and the FBI are involved because they maintain the underlying derogatory information that supports terrorist watch list records, which is needed to help determine the appropriate counterterrorism response. If necessary, a member of an FBI Joint Terrorism Task Force can respond in person to interview and obtain additional information about the person encountered. In other cases, the FBI will rely on the screening agency and other law enforcement agencies—such as U.S. Immigration and Customs Enforcement—to respond and collect information.
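As a rough illustration of the referral flow just described, the sketch below encodes the triage logic in Python. It is a simplification under our own assumptions; the function names and category labels are invented for illustration, and in practice the determinations are made by analysts at the screening agencies, TSC, and the FBI rather than by an automated rule.

```python
# Illustrative sketch of the hit-resolution flow described above.
# Category names and helper functions are assumptions, not agency systems.

def resolve_hit(frontline_assessment: str) -> str:
    """frontline_assessment: 'negative', 'exact', or 'inconclusive'."""
    if frontline_assessment == "negative":
        # Obvious mismatches are resolved by the airline or screening agency.
        return "resolved locally (negative match)"

    # Exact and inconclusive matches go to the agency's intelligence or
    # operations center (or directly to TSC for state and local law
    # enforcement), and then to TSC for closer examination.
    tsc_finding = tsc_examination(frontline_assessment)

    if tsc_finding == "negative":
        return "TSC confirmed negative match"
    # Positive and inconclusive matches are referred to the FBI's
    # Counterterrorism Division for an opportunity to respond.
    return f"referred to FBI Counterterrorism Division ({tsc_finding} match)"

def tsc_examination(assessment: str) -> str:
    # Placeholder: in practice TSC checks its own databases and classified
    # sources maintained by NCTC and the FBI before confirming a match.
    return "positive" if assessment == "exact" else "inconclusive"

for case in ("negative", "exact", "inconclusive"):
    print(case, "->", resolve_hit(case))
```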
Figure 1 presents a general overview of the process used to resolve encounters with individuals on the terrorist watch list. To build upon and provide additional guidance related to HSPD-6, in August 2004, the President signed Homeland Security Presidential Directive 11 (HSPD-11). Among other things, this directive required the Secretary of Homeland Security—in coordination with the heads of appropriate federal departments and agencies—to submit two reports to the President (through the Assistant to the President for Homeland Security) related to the government’s approach to terrorist-related screening. The first report was to outline a strategy to enhance the effectiveness of terrorist-related screening activities by developing comprehensive and coordinated procedures and capabilities. The second report was to provide a prioritized investment and implementation plan for detecting and interdicting suspected terrorists and terrorist activities. Specifically, the plan was to describe the “scope, governance, principles, outcomes, milestones, training objectives, metrics, costs, and schedule of activities” to implement the U.S. government’s terrorism-related screening policies. According to DHS officials, the department submitted the required strategy and the investment and implementation plan to the President in November 2004. Additional information on the status of the strategy and implementation plan is presented later in this report. NCTC and FBI officials rely upon standards of reasonableness in determining which individuals are appropriate for inclusion on TSC’s watch list, but determining whether individuals meet these minimum standards can involve some level of subjectivity. In accordance with HSPD-6, TSC’s watch list is to contain information about individuals “known or appropriately suspected to be or have been engaged in conduct constituting, in preparation for, in aid of, or related to terrorism.” In implementing this directive, NCTC and the FBI strive to ensure that individuals who are reasonably suspected of having possible links to terrorism—in addition to individuals with known links—are nominated for inclusion on the watch list. Thus, as TSC adds nominated records to its watch list, the list may include individuals with possible ties to terrorism, establishing a broad spectrum of individuals that meet the “known or appropriately suspected” standard specified in HSPD-6. As such, inclusion on the list does not automatically cause an alien to be, for example, denied a visa or deemed inadmissible to enter the United States when the person is identified by a screening agency. Rather, in these cases, screening agency and law enforcement personnel may use the encounter with the individual as an opportunity to collect information for assessing the potential threat the person poses, tracking the person’s movements or activities, and determining what actions to take, if any. NCTC receives international terrorist-related information from executive branch departments and agencies—such as the Department of State, the Central Intelligence Agency, and the FBI—and enters this information into its terrorist database. On a formal basis, Department of State embassies around the world—in collaboration with applicable federal agencies involved in security, law enforcement, and intelligence activities—are expected to participate in the “Visas Viper” terrorist reporting program. This congressionally mandated program is primarily administered through a Visas Viper Committee at each overseas post. 
The committee is to meet at least monthly to share information on known or suspected terrorists and determine whether such information should be sent to NCTC for inclusion in its terrorist database. NCTC's database, known as the Terrorist Identities Datamart Environment, contains highly classified information and serves as the U.S. government's central classified database with information on known or suspected international terrorists. According to NCTC's fact sheet on the Terrorist Identities Datamart Environment, examples of conduct that will warrant an entry into NCTC's database include persons who commit international terrorist activity; prepare or plan international terrorist activity; gather information on potential targets for international terrorist activity; solicit funds or other things of value for international terrorist activity or a terrorist organization; solicit membership in an international terrorist organization; provide material support, such as a safe house, transportation, communications, funds, transfer of funds or other material financial benefit, false documentation or identification, weapons, explosives, or training; or are members of or represent a foreign terrorist organization. If NCTC determines that an individual meets the "known or appropriately suspected" standard of HSPD-6, NCTC is to extract sensitive but unclassified information on the individual's identity from its classified database—such as name and date of birth—and send forward a record to TSC for inclusion on the watch list. According to NCTC procedures, NCTC analysts are to review all information involving international terrorists using a "reasonable suspicion" standard to determine whether an individual is appropriate for nomination to TSC for inclusion on the watch list. NCTC defines reasonable suspicion as information—both facts, as well as rational inferences from those facts and the experience of the reviewer—that is sufficient to cause an ordinarily prudent person to believe that the individual under review may be a known or appropriately suspected terrorist. According to NCTC, this information can include past conduct, current actions, and credible intelligence concerning future conduct. In making this determination, NCTC generally relies upon the originating agency's designation that there is reasonable suspicion to believe a person is engaged in terrorist or terrorist-related activities as being presumptively valid. For example, NCTC will rely on the FBI's designation of an individual as a known or suspected international terrorist unless NCTC has specific and credible information that such a designation is not appropriate. Also, NCTC officials noted that an individual is to remain on the watch list until the respective department or agency that provided the terrorist-related information that supports a nomination determines the individual should be removed from the list. According to TSC, if the FBI conducts a threat assessment on an individual that reveals no nexus to international terrorism, then NCTC will initiate the process for deleting the record from its database and the watch list. If NCTC receives information that it determines is insufficient to nominate an individual to TSC for inclusion on the watch list, the available information may remain in the NCTC database until additional information is obtained to warrant nomination to TSC or be deleted from the NCTC database.
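The nomination logic described in the preceding paragraphs can be summarized in a small decision sketch. This is only an illustration under stated assumptions; the inputs and function names are ours, and the actual judgment is an analyst's reasonable suspicion determination, not a mechanical test.

```python
# Illustrative decision sketch of NCTC's handling of incoming information,
# as described above. Names and inputs are assumptions, not NCTC systems.

def handle_reporting(originator_designation: bool,
                     credible_contrary_info: bool,
                     meets_reasonable_suspicion: bool) -> str:
    # NCTC generally treats the originating agency's designation as
    # presumptively valid unless there is specific, credible contrary info.
    if originator_designation and not credible_contrary_info:
        return "nominate to TSC for inclusion on the watch list"
    if meets_reasonable_suspicion:
        return "nominate to TSC for inclusion on the watch list"
    # Insufficient information: retain in NCTC's database pending additional
    # information, or delete from the database.
    return "retain in NCTC database pending additional information (or delete)"

print(handle_reporting(True, False, False))
print(handle_reporting(False, False, False))
```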
In general, individuals who are subjects of ongoing FBI counterterrorism investigations are nominated to TSC for inclusion on the watch list, including persons who are being preliminarily investigated to determine if they have links to terrorism. If an investigation does not establish a terrorism link, the FBI generally is to close the investigation and request that TSC remove the person from the watch list. In determining whether to open an investigation, the FBI uses guidelines established by the Attorney General. These guidelines contain specific standards for opening investigations. According to FBI officials, there must be a "reasonable indication" of involvement in terrorism before opening an investigation. The FBI noted, for example, that it is not sufficient to open an investigation based solely on a neighbor's complaint or an anonymous tip or phone call. In such cases, however, the FBI could use techniques short of opening an investigation to assess the potential threat the person poses, which would not result in adding the individual to the watch list at that time. The FBI has established formal review and approval processes for nominating individuals for inclusion on the watch list. In general, FBI case agents are to send nominations to a unit at FBI headquarters for review and approval. If approved, information on domestic terrorists is sent to TSC for inclusion on the watch list. For approved international terrorist nominations, the FBI sends the information to NCTC, which then forwards the nomination to TSC. For each nomination, NCTC and the FBI provide TSC with biographic or other identifying data, such as name and date of birth. This identifying information on known or suspected terrorists is deemed sensitive but unclassified by the intelligence and law enforcement communities. Then, TSC is to review the identifying information and the underlying derogatory information—by directly accessing databases maintained by NCTC, the FBI, and other agencies—to validate the requirements for including the nomination on the watch list. On the basis of the results of its review, TSC is to either input the nomination into the watch list—which is the U.S. government's master repository for all known or appropriately suspected international and domestic terrorist records that are used for watch list-related screening—or reject the nomination and send it back to NCTC or the FBI for further investigation. TSC relies predominantly on the nominating agency to determine whether or not an individual is a known or appropriately suspected terrorist. According to TSC, on the basis of its review of relevant identifying and derogatory information, the center rejects approximately 1 percent of all nominations. Figure 2 presents a general overview of the process used to nominate individuals for inclusion on TSC's watch list. TSC's watch list of individuals with known or appropriately suspected links to terrorism has increased from 158,374 records in June 2004 to 754,960 records in May 2007 (see fig. 3). It is important to note that the total number of records on TSC's watch list does not represent the total number of individuals on the watch list. Rather, if an individual has one or more known aliases, the watch list will contain multiple records for the same individual. For example, if an individual on the watch list has 50 known aliases, there could be 50 distinct records related to that individual in the watch list.
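Because aliases generate multiple records, record counts overstate the number of distinct individuals on the list. The short sketch below illustrates the distinction; the grouping key and the sample data are hypothetical and are not drawn from TSC's database.

```python
from collections import defaultdict

# Hypothetical data: several watch list records may refer to one individual
# under different aliases. The identity key is an assumption for illustration.
records = [
    {"record_id": "R1", "identity_key": "PERSON-A", "name": "DOE, JOHN"},
    {"record_id": "R2", "identity_key": "PERSON-A", "name": "DOE, JON"},   # alias
    {"record_id": "R3", "identity_key": "PERSON-A", "name": "DOH, JOHN"},  # alias
    {"record_id": "R4", "identity_key": "PERSON-B", "name": "ROE, JANE"},
]

by_individual = defaultdict(list)
for rec in records:
    by_individual[rec["identity_key"]].append(rec["record_id"])

print("records:", len(records))            # 4 records
print("individuals:", len(by_individual))  # 2 individuals
```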
TSC’s database is updated daily with new nominations, modifications to existing records, and deletions. According to TSC data, as of May 2007, a high percentage of watch list records were international terrorist records nominated through NCTC, and a small percentage were domestic terrorist records nominated through the FBI. TSC data also show that more than 100,000 records have been removed from the watch list since TSC’s inception. As discussed later in this report, agencies that conduct terrorism screening do not check against all records in the watch list. Rather, TSC exports applicable records to federal government databases used by agencies that conduct terrorism screening based on the screening agency’s mission responsibilities and other factors. For the 42-month period of December 2003 (when TSC began operations) through May 2007, screening and law enforcement agencies encountered individuals who were positively matched to watch list records 53,218 times, according to our analysis of TSC data. These encounters include many individuals who were positively matched to watch list records multiple times. Agencies took a range of actions, such as arresting individuals, denying other individuals entry into the United States, and most commonly, releasing the individuals following questioning and information gathering. Our analysis of data on the outcomes of these encounters and interviews with screening agency, law enforcement, and intelligence community officials indicate that the watch list has enhanced the U.S. government’s counterterrorism efforts by (1) helping frontline screening agencies obtain information to determine the level of threat a person poses and the appropriate action to take, if any, and (2) providing the opportunity to collect and share information on known or appropriately suspected terrorists with law enforcement agencies and the intelligence community. A breakdown of encounters with positive matches to the terrorist watch list shows that the number of matches has increased each year—from 4,876 during the first 10-month period of TSC’s operations (December 2003 through September 2004) to 14,938 during fiscal year 2005, to 19,887 during fiscal year 2006. This increase can be attributed partly to the growth in the number of records in the consolidated terrorist watch list and partly to the increase in the number of agencies that use the list for screening purposes. Since its inception, TSC has worked to educate federal departments and agencies, state and local law enforcement, and foreign governments about appropriate screening opportunities. Our analysis of TSC data also indicates that many individuals who were positively matched to the terrorist watch list were encountered multiple times. For example, a truck driver who regularly crossed the U.S.-Canada border or an individual who frequently took international flights could each account for multiple encounters. Further, TSC data show that the highest percentage of encounters with individuals who were positively matched to the watch list involved screening within the United States by a state or local law enforcement agency, U.S. government investigative agency, or other governmental entity. Examples of these encounters include screening by police departments, correctional facilities, FBI agents, and courts. 
The next highest percentage of encounters with positive matches to the watch list involved border-related encounters, such as passengers on airline flights inbound from outside the United States or individuals screened at land ports of entry. Examples include (1) a passenger flying from London (Heathrow), England, to New York (JFK), New York, and (2) a person attempting to cross the border from Canada into the United States at the Rainbow Bridge port of entry in Niagara Falls, New York. The smallest percentage of encounters with positive matches occurred outside of the United States. State and local law enforcement agencies historically have had access to an FBI system that contains watch list records produced by the FBI. However, pursuant to HSPD-6 (Sept. 16, 2003), state and local law enforcement agencies were, for the first time, given access to watch list records produced by the intelligence community, which are also included in the FBI system. This access has enabled state and local agencies to better assist the U.S. government's efforts to track and collect information on known or appropriately suspected terrorists. These agencies accounted for a significant percentage of the total encounters with positive matches to the watch list that occurred within the United States. The watch list has enhanced the U.S. government's counterterrorism efforts by allowing federal, state, and local screening and law enforcement officials to obtain information to help them make better-informed decisions during encounters regarding the level of threat a person poses and the appropriate response to take, if any. The specific outcomes of encounters with individuals on the watch list are based on the government's overall assessment of the intelligence and investigative information that supports the watch list record and any additional information that may be obtained during the encounter. Our analysis of data on the outcomes of encounters revealed that agencies took a range of actions, such as arresting individuals, denying others entry into the United States, and most commonly, releasing the individuals following questioning and information gathering. The following provides additional information on arrests, as well as the outcomes of encounters involving the Department of State, TSA, CBP, and state or local law enforcement, respectively. TSC data show that agencies reported arresting many subjects of watch list records for various reasons, such as the individual having an outstanding arrest warrant or the individual's behavior or actions during the encounter. TSC data also indicated that some of the arrests were based on terrorism grounds. TSC data show that when visa applicants were positively matched to terrorist watch list records, the outcomes included visas denied, visas issued (because the consular officer did not find any statutory basis for inadmissibility), and visa ineligibility waived. TSA data show that when airline passengers were positively matched to the No Fly or Selectee lists, the vast majority of matches were to the Selectee list. Other outcomes included individuals matched to the No Fly list and denied boarding (did not fly) and individuals matched to the No Fly list after the aircraft was in flight, which required an immediate counterterrorism response. Additional information on individuals on the No Fly list passing undetected through airline prescreening and being identified in flight is presented later in this report.
CBP data show that a number of nonimmigrant aliens encountered at U.S. ports of entry were positively matched to terrorist watch list records. For many of the encounters, CBP determined there was sufficient derogatory information related to watch list records to preclude admission under terrorism grounds. However, for most of the encounters, CBP determined that there was not sufficient derogatory information related to the records to preclude admission. TSC data show that state or local law enforcement officials have encountered individuals who were positively matched to terrorist watch list records thousands of times. Although data on the actual outcomes of these encounters were not available, the vast majority involved watch list records indicating that the individuals were to be released unless there were reasons other than terrorism-related grounds for arresting or detaining them. Appendix IV presents more details on the outcomes of screening agency encounters with individuals on the terrorist watch list. According to federal officials, encounters with individuals who were positively matched to the watch list assisted government efforts in tracking the respective person's movements or activities and provided the opportunity to collect additional information about the individual that was shared with agents conducting counterterrorism investigations and with the intelligence community for use in analyzing threats. Such coordinated collection of information for use in investigations and threat analyses is one of the stated policy objectives for the watch list. Most of the individuals encountered were questioned and released because the intelligence and investigative information on these persons that supported the watch list records and the information obtained during the encounter did not support taking further actions, such as denying an individual entry into the United States. Specifically, as discussed previously, for most Department of State, TSA (via air carriers), CBP, and state and local encounters with individuals who were positively matched to the terrorist watch list, the counterterrorism response consisted of questioning the individuals and gathering information. That is, the encounters provided screening agency and law enforcement personnel the opportunity to conduct in-depth questioning and inspect travel documents and belongings to collect information for use in supporting investigations and assessing threats. TSC plays a central role in the real-time sharing of this information, creating a bridge among screening agencies, the law enforcement community, and the intelligence community. For example, in addition to facilitating interagency communication and coordination during encounters, TSC creates a daily report of encounters involving positive matches to the terrorist watch list. This report contains a summary of all positive encounters for the prior day. TSC summarizes the type of encounter, what occurred, and what action was taken. The report notes the person's affiliation with any groups and provides a summary of derogatory information available on the individual. Overview maps depicting the encounters and locations are also included in the report. The daily reports are distributed to numerous federal entities, as shown in table 1.
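As an illustration of the kind of aggregation such a daily report involves, the sketch below rolls up a hypothetical day's positive encounters into a simple summary. The encounter fields and output format are assumptions for illustration, not TSC's actual report.

```python
from collections import Counter

# Hypothetical prior-day positive encounters; fields are assumptions.
encounters = [
    {"type": "state/local traffic stop", "location": "New York", "action": "questioned and released"},
    {"type": "port of entry", "location": "Niagara Falls, NY", "action": "admitted after inspection"},
    {"type": "airline prescreening", "location": "JFK", "action": "selectee screening"},
    {"type": "port of entry", "location": "Detroit, MI", "action": "denied admission"},
]

def daily_summary(day_encounters):
    lines = [f"Positive encounters: {len(day_encounters)}"]
    for enc_type, count in Counter(e["type"] for e in day_encounters).items():
        lines.append(f"  {enc_type}: {count}")
    lines.append("Actions taken:")
    for e in day_encounters:
        lines.append(f"  {e['location']}: {e['action']}")
    return "\n".join(lines)

print(daily_summary(encounters))
```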
According to federal law enforcement officials, the information collected during encounters with individuals on the terrorist watch list helps to develop cases by, among other means, tracking the movement of known or appropriately suspected terrorists and determining relationships among people, activities, and events. According to NCTC officials, information obtained from encounters is added to NCTC's Terrorist Identities Datamart Environment database, which serves as the U.S. government's central classified database on known or suspected international terrorists. This information can be electronically accessed by approximately 5,000 U.S. counterterrorism personnel around the world. Each day, TSC exports applicable records from the watch list—containing biographic or other identifying data, such as name and date of birth—to federal government databases used by agencies that conduct terrorism screening. Specifically, applicable watch list records are exported to the following federal agency databases, which are described later in this report: DHS's Interagency Border Inspection System; the Department of State's Consular Lookout and Support System; the FBI's Violent Gang and Terrorist Organization File; and TSA's No Fly and Selectee lists. The applicable records that TSC exports to each of these databases vary based on the screening agency's mission responsibilities, the technical capabilities of the agency's computer system, and operational considerations. For example, records on U.S. citizens and lawful permanent residents are not exported to the Department of State's system used to screen visa applicants for immigration violations, criminal histories, and other matters, because these individuals would not apply for a U.S. visa. Also, to facilitate the automated process of checking an individual against watch list records, all of these databases require certain minimum biographic or identifying data in order to accept records from TSC's consolidated watch list. The identifying information required depends on the policies and needs of the screening agency and the technical capacity of the respective agency's computerized name-matching program. Also, certain records may not be exported to screening agency systems based on operational considerations, such as the amount of time available to conduct related screening. In general, the agency governing a particular screening database establishes the criteria for which records from the consolidated watch list will be accepted into its own system. Figure 4 presents a general overview of the process used to export records from TSC's consolidated watch list to screening agency databases. According to TSC, in addition to agency mission, technical, and operational considerations, an individual's record may be excluded from an agency's database in rare cases when there is a reasonable and detailed justification for doing so and the request for exclusion has been reviewed and approved by the FBI's Counterterrorism Division and TSC. The following sections provide additional information on the databases of the screening processes we reviewed, the percentage of records accepted as of May 2007, and potential security vulnerabilities. The Interagency Border Inspection System is DHS's primary lookout system available at U.S. ports of entry and other locations. CBP officers use the system to screen travelers entering the United States at ports of entry, which include land border crossings along the Canadian and Mexican borders, sea ports, and U.S.
airports for international flight arrivals. This system includes not only the applicable records exported by TSC, but also additional information on people with prior criminal histories, immigration violations, or other activities of concern that CBP wants to identify and screen at ports of entry. The system is also used to assist law enforcement and other personnel at approximately 20 other federal agencies, including the following: U.S. Immigration and Customs Enforcement; U.S. Citizenship and Immigration Services; the FBI; the Drug Enforcement Administration; the Bureau of Alcohol, Tobacco, Firearms and Explosives; the Internal Revenue Service; the U.S. Coast Guard; the Federal Aviation Administration; and the U.S. Secret Service. Of all the screening agency databases discussed in this report, the Interagency Border Inspection System has the least restrictive acceptance criteria and therefore contained the highest percentage of records from TSC’s consolidated watch list as of May 2007. This is because CBP’s mission is to screen all travelers, including U.S. citizens, entering the United States at ports of entry. The Consular Lookout and Support System is the Department of State’s name-check system for visa applicants. Consular officers abroad use the system to screen the names of visa applicants to identify terrorists and other aliens who are potentially ineligible for visas based on criminal histories or other reasons specified by federal statute. According to the Department of State, all visa-issuing posts have direct access to the system and must use it to check each applicant’s name before issuing a visa. Records on U.S. citizens and lawful permanent residents are not to be included in the part of the Consular Lookout and Support System that is used to screen visa applicants—because these individuals would not apply for U.S. visas—but may be included in another part of the system that is used to screen passport applicants. According to TSC officials, the part of the system that is used to screen visa applicants generally contains the same information as is contained in the Interagency Border Inspection System, except for records on U.S. citizens and lawful permanent residents. As of May 2007, the Consular Lookout and Support System contained the second highest percentage of all watch list records. The Violent Gang and Terrorist Organization File is the FBI’s lookout system for known or appropriately suspected terrorists, as well as gang groups and members. The file is part of the FBI’s National Crime Information Center database, which is accessible by federal, state, and local law enforcement officers and other criminal justice agencies for screening in conjunction with arrests, detentions, and other criminal justice purposes. A subset of the Violent Gang and Terrorist Organization file consists of TSC’s records to be used to screen for possible terrorist links. As of May 2007, the FBI database contained the third highest percentage of watch list records. According to TSC officials, if the remaining watch list records were included in the Violent Gang and Terrorist Organization File, the system would identify an unmanageable number of records of individuals as potentially being matches to the National Crime Information Center database. 
The officials explained that name checks against the National Crime Information Center database return not only potential matches to terrorist watch list records in the Violent Gang and Terrorist Organization File, but also potential matches to the millions of other records in the database. TSC officials noted, however, that not including these records has resulted in a potential vulnerability in screening processes—or at least a missed opportunity to track the movements of individuals who are the subjects of watch list records and collect additional relevant information. According to the FBI, the remaining records are not included to ensure the protection of civil rights and prevent law enforcement officials from taking invasive enforcement action on individuals misidentified as being on the watch list. The FBI also noted that while law enforcement encounters of individuals on the watch list provide significant information, unnecessary detentions or queries of misidentified persons would be counterproductive and potentially damaging to the efforts of the FBI to investigate and combat terrorism. Because of these operational concerns, the FBI noted that the extent of vulnerabilities in current screening processes that arise when the Violent Gang and Terrorist Organization File cannot accept certain watch list records has been determined to be low or nonexistent. We note, however, that the FBI did not specifically address the extent to which security risks are raised by not using these records. The No Fly and Selectee lists are compiled by TSC and forwarded to TSA, which distributes the lists to air carriers for use in identifying individuals who either should be precluded from boarding an aircraft or should receive additional physical screening prior to boarding a flight. TSA requires that U.S. aircraft operators use these lists to screen passengers on all of their flights and that foreign air carriers use these lists to screen passengers on all flights to and from the United States. Of all of the screening agency databases that accept watch list records, only the No Fly and Selectee lists require certain nomination criteria or inclusion standards that are narrower than the “known or appropriately suspected” standard of HSPD-6. Specifically, the lists are to contain any individual, regardless of citizenship, who meets certain nomination criteria established by the Homeland Security Council. Persons on the No Fly list are deemed to be a threat to civil aviation or national security and therefore should be precluded from boarding an aircraft. Passengers who are a match to the No Fly list are to be denied boarding unless subsequently cleared by law enforcement personnel in accordance with TSA procedures. The Homeland Security Council criteria contain specific examples of the types of terrorism-related conduct that may make an individual appropriate for inclusion on the No Fly list. Persons on the Selectee list are also deemed to be a threat to civil aviation or national security but do not meet the criteria of the No Fly list. Being on the Selectee list does not mean that the person will not be allowed to board an aircraft or enter the United States. Instead, persons on this list are to receive additional security screening prior to being permitted to board an aircraft, which may involve a physical inspection of the person and a hand-search of the passenger’s luggage. 
The Homeland Security Council criteria contain specific examples of the types of terrorism-related conduct that may make an individual appropriate for inclusion on the Selectee list, as well as the types of activities that generally would not be considered appropriate for inclusion on the list. According to the Homeland Security Council criteria, the No Fly and Selectee lists are not intended as investigative or information-gathering tools, or tracking mechanisms. Rather, the lists are intended to help ensure the safe transport of passengers and their property and to facilitate the flow of commerce. An individual must meet the specific nomination criteria to be placed on one of the lists, and the watch list record must contain a full name and date of birth to be added to either of the lists. As of May 2007, the No Fly list and the Selectee list collectively contained the lowest percentage of watch list records. The remaining records in TSC’s watch list either did not meet the specific Homeland Security Council nomination criteria or did not meet technical requirements that the records contain a full name and date of birth. TSC could not readily determine how many records fell into each of these two categories. Nonetheless, these records are not provided to TSA for use in prescreening passengers. According to TSA officials, without a full name and date of birth, the current name-matching programs used by airlines would falsely identify an unacceptable number of individuals as potentially being on the watch list. According to DHS, the amount or specific types of biographical information available on the population to be screened should also be considered when determining what portion of the watch list should be used. For example, DHS noted that screening international airline passengers who have provided passport information is very different from screening domestic airline passengers for whom the government has little biographical information. Further, DHS noted that for airline passengers, there is not much time to resolve false positives or determine whether someone on the watch list should be subjected to additional screening prior to departure of a flight, whereas for individuals arriving at U.S. ports of entry from international locations, CBP has more time to interview individuals and resolve issues upon their arrival. For international flights bound to or departing from the United States, two separate screening processes occur. Specifically, in addition to TSA requiring that air carriers prescreen passengers prior to boarding against the No Fly and Selectee lists, CBP screens all passengers on international flights—for border security purposes—against watch list records in the Interagency Border Inspection System. CBP’s screening generally occurs after the aircraft is in flight. This layered or secondary screening opportunity does not exist for passengers traveling domestically within the United States. In 2006, the conference report accompanying the Department of Homeland Security Appropriations Act, 2007, directed TSA to provide a detailed plan describing key milestones and a schedule for checking names against the full terrorist watch list in its planned Secure Flight passenger prescreening program if the administration believes a security vulnerability exists under the current process of checking names against only the No Fly and Selectee lists. 
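The acceptance criteria described in the preceding paragraphs determine which subsets of TSC's consolidated watch list reach each downstream system. The sketch below is simplified and illustrative only: the record fields, subset names, and filtering logic are assumptions for this example, not TSC's actual export implementation; they merely encode the criteria reported above for three of the downstream systems (criteria for the Violent Gang and Terrorist Organization File are omitted because they are not fully specified here).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WatchListRecord:
    # Hypothetical fields used only for this illustration.
    full_name: Optional[str]
    date_of_birth: Optional[str]   # e.g., "1975-03-02"
    us_citizen_or_lpr: bool        # U.S. citizen or lawful permanent resident
    meets_no_fly_criteria: bool    # per Homeland Security Council criteria
    meets_selectee_criteria: bool

def export_subsets(records):
    """Partition watch list records into downstream subsets based on the
    acceptance criteria described in this report (simplified)."""
    subsets = {"IBIS": [], "CLASS_visa": [], "No Fly": [], "Selectee": []}
    for record in records:
        # CBP's Interagency Border Inspection System has the least
        # restrictive acceptance criteria, so every record is included.
        subsets["IBIS"].append(record)
        # The visa-screening part of the Consular Lookout and Support System
        # excludes U.S. citizens and lawful permanent residents, who would
        # not apply for U.S. visas.
        if not record.us_citizen_or_lpr:
            subsets["CLASS_visa"].append(record)
        # The No Fly and Selectee lists require a full name and date of
        # birth in addition to the Homeland Security Council criteria.
        if record.full_name and record.date_of_birth:
            if record.meets_no_fly_criteria:
                subsets["No Fly"].append(record)
            elif record.meets_selectee_criteria:
                subsets["Selectee"].append(record)
    return subsets
```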
According to TSA, the administration has concluded that non-use of the full watch list does not constitute a security vulnerability; however, TSA did not explain the basis for this determination. Also, DHS’s Office for Civil Rights and Civil Liberties emphasized that there is a strong argument against increasing the number of watch list records TSA uses to prescreen passengers. Specifically, the office noted that if more records were used, the number of misidentifications would expand to unjustifiable proportions, increasing administrative costs within DHS, without a measurable increase in security. The office also noted that an expansion of the No Fly and Selectee lists could even alert a greater number of individuals to their watch list status, compromising security rather than advancing it. Further, according to the office, as the number of U.S. citizens denied and delayed boarding on domestic flights increases, so does the interest in maintaining watch list records that are as accurate as possible. Also, the office noted that an increase in denied and delayed boarding of flights could generate volumes of complaints or queries that exceed the current capabilities of the watch list redress process. Key frontline screening agencies within DHS—CBP, U.S. Citizenship and Immigration Services, and TSA—are separately taking actions to address potential vulnerabilities in terrorist watch list-related screening. A particular concern is that individuals on the watch list not pass undetected through agency screening. According to the screening agencies, some of these incidents—commonly referred to as false negatives—have occurred. Irrespective of whether such incidents are isolated aberrations or not, any individual on the watch list who passes undetected through agency screening constitutes a vulnerability. Regarding other ameliorative efforts, TSC has ongoing initiatives that could help reduce false negatives, such as improving the quality of watch list data. CBP, U.S. Citizenship and Immigration Services, and TSA have begun to take actions to address incidents of subjects of watch list records passing undetected through agency screening. The efforts of each of these three DHS component agencies are discussed in the following sections, respectively. Generally, as indicated, positive steps have been initiated by each agency. Given the potential consequences of any given incident, it is particularly important that relevant component agencies have mechanisms in place to systematically monitor such incidents, determine causes, and implement appropriate corrective actions as expeditiously as possible. During our field visits in spring 2006 to selected ports of entry, CBP officers informed us of several incidents involving individuals on the watch list who were not detected until after they had been processed and admitted into the United States. In response to our inquiry at CBP headquarters in May 2006, agency officials acknowledged that there have been such incidents. CBP did not maintain aggregated data on the number of these incidents nationwide or the specific causes, but it did identify possible reasons for failing to detect someone on the watch list. Subsequently, in further response to our inquiries, CBP created a working group to study the causes of incidents involving individuals on the watch list who were not detected by port-of-entry screening. 
The working group, coordinated by the National Targeting Center, is composed of subject matter experts representing the policy, technical, and operations facets within CBP. According to headquarters officials, the group is responsible for (1) identifying and recommending policy solutions within CBP and (2) coordinating any corrective technical changes within CBP and with TSC and NCTC, as appropriate. The working group held its first meeting in early 2007. According to CBP, some corrective actions and measures have already been identified and are in the process of being implemented. Agencies are also working to eliminate shortcomings in screening processes that have allowed applicants for citizenship and other immigration benefits who are subjects of watch list records to pass undetected through agency screening. The cognizant agency, U.S. Citizenship and Immigration Services, is to screen all individuals who apply for U.S. citizenship or other immigration benefits—such as work authorization—for information relevant to their eligibility for these benefits. According to U.S. Citizenship and Immigration Services officials, the agency does not maintain aggregated data on the number of times the initial screening has failed to identify individuals who are subjects of watch list records or the specific causes. The officials noted, however, that for certain applicants—including individuals seeking long-term benefits such as citizenship, lawful permanent residence, or asylum—additional screening against watch list records is conducted. This additional screening has generated some positive matches to watch list records that were not detected during the initial checks. According to U.S. Citizenship and Immigration Services, each instance of individuals on the watch list getting through agency screening is reviewed on a case-by-case basis to determine the cause, with appropriate follow-up and corrective action taken, if needed. As a prospective enhancement, in April 2007, U.S. Citizenship and Immigration Services entered into a memorandum of understanding with TSC. If implemented, this enhancement could allow U.S. Citizenship and Immigration Services to conduct more thorough and efficient searches of watch list records during the screening of benefit applicants. In the past, there have been a number of known cases in which individuals who were on the No Fly list passed undetected through airlines' prescreening of passengers and flew on international flights bound to or from the United States, according to TSA data. These individuals were subsequently identified in-flight by other means—specifically, screening of passenger manifests conducted by CBP's National Targeting Center. Because these individuals were identified only after departure, the potential onboard security threat required an immediate counterterrorism response, which in some instances resulted in diverting the aircraft to a location other than its original destination. TSA provided various reasons why an individual who is on the No Fly list may not be detected by air carriers during their prescreening against the list. However, TSA had not analyzed the extent to which each cause contributed to such incidents. According to TSA, the agency's regulatory office is responsible for initiating investigative and corrective actions with the respective air carrier, if needed. For international flights bound to or from the United States, two separate screening processes occur.
In addition to the initial prescreening conducted by the airlines in accordance with TSA requirements, CBP’s National Targeting Center screens passengers against watch list records in the Interagency Border Inspection System using information that is collected from air carriers’ passenger manifests, which contain information obtained directly from government-issued passports. Specifically, for passengers flying internationally, airlines are required to provide passenger manifest data obtained at check-in from all passengers to CBP. Presently, CBP requires airlines to transmit the passenger data no later than 15 minutes prior to departure for outbound flights and no later than 15 minutes after departure for inbound flights. Because the transmission of this information occurs so close to the aircraft’s departure, the National Targeting Center’s screening of the information against watch list records in the Interagency Border Inspection System—which includes a check of records in the No Fly list—often is not completed until after the aircraft is already in the air. If this screening produces a positive match to the No Fly list, the National Targeting Center is to coordinate with other federal agencies to determine what actions to take. Procedures described in the final rule issued by CBP and published in the Federal Register on August 23, 2007, could help mitigate instances of individuals on the No Fly list boarding international flights bound to or from the United States. Specifically, the rule will require air carriers to either transmit complete passenger manifests to CBP no later than 30 minutes prior to the securing of the aircraft doors, or transmit manifest information on an individual basis as each passenger checks in for the flight up to but no later than the securing of the aircraft. When implemented (the rule is to take effect on February 19, 2008), CBP should be better positioned to identify individuals on the No Fly list before an international flight is airborne. Regarding domestic flights within the United States, there is no second screening opportunity using watch list-related information. Rather, the airlines are responsible for prescreening passengers prior to boarding in accordance with TSA requirements and using the No Fly and Selectee lists provided by TSA. Although TSA has been mandated to assume responsibility for conducting the watch list screening function from the airline industry, the agency’s proposed prescreening program, known as Secure Flight, has not yet been implemented. Under the Secure Flight program, TSA plans to take over from aircraft operators the responsibility for comparing identifying information on airline passengers against watch list records. We have reported and TSA has acknowledged significant challenges in developing and implementing the Secure Flight program. Last year, TSA suspended Secure Flight’s development to reassess, or rebaseline, the program. The rebaselining effort included reassessing the program goals, the expected benefits and capabilities, and the estimated schedules and costs. According to TSC officials who have been working with TSA to support implementation of Secure Flight, the program could help to reduce potential vulnerabilities in the prescreening of airline passengers on domestic flights. To help reduce vulnerabilities in watch list-related screening, TSC has ongoing initiatives to improve the effectiveness of screening and ensure the accuracy of data. 
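The manifest-transmission deadlines in CBP's August 2007 final rule, described above, amount to a simple timing check. The sketch below is illustrative only; the function name, parameters, and timestamps are assumptions for this example and do not represent CBP's actual systems or enforcement logic.

```python
from datetime import datetime, timedelta

def manifest_transmission_compliant(door_secure_time: datetime,
                                     transmission_time: datetime,
                                     per_passenger: bool) -> bool:
    """Illustrative check of the two transmission options in the final rule
    (simplified; not CBP's actual logic)."""
    if per_passenger:
        # Option 2: individual transmissions as each passenger checks in,
        # up to but no later than the securing of the aircraft doors.
        return transmission_time <= door_secure_time
    # Option 1: a complete manifest no later than 30 minutes prior to the
    # securing of the aircraft doors.
    return transmission_time <= door_secure_time - timedelta(minutes=30)

# Example: a complete manifest sent 20 minutes before the doors are secured
# would not satisfy option 1 under the new rule.
doors_secured = datetime(2008, 2, 19, 10, 0)
print(manifest_transmission_compliant(doors_secured,
                                      doors_secured - timedelta(minutes=20),
                                      per_passenger=False))  # False
```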
Also, prospectively, TSC anticipates developing a capability to link biometric data to supplement name-based screening. Generally, to handle the large volumes of travelers and others who must be screened, federal agencies and most airlines use computer-driven algorithms to rapidly compare the names of individuals against applicable terrorist watch list records. In the name-matching process, the number of likely matching records returned for manual review depends partly upon the sensitivity thresholds of the algorithms to variations in name spelling or representations of names from other languages. Screening agencies, and airlines in accordance with TSA requirements, have discretion in setting these thresholds, which can have operational implications. If a threshold is set relatively high, for example, more names may be cleared and fewer flagged as possible matches, increasing the risk of false negatives—that is, failing to identify an individual whose name is on the terrorist watch list. Conversely, if a threshold is set relatively low, more individuals who do not warrant additional scrutiny may be flagged (false positives), with fewer cleared through an automated process. A primary factor in designing a computerized name-matching process is the need to balance minimizing the possibility of generating false negatives, while not generating an unacceptable number of false positives (misidentifications). To help ensure awareness of best practices among agencies, TSC has formed and chairs an interagency working group—the Federal Identity Match Search Engine Performance Standards Working Group—that met initially in December 2005. An objective of the working group is to provide voluntary guidance for federal agencies that use identity matching search engine technology. Essentially, the prospective guidance is intended to improve the effectiveness of identity matching across agencies by, among other means, assessing which algorithms or search engines are the most effective for screening specific types or categories of names. According to TSC, three agencies have volunteered to participate in pilot programs in the summer of 2007, after which a target date for completing the initiative to develop and provide voluntary guidance to screening agencies will be set. If effectively implemented, this initiative could help reduce potential vulnerabilities in screening processes that are based on limitations in agencies’ computerized name-matching programs. TSC is also developing a process whereby screening agencies can directly “query” the center’s consolidated terrorist screening database. TSC noted that a direct-query capability will ensure that all possible hits against the database will be directed automatically into the center’s resolution process to determine if they are positive matches, thereby ensuring consistency in the government’s approach to screening. Currently, TSC must rely upon the screening agencies to contact the center—generally by telephone or fax—when they have possible hits. As of May 2007, TSC had not developed specific time frames for implementing this initiative. According to TSC, the technology for a direct-query capability is in place, but related agreements with screening agencies were still being negotiated. Preventing incidents of individuals on the watch list passing undetected through agency screening is dependent partly on the quality and accuracy of data in TSC’s consolidated terrorist watch list. 
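The tradeoff between false negatives and false positives in the computerized name matching described earlier in this section can be illustrated with a minimal sketch. The similarity measure, names, and threshold values below are assumptions chosen only for illustration; the algorithms actually used by screening agencies and air carriers are more sophisticated and are not described in this report.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude similarity score between two normalized names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_name(traveler_name, watch_list_names, threshold):
    """Return the watch list names flagged as possible matches, that is,
    those whose similarity to the traveler's name meets the threshold."""
    return [name for name in watch_list_names
            if similarity(traveler_name, name) >= threshold]

# Hypothetical name and thresholds, for illustration only.
watch_list_names = ["John Q. Public"]
for threshold in (0.95, 0.85):
    print(threshold, screen_name("John Public", watch_list_names, threshold))
# With the higher threshold, the spelling variant is cleared automatically
# (a potential false negative); with the lower threshold, it is flagged for
# manual review (at the cost of more false positives across a real workload).
```

In practice, such thresholds would be tuned against representative name data, which is one purpose of the interagency working group's prospective guidance described above.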
In June 2005, the Department of Justice's Office of the Inspector General reported that its review of TSC's consolidated watch list found several problems—such as inconsistent record counts, duplicate records, missing data fields for some records, and unclear sources for some records. Among other things, the Inspector General recommended that TSC develop procedures to regularly review and test the information contained in the consolidated terrorist watch list to ensure that the data are complete, accurate, and nonduplicative. In its September 2007 follow-up report, the Inspector General noted that TSC has enhanced its efforts to ensure the quality of watch list data and has increased the number of staff assigned to data quality management. However, the Inspector General also determined that TSC's management of the watch list continues to have weaknesses. TSC has ongoing quality-assurance initiatives to identify and correct incomplete or inaccurate records that could contribute to either false negatives or false positives. The center's director and principal deputy director stressed to us that quality of data is a high priority and also a continuing challenge, particularly given that the database is dynamic, changing frequently with additions, deletions, and modifications. The officials noted the equal importance of ensuring that (1) the names of known and appropriately suspected terrorists are included on the watch list and (2) the names of any individuals who are mistakenly listed or are cleared of any nexus to terrorism are removed. In this regard, the officials explained that TSC's standard operating practices include at least three opportunities to review records. First, TSC staff—including subject matter experts detailed to the center from other agencies—review each incoming record submitted (nominated) to the center for inclusion on the consolidated watch list. Second, every time there is a screening encounter—for example, a port-of-entry screening of an individual that generates an actual or a potential match with a watch list record—that record is reviewed again. And third, records are reviewed when individuals express their concerns or seek correction of any inaccurate data—a process often referred to as redress. Conceptually, biometric technologies based on fingerprint recognition, facial recognition, or other physiological characteristics can be used to screen travelers against a consolidated database, such as the terrorist watch list. However, TSC presently does not have this capability, although use of biometric information to supplement name-based screening is planned as a future enhancement. Specifically, TSC's strategy is not to replicate existing biometric data systems. Rather, the strategy, according to TSC's director and principal deputy director, is to develop a "pointer" capability to facilitate the online linking of name-based searches to relevant biometric systems, such as the FBI's Integrated Automated Fingerprint Identification System—a computerized system for storing, comparing, and exchanging fingerprint data in digital format that contains the largest criminal biometric database in the world. TSC officials recognize that even biometric systems have screening limitations; for example, relevant federal agencies may have no fingerprints or other biometrics to correlate with many of the biographical records in TSC's watch list.
For instance, watch list records may be based on intelligence gathered by electronic wiretaps or other methods that involve no opportunity to obtain biometric data. Nonetheless, TSC officials anticipate that biometric information, when available, can be especially useful for confirming matches to watch list records when individuals use false identities or aliases. Although the U.S. government has made progress in using watch list records to support terrorism-related screening, there are additional opportunities for using the list. Internationally, the Department of State has made arrangements with six foreign governments to exchange terrorist watch list information and is in negotiations with several other countries. Within the private sector, some critical infrastructure components are presently using watch list records to screen current or prospective employees, but many components are not. DHS has not established guidelines to govern the use of watch list records for appropriate screening opportunities in the private sector that have a substantial bearing on homeland security. Further, not all federal departments and agencies have taken action in accordance with HSPD-6 and HSPD-11 to identify and describe all appropriate screening opportunities that should use watch list records. According to TSC, determining whether new screening opportunities are appropriate requires evaluation of multiple factors, including operational and legal issues—particularly related to privacy and civil liberties. To date, appropriate opportunities have not been systematically identified or evaluated, in part because the federal government lacks an up-to-date strategy and a prioritized investment and implementation plan for optimizing the use and effectiveness of terrorist-related screening. Moreover, the lines of authority and responsibility to provide governmentwide coordination and oversight of such screening are not clear, and existing entities with watch list responsibilities may not have the necessary authority, structure, or resources to assume this role. According to the 9/11 Commission, the U.S. government cannot meet its obligations to the American people to prevent the entry of terrorists into the United States without a major effort to collaborate with other governments. The commission noted that the U.S. government should do more to exchange terrorist information with trusted allies and raise U.S. and global border security standards for travel and border crossing over the medium and long term through extensive international cooperation. HSPD-6 required the Secretary of State to develop a proposal, for the President's approval, to enhance cooperation with certain foreign governments—beginning with those countries for which the United States has waived visa requirements—to establish appropriate access to terrorism screening information of the participating governments. This information would be used to enhance existing U.S. government screening processes. The Department of State determined that the most effective way to obtain this information was to seek bilateral arrangements to share information on a reciprocal basis. The Department of State's Bureau of Consular Affairs and the Homeland Security Council co-chair an interagency working group to implement the international cooperation provisions of HSPD-6. According to the Department of State, there is no single document or proposal that sets forth the working group's approach or plan.
Rather, a series of consensus decisions specify how to proceed, often on a country-by-country basis in order to accommodate each country’s laws and political sensitivities. The working group met six times from September 2005 through December 2006 to discuss operational and procedural issues related to sharing terrorism information and to update working group members on the status of bilateral negotiations with foreign governments. According to the Department of State, the department’s Bureau of Consular Affairs has approached all countries for which the United States has waived visa requirements and two non-visa waiver program countries with a proposal to exchange terrorist screening information. From October through December 2006, interagency teams visited six countries to brief government officials and also met in Washington, D.C., with representatives of a number of other countries. According to the Department of State, interagency working groups at U.S. embassies around the world remain actively engaged with foreign counterparts and coordinate discussions on international sharing of terrorist screening information with a Department of State team in Washington, D.C. Two countries have been sharing terrorist screening information with the United States since before September 11, 2001, and that information has been integrated into TSC’s consolidated watch list and, as applicable, into screening agencies’ databases. According to the Department of State, since 2006, the United States has made arrangements to share terrorist screening information with four new foreign government partners and is in negotiations with several other countries. The department noted that it had also received indications of interest from governments of non-visa waiver countries. Although federal departments and agencies have made progress in using terrorist watch list records to support private sector screening processes, there are additional opportunities for using records in the private sector. However, DHS has not yet finalized guidelines to govern such use. Specifically, HSPD-6 required the Secretary of Homeland Security to develop guidelines to govern the use of terrorist information, as defined by the directive, to support various screening processes, including private sector screening processes that have a substantial bearing on homeland security. The interagency memorandum of understanding that implements HSPD-6 also required the Secretary of Homeland Security to establish necessary guidelines and criteria to (a) govern the mechanisms by which private sector entities can access the watch list and (b) initiate appropriate law enforcement or other governmental action, if any, when a person submitted for query by a private sector entity is identified as a person on the watch list. According to the Associate Director of the Screening Coordination Office within DHS, in developing guidelines to govern private sector screening against watch list records, the department planned to partner with the National Infrastructure Advisory Council. The council had previously reported that the private sector wants to be informed about threats and potential terrorists. Specifically, in its July 2006 report on public and private sector intelligence coordination, the National Infrastructure Advisory Council noted that chief executive officers of private sector corporations expect to be informed when the government is aware of a specific, credible threat to their employees, physical plants, or cyber assets. 
The report also noted that chief executive officers expect to be informed if the government knows that their respective company has inadvertently employed a terrorist. According to DHS’s Office of Infrastructure Protection and Infrastructure Partnerships Division, employees in parts of some components of the private sector are being screened against watch list records, including certain individuals who have access to the protected or vital areas of nuclear power plants, work in airports, and transport hazardous materials. However, many critical infrastructure components are not using watch list records. The office also indicated that several components of the private sector are interested in screening employees against watch list records or expanding current screening. In its June 2007 comments on a draft of this report (see app. V), DHS noted that the Screening Coordination Office has drafted initial guidelines to govern the use of watch list records to support private sector screening processes and was in the process of working with federal stakeholders to finalize this document. However, DHS did not provide specific plans and time frames for finalizing the guidelines. Establishing guidelines to govern the private sector’s use of watch list records, in accordance with HSPD-6, would help in identifying and implementing appropriate screening opportunities. Although required to do so by presidential directives, federal departments and agencies have not identified all appropriate screening opportunities that should use terrorist watch list records. Specifically, HSPD-6 required the heads of executive departments and agencies to conduct screening using the terrorist watch list at all appropriate opportunities, and to report the opportunities at which such screening shall and shall not be conducted to the Attorney General. TSC provided an initial report on screening opportunities to the Attorney General on December 15, 2003. According to the report, TSC hosted a meeting with representatives of more than 30 agencies in October 2003 to discuss the HSPD-6 requirement. At the meeting, TSC requested that the agencies identify appropriate screening opportunities and report them to TSC. However, the report noted that based on the agency responses TSC received, no meaningful or comprehensive report on screening opportunities could be produced at that time. TSC provided additional reports to the Attorney General in April, July, and December 2004. These reports also did not contain comprehensive information on all screening opportunities, consistent with HSPD-6. According to the Department of Justice, with the issuance of HSPD-11, which “builds upon” HSPD-6, the Attorney General’s responsibilities for identifying additional screening opportunities were largely overtaken by DHS which, in coordination with the Department of Justice and other agencies, was to create a comprehensive strategy to enhance the effectiveness of terrorist-related screening activities. Among other things, the strategy was to include a description of the screening opportunities for which terrorist-related screening would be applied. DHS has taken some related actions but, as of June 2007, it had not systematically identified all appropriate screening opportunities. Absent a systematic approach to identifying appropriate screening opportunities, TSC has been working with individual agencies to identify such opportunities. 
According to TSC, as of May 2007, the center was working on approximately 40 agreements with various federal departments or agencies to use applicable portions of the terrorist watch list. Also, a systematic approach to identifying screening opportunities would help the government determine if other uses of watch list records are appropriate and should be implemented, including uses primarily intended to assist in collecting information to support investigative activities. Such coordinated collection of information for use in investigations is one of the stated policy objectives for the watch list. For example, during our review, TSC noted that screening domestic airline passengers against watch list records in addition to those in the No Fly and Selectee lists would have benefits, such as collecting information on the movements of individuals with potential ties to terrorism. According to TSC, other factors would need to be considered in determining whether such screening is appropriate and should be implemented, including privacy and civil liberties implications. Moreover, it is not clear whether such screening is operationally feasible, and if it were, whether TSC or some other agency would perform the screening. Since September 11, 2001, we, as well as the Administration, have called for a more strategic approach to managing terrorist-related information and using it for screening purposes. In April 2003, we made recommendations for improving the information technology architecture environment needed to support watch list-related screening and called for short- and long-term strategies that would provide for (1) more consolidated and standardized watch list information and (2) more standardized policies and procedures for better sharing watch list data and for addressing any legal issues or cultural barriers that affect watch list sharing. Subsequently, in August 2004, HSPD-11 outlined the Administration’s vision to develop comprehensive terrorist-related screening procedures. Specifically, HSPD-11 required the Secretary of Homeland Security—in coordination with the heads of appropriate federal departments and agencies—to submit two reports to the President (through the Assistant to the President for Homeland Security) related to the government’s use of the watch list. Among other things, the first report was to outline a strategy to enhance the effectiveness of terrorist-related screening activities by developing comprehensive, coordinated, and systematic procedures and capabilities. The second report was to provide a prioritized investment and implementation plan for a systematic approach to terrorist-related screening that optimizes detection and interdiction of suspected terrorists and terrorist activities. The plan was to describe the “scope, governance, principles, outcomes, milestones, training objectives, metrics, costs, and schedule of activities” to enhance and implement the U.S. government’s terrorism-related screening policies. According to DHS officials, the department submitted the required strategy and the investment and implementation plan to the President in November 2004. However, neither DHS nor the Homeland Security Council would provide us copies of either report. Instead, officials from DHS’s Screening Coordination Office provided us a document that they said contained department-specific information from the 2004 strategy and implementation plan. 
According to DHS officials, because the strategy and plan were products of an interagency process, the Screening Coordination Office believed that it needed to redact information that pertained to other departments' processes, programs, or activities. The DHS document contains information on the department's efforts to catalogue its terrorist-related screening activities and identifies significant issues that inhibit effective terrorist-related screening. For example, according to the document, "no one entity within the department is responsible for defining roles and responsibilities for terrorist-related screening, identifying gaps and overlaps in screening opportunities, prioritizing investments, measuring performance, or setting technical and non-technical standards." Also, the document notes that DHS components may have only limited knowledge of what screening is currently being performed by others within the department, because there is no coordination mechanism to share information on these activities. DHS acknowledged that it has not updated either the strategy or the plan since the 2004 reports, even though some aspects of the strategy and plan had been overtaken by subsequent events, such as the results of the "Second Stage Review" initiated in March 2005 by the Secretary of Homeland Security. Moreover, according to DHS screening managers, the departmental office responsible for updating these documents—the Screening Coordination Office—was not established until July 2006 and has had other screening-related priorities. The officials noted that the Screening Coordination Office is working on various aspects of terrorist-related screening, but that work remains to update the strategy and the investment and implementation plan. Without an updated strategy and plan, the federal government lacks mechanisms to support the comprehensive and coordinated approach to terrorist-related screening envisioned by the Administration, including mechanisms for building upon existing systems and best practices. Also, the federal government has not taken necessary actions to promote the effective use of watch list records at all appropriate screening opportunities, including private sector screening processes that have a substantial bearing on homeland security. An updated strategy and an investment and implementation plan that address the elements prescribed by HSPD-11—particularly clearly articulated principles, milestones, and outcome measures—could also provide a basis for establishing governmentwide priorities for screening, assessing progress toward policy goals and intended outcomes, ensuring that any needed changes are implemented, and responding to issues our work identified, such as potential screening vulnerabilities and interagency coordination challenges. Recognizing that achievement of a coordinated and comprehensive approach to terrorist-related screening involves numerous entities within and outside the federal government, HSPD-11 called for DHS to address governance in the investment and implementation plan. To date, however, no governance structure with clear lines of responsibility and authority has been established to monitor governmentwide screening activities—such as assessing gaps or vulnerabilities in screening processes and identifying, prioritizing, and implementing new screening opportunities.
Because clear lines of authority and responsibility for terrorist-related screening activities that transcend the individual missions and more parochial operations of each department and agency are lacking, it is difficult for the federal government to monitor its efforts and to identify best practices or common corrective actions that could help to ensure that watch list records are used as effectively as possible. More clearly defined responsibility and authority to implement and monitor crosscutting initiatives could help ensure a more coordinated and comprehensive approach to terrorist-related screening by providing applicable departments and agencies important guidance, information, and mechanisms for addressing screening issues. Until the governance component of the investment and implementation plan is clearly articulated and established, it will not be possible to assess whether its structure is capable of providing the oversight necessary for optimizing the use and effectiveness of terrorist-related screening. Our interviews with responsible officials and our analysis of department and agency missions suggest, however, that existing organizations with watch list-related responsibilities may lack the authority, resources, or will to assume this role. Specifically, DHS screening officials told us that the department is the appropriate entity for coordinating the development of the watch list strategy and the related investment and implementation plan, but that it does not have the authority or resources for providing the governmentwide oversight needed to implement the strategy and plan or resolve interagency issues. The Office of the Director of National Intelligence and its NCTC also have important roles in watch list-related issues and information-sharing activities, but officials there told us that the agency is not suited for a governmentwide leadership role either, primarily because its mission focuses on intelligence and information sharing in support of screening but not on actual screening operations. Likewise, since its inception, TSC has played a central role in coordinating watch list-related activities governmentwide and has established its own governance board—composed of senior-level agency representatives from numerous departments and agencies—to provide guidance concerning issues within TSC's mission and authority. While this governance board could be suited to assume more of a leadership role, its current authority is limited to TSC-specific issues, and it would need additional authority to provide effective coordination of terrorist-related screening activities and interagency issues governmentwide. Managed by TSC, the terrorist watch list represents a major step forward from the pre-September 11 environment of multiple, disconnected, and incomplete watch lists throughout the government. Today, the watch list is an integral component of the U.S. government's counterterrorism efforts. However, our work indicates that there are additional opportunities for reducing potential screening vulnerabilities. It is important that responsible federal officials assess the extent to which security vulnerabilities exist in screening processes when, for technical or operational reasons, agencies cannot screen individuals against all applicable watch list records and thus cannot determine the level of threat those individuals pose. In consultation with TSC and other agencies, these officials should also determine whether alternative screening or other mitigation activities should be considered.
Our work also indicates the need for a more coordinated and comprehensive approach to terrorist-related screening through expanded use of the list and enhanced collaboration and coordination within and outside the federal government. To further strengthen the ability of the U.S. government to protect against acts of terrorism, HSPD-6 required the Secretary of Homeland Security to develop guidelines to govern the use of terrorist information to support various screening processes, including private sector screening processes that have a substantial bearing on homeland security. To date, however, DHS has not developed guidelines for the private sector's use of watch list records in screening designed to protect the nation's critical infrastructures. Currently, some but not all relevant components of the private sector use the watch list to screen for terrorist-related threats. Establishing clear guidelines to comply with the presidential directive would help both the private sector and DHS ensure that private sector entities are using watch list records consistently, appropriately, and effectively to protect their workers, visitors, and key critical assets. HSPD-11 outlined the Administration's vision to implement a coordinated and comprehensive approach to terrorist-related screening and directed the Secretary of Homeland Security to coordinate with other federal departments to develop (1) a strategy for a coordinated and comprehensive approach to terrorist-related screening and (2) a prioritized investment and implementation plan that describes the scope, governance, principles, outcomes, milestones, training objectives, metrics, costs, and schedule of activities necessary to achieve the policy objectives of HSPD-11. DHS officials acknowledged that work remains to update the strategy and the investment and implementation plan. Without an up-to-date strategy and plan, agencies and organizations that engage in terrorist-related screening activities do not have a foundation for a coordinated approach that is driven by an articulated set of core principles. Furthermore, lacking clearly articulated principles, milestones, and outcome measures, the federal government is not easily able to provide accountability and a basis for monitoring to ensure that (1) the intended goals for, and expected results of, terrorist screening are being achieved and (2) use of the list is consistent with privacy and civil liberties. These plan elements, which were prescribed by HSPD-11, are crucial for coordinated and comprehensive use of terrorist-related screening data, as they provide a platform to establish governmentwide priorities for screening, assess progress toward policy goals and intended outcomes, ensure that any needed changes are implemented, and respond to issues that hinder effectiveness, such as the potential vulnerabilities and interagency coordination challenges discussed in this report. Although all elements of a strategy and an investment and implementation plan cited in HSPD-11 are important to guide realization of the most effective use of watch list data, addressing governance is particularly vital, as achievement of a coordinated and comprehensive approach to terrorist-related screening involves numerous entities within and outside the federal government.
Establishing a governance structure with clearly defined responsibility and authority would help ensure that agency efforts are coordinated and the federal government has the means to monitor and analyze the outcomes of interagency efforts and to address common problems efficiently and effectively. To date, however, no clear lines of responsibility and authority have been established to monitor governmentwide screening activities for shared problems and solutions or best practices. Neither does any existing entity clearly have the requisite authority for addressing various governmentwide issues—such as assessing common gaps or vulnerabilities in screening processes and identifying, prioritizing, and implementing new screening opportunities. Indeed, current unresolved interagency issues highlight the need for clearly defined leadership and accountability for managing and overseeing watch list-related issues across the individual departments and agencies, each of which has its own mission and focus. To promote more comprehensive and coordinated use of terrorist-related screening data to detect, identify, track, and interdict suspected terrorists, we recommended a total of five actions in the restricted version of this report. First, in order to mitigate security vulnerabilities in terrorist watch list screening processes, we recommended that the Secretary of Homeland Security and the Director of the FBI assess to what extent there are vulnerabilities in the current screening processes that arise when screening agencies do not accept relevant records due to the designs of their computer systems, the extent to which these vulnerabilities pose a security risk, and what actions, if any, should be taken in response. Further, we recommended the following three actions to enhance the use of the consolidated terrorist watch list as a counterterrorism tool and to help ensure its effectiveness: that the Secretary of Homeland Security in consultation with the heads of other appropriate federal departments and agencies and private sector entities, develop guidelines to govern the use of watch list records to support private sector screening processes that have a substantial bearing on homeland security, as called for in HSPD-6; that the Secretary of Homeland Security in consultation with the heads of other appropriate federal departments, develop and submit to the President through the Assistant to the President for Homeland Security and Counterterrorism an updated strategy for a coordinated and comprehensive approach to terrorist-related screening as called for in HSPD-11, which among other things, (a) identifies all appropriate screening opportunities to use watch list records to detect, identify, track, and interdict individuals who pose a threat to homeland security and (b) safeguards legal rights, including privacy and civil liberties; and that the Secretary of Homeland Security in consultation with the heads of other appropriate federal departments, develop and submit to the President through the Assistant to the President for Homeland Security and Counterterrorism an updated investment and implementation plan that describes the scope, governance, principles, outcomes, milestones, training objectives, metrics, costs, and schedule of activities necessary for implementing a terrorist-related screening strategy, as called for in HSPD-11. 
Finally, to help ensure that governmentwide terrorist-related screening efforts have the oversight, accountability, and guidance necessary to achieve the Administration’s vision of a comprehensive and coordinated approach, we recommended that the Assistant to the President for Homeland Security and Counterterrorism ensure that the governance structure proposed by the plan affords clear and adequate responsibility and authority to (a) provide monitoring and analysis of watch list screening efforts governmentwide, (b) respond to issues that hinder effectiveness, and (c) assess progress toward intended outcomes. We provided a draft of the restricted version of this report for comments to the Homeland Security Council, the Office of the Director of National Intelligence, and the Departments of Homeland Security, Justice, and State. We also provided relevant portions of a draft of the restricted version of this report for comments to the Social Security Administration. We received written responses from each entity, except for the Homeland Security Council. In its response, DHS noted that it agreed with and supported our work and stated that it had already begun to address issues identified in our report’s findings. The response noted that DHS, working closely with the FBI and the Office of the Director of National Intelligence, has ongoing efforts to ensure that potential watch list vulnerabilities are identified and addressed and that watch list records and screening programs are appropriate. Also, DHS noted that at the time of our audit work, the department’s Screening Coordination Office was relatively new—established in July 2006—but had subsequently added key staff and begun the critical work of advancing DHS screening programs and opportunities. According to DHS, the office has drafted initial guidelines to govern the use of watch list records to support private sector screening processes and is working with federal stakeholders to finalize this document, but the department did not provide specific plans and time frames for finalizing the guidelines. The department also noted that it works closely with all DHS and federal offices involved in screening initiatives and has begun appropriate outreach to the private sector. Further, DHS noted that its Screening Coordination Office is working within the department to advance a comprehensive approach to terrorist-related screening and that DHS would review and appropriately update the department’s investment and implementation plans for screening opportunities. However, DHS did not specifically address our recommendations related to updating the governmentwide terrorist-related screening strategy and the investment and implementation plan, which is to include the scope, governance, principles, outcomes, milestones, training objectives, metrics, costs, and schedule of activities necessary for implementing the strategy. In our view, an updated strategy and plan are important for helping to ensure a coordinated and comprehensive approach to terrorist-related screening as called for in HSPD-11. The full text of DHS’s written comments is reprinted in appendix V. DHS also provided technical comments, which we incorporated in this report where appropriate. The FBI, responding on behalf of the Department of Justice, commented that the report correctly characterized the FBI’s criteria for nominating individuals for inclusion on the watch list. 
Also, the FBI response noted that to ensure the protection of civil rights and prevent law enforcement officials from taking invasive enforcement action on individuals misidentified as being on the watch list, the Violent Gang and Terrorist Organization File is designed to not accept certain watch list records. The FBI explained that while law enforcement encounters of individuals on the watch list provide significant information, unnecessary detentions or queries of misidentified persons would be counterproductive and potentially damaging to the efforts of the FBI to investigate and combat terrorism. Because of these operational concerns, the FBI noted that our recommendation to assess the extent of vulnerabilities in current screening processes that arise when the Violent Gang and Terrorist Organization File cannot accept certain watch list records has been completed and the vulnerability has been determined to be low or nonexistent. In our view, however, recognizing operational concerns does not constitute assessing vulnerabilities. Thus, while we understand the FBI's operational concerns, we maintain it is still important that the FBI assess to what extent vulnerabilities or security risks are raised by not screening against certain watch list records and what actions, if any, should be taken in response. With respect to private sector screening, the FBI commented that it has assigned staff to assist the DHS Screening Coordination Office with drafting related screening guidelines. Finally, the FBI commented that the language of our recommendation related to governance of the watch-listing process may be interpreted to have some overlap with existing mandates carried out by TSC under HSPD-6. Specifically, the FBI noted that governance of the watch-listing process is better suited to be a component of TSC, rather than DHS. The FBI explained that DHS has no authority or provisions for establishing any watch-listing procedures for anyone other than DHS component agencies, whereas TSC has established a governance board composed of senior members from the nominating and screening agencies, the Office of the Director of National Intelligence, and the Homeland Security Council to monitor and update the watch-listing process. The FBI further explained that these members meet regularly and address terrorist watch-listing issues ranging from nominations and encounters to dissemination of information and intelligence collected, and that all decisions approved by the governance board are presented at the Deputies Meeting chaired by the White House. The FBI believes this is the appropriate forum for obtaining a commitment from all of the entities involved in the watch-listing process. We recognize that TSC and its governance board have played and will continue to play a central role in coordinating watch list-related activities governmentwide. However, as discussed in this report, TSC's governance board is currently responsible for providing guidance concerning issues within TSC's mission and authority and would need additional authority to provide effective coordination of terrorist-related screening activities and interagency issues governmentwide. We are not recommending that a new governance structure be created that overlaps with existing mandates or activities currently carried out by TSC and other entities.
Rather, we are recommending that a governance structure be established that affords clear and adequate responsibility and authority to (a) provide monitoring and analysis of watch list screening efforts governmentwide, (b) respond to issues that hinder effectiveness, and (c) assess progress toward intended outcomes. The FBI also provided technical comments, which we incorporated in this report where appropriate. The Office of the Director of National Intelligence, the Department of State, and the Social Security Administration provided technical comments only, which we incorporated in this report where appropriate. As arranged with your offices, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies of the report to interested congressional committees and subcommittees. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report were Danny R. Burton, Virginia A. Chanley, R. Eric Erdman, Michele C. Fejfar, Jonathon C. Fremont, Kathryn E. Godfrey, Richard B. Hung, Thomas F. Lombardi, Donna L. Miller, Raul Quintero, and Ronald J. Salo. In response to a request from the Chairman and the Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs, the Chairman and the Ranking Member of the Permanent Subcommittee on Investigations, and the Chairman and the Ranking Member of the House Committee on Homeland Security, we addressed the following questions: In general, what standards do the National Counterterrorism Center (NCTC) and the Federal Bureau of Investigation (FBI) use in determining which individuals are appropriate for inclusion on the Terrorist Screening Center’s (TSC) consolidated watch list? Since TSC became operational in December 2003, how many times have screening and law enforcement agencies positively matched individuals to terrorist watch list records, and what do the results or outcomes of these encounters indicate about the role of the watch list as a counterterrorism tool? To what extent do the principal screening agencies whose missions most frequently and directly involve interactions with travelers check against all records in TSC’s consolidated watch list? If the entire watch list is not being checked, why not, what potential vulnerabilities exist, and what actions are being planned to address these vulnerabilities? To what extent are Department of Homeland Security component agencies monitoring known incidents in which subjects of watch list records pass undetected through screening processes, and what corrective actions have been implemented or are being planned to address these vulnerabilities? What actions has the U.S. government taken to ensure that the terrorist watch list is used as effectively as possible, governmentwide and in other appropriate venues? In addressing these questions, we reviewed TSC’s standard operating procedures and other relevant documentation, including statistics on screening encounters with individuals who were positively matched to terrorist watch list records, and we interviewed TSC officials, including the director and the principal deputy director. 
Further, we reviewed documentation and interviewed senior officials from the FBI’s Counterterrorism Division and the principal screening agencies whose missions most frequently and directly involve interactions with travelers. Specifically, at the Transportation Security Administration (TSA), we examined the screening of air passengers prior to their boarding a flight; at U.S. Customs and Border Protection (CBP), we examined the screening of travelers entering the United States through ports of entry; and at the Department of State, we examined the screening of nonimmigrant visa applicants. We also visited a nonprobability sample of screening agencies and investigative agencies in geographic areas of four states (California, Michigan, New York, and Texas). We chose these locations on the basis of geographic variation and other factors. More details about the scope and methodology of our work regarding each of the objectives are presented in the following sections, respectively. To ascertain the general standards used in determining which individuals are appropriate for inclusion on TSC’s consolidated watch list, we reviewed available documentation. In particular, we reviewed Homeland Security Presidential Directive 6, which specifies that TSC’s consolidated watch list is to contain information about individuals “known or appropriately suspected to be or have been engaged in conduct constituting, in preparation for, in aid of, or related to terrorism;” an NCTC document on building a single database of known and suspected terrorists for the U.S. government, which provides NCTC’s standards for including individuals on the watch list; the Attorney General’s Guidelines for FBI National Security Investigations and Foreign Intelligence Collection, which provide standards for opening FBI international terrorism investigations; and the Attorney General’s Guidelines on General Crimes, Racketeering Enterprise and Terrorist Enterprise Investigations, which provide standards for opening FBI domestic terrorism investigations. We discussed implementation of applicable guidance with responsible NCTC and FBI Counterterrorism Division officials. However, we did not audit or evaluate agencies’ compliance with the guidance. For instance, we did not review or assess the derogatory information related to terrorist watch list records, partly because such information involved ongoing counterterrorism investigations. Also, a primary agency that collects information on known or suspected terrorists—the Central Intelligence Agency—declined to meet with us or provide us with documentation on its watch list-related activities. From TSC, we obtained statistics on the number of positive encounters, that is, the number of times that individuals have been positively matched during screening against terrorist watch list records. Generally, the statistics cover the period from December 2003 (when TSC began operations) through May 2007. To the extent possible on the basis of available information, we worked with the applicable agencies (particularly the FBI, CBP, TSA, and the Department of State) to quantify the results or outcomes of these positive encounters—which included actions ranging from arrests and visa denials to questioning and releasing individuals. Further, we inquired about the existence and resolution of any issues regarding interagency collaboration in managing encounters with individuals on the terrorist watch list. 
Moreover, in our interviews with officials at TSC and the frontline screening agencies and in the law enforcement and intelligence communities, we obtained perspectives on whether (and how) watch list screening has enhanced the U.S. government’s counterterrorism efforts. We determined from TSC what subsets of records from the consolidated watch list are exported for use by the respective frontline screening agencies and law enforcement. Each day, TSC exports subsets of the consolidated watch list to federal government databases used by agencies that conduct terrorism-related screening. Specifically, we focused on exports of records to the following agencies’ databases: Department of Homeland Security’s Interagency Border Inspection System. Among other users, CBP officers use the Interagency Border Inspection System to screen travelers entering the United States at international ports of entry, which include land border crossings along the Canadian and Mexican borders, sea ports, and U.S. airports for international flight arrivals. Department of State’s Consular Lookout and Support System. This system is the primary sensitive but unclassified database used by consular officers abroad to screen the names of visa applicants to identify terrorists and other aliens who are potentially ineligible for visas based on criminal histories or other reasons specified by federal statute. FBI’s Violent Gang and Terrorist Organization File. This file, which is a component of the FBI’s National Crime Information Center, is accessible by federal, state, and local law enforcement officers for screening in conjunction with arrests, detentions, or other criminal justice purposes. TSA’s No Fly and Selectee lists. TSA provides updated No Fly and Selectee lists to airlines for use in prescreening passengers. Through the issuance of security directives, the agency requires that airlines use these lists to screen passengers prior to boarding. The scope of our work included inquiries regarding why only certain records are exported for screening rather than use of the entire consolidated watch list by all agencies. At TSC and the frontline screening agencies, we interviewed senior officials and we reviewed mission responsibilities, standard operating procedures, and documentation regarding the technical capabilities of the respective agency’s database. We inquired about incidents of subjects of watch list records who were able to pass undetected through screening conducted by the various frontline screening agencies or, at TSA direction, airlines. More specifically, we reviewed available documentation and interviewed senior officials at the FBI, CBP, TSA, U.S. Citizenship and Immigration Services, and the Department of State regarding the frequency of such incidents and the causes, as well as what corrective actions have been implemented or planned to address vulnerabilities. Regarding actions taken by the U.S. government to ensure the effective use of the watch list, we reviewed Homeland Security Presidential Directive 6 and Homeland Security Presidential Directive 11, which address the integration and use of screening information and comprehensive terrorist-related screening procedures. Generally, these directives require federal departments and agencies to identify all appropriate opportunities or processes that should use the terrorist watch list. We did not do an independent evaluation of whether all screening opportunities were identified. 
Rather, to determine the implementation status of these directives, we reviewed available documentation and interviewed senior officials at the Departments of Homeland Security, Justice, and State, as well as TSC and the Social Security Administration. Our inquiries covered domestic screening opportunities within the federal community and critical infrastructure sectors of private industry. Further, our inquiries covered international opportunities, that is, progress made in efforts to exchange terrorist watch list information with trusted foreign partners on a reciprocal basis. Finally, we compared the status of watch list-related strategies, planning, and initiatives with the expectations set forth in Homeland Security Presidential Directive 6 and Homeland Security Presidential Directive 11. The Homeland Security Council—which is chaired by the Assistant to the President for Homeland Security and Counterterrorism—denied our request for an interview. Regarding statistical information we obtained from TSC and screening agencies—such as the number of positive matches and actions taken—we discussed the sources of the data with agency officials and reviewed documentation regarding the compilation of the statistics. We determined that the statistics were sufficiently reliable for the purposes of this review. We did not review or assess the derogatory information related to terrorist watch list records, primarily because such information involved ongoing counterterrorism investigations or intelligence community activities. We performed our work on the restricted version of this report from April 2005 through September 2007 in accordance with generally accepted government auditing standards.
Appendix II: Homeland Security Presidential Directive/HSPD-6 (Sept. 16, 2003)
Appendix III: Homeland Security Presidential Directive/HSPD-11 (Aug. 27, 2004)
This appendix presents details on the outcomes of screening agency encounters with individuals on the terrorist watch list. Specifically, the following sections provide information on arrests and other outcomes of encounters involving the Department of State, TSA, CBP, and state or local law enforcement. According to TSC data, for the period December 2003 through May 2007, agencies reported arresting subjects of watch list records for various reasons hundreds of times, such as the individual having an outstanding arrest warrant or the individual’s behavior or actions during the encounter. For this period, TSC data also indicated that some of the arrests were based on terrorism grounds. For example, according to TSC, in November 2004, the subject of a watch list record was encountered at the El Paso, Texas, border crossing by CBP and U.S. Immigration and Customs Enforcement agents and subsequently arrested as a result of their interview with the person. According to TSC, the arrest was done in conjunction with the FBI on grounds of material support to terrorism. In January 2007, TSC officials told us that—because of the difficulty in collecting information on the basis of arrests—the center has changed its policy on documentation of arrests and no longer categorizes arrests as terrorism-related. As such, the number of times individuals on the watch list have been arrested based on terrorism grounds is no longer being tracked. U.S. 
consulates and embassies around the world are required to screen the names of all visa applicants against the Department of State’s Consular Lookout and Support System and to notify TSC when the applicant’s identifying information matches or closely matches information in a terrorist watch list record. For positive matches, officials at Department of State headquarters are to review available derogatory information and provide advice to the consular officer, who is responsible for deciding whether to grant or refuse a visa to the applicant under the immigration laws and regulations of the United States. According to TSC data, when visa applicants were positively matched to terrorist watch list records, the outcomes included visas denied, visas issued (because the consular officer did not find any statutory basis for inadmissibility), and visa ineligibility waived. The Department of State described several scenarios under which an individual on the terrorist watch list might still be granted a visa. According to the department, visas can be issued following extensive interagency consultations regarding the individuals who were matched to watch list records. The department explained that the information that supports a terrorist watch list record is often sparse or inconclusive. It noted, however, that having these records exported to the Consular Lookout and Support System provides an opportunity for a consular officer to question the alien to obtain additional information regarding potential inadmissibility. For instance, there might be a record with supporting information showing that the person attended a political rally addressed by radical elements. According to the Department of State, while this activity may raise suspicion about the individual, it also requires further development and exploration of the person’s potential ability to receive a visa. Thus, using watch list records allows the department to develop information and pursue a thorough interagency vetting process before coming to a final conclusion about any given prospective traveler who is the subject of a watch list record. Further, individuals can receive a waiver of inadmissibility from the Department of Homeland Security. According to the Department of State, there may be U.S. government interest in issuing a visa to someone who has a record in the terrorist watch list and who may have already been found ineligible for a visa or inadmissible to the United States. For instance, an individual might be a former insurgent who has become a foreign government official. This person might be invited to the United States to participate in peace talks under U.S. auspices. According to the Department of State, in such a case, the visa application would go through normal processing, which would include a review of the derogatory information related to the terrorist watch list record. This information, along with the request for a waiver, would be passed to the Department of Homeland Security, which normally grants waivers recommended by the Department of State. Another scenario under which an individual on the terrorist watch list might still be granted a visa involves instances where a watch list record is not exported to the Department of State’s Consular Lookout and Support System. 
According to the department, originating agencies that nominate terrorist watch list records occasionally ask TSC to not export a record to the Department of State’s system for operational reasons, such as to not alert the individuals about an ongoing investigation. In this case, if a terrorist watch list record is not exported to the Consular Lookout and Support System database, a consular officer will not be notified of the record and may otherwise proceed in adjudicating the visa without consulting Department of State officials in Washington, D.C. TSA requires aircraft operators to screen the names of all passengers against extracts from TSC’s consolidated watch list to help ensure that individuals who pose a threat to civil aviation are denied boarding or subjected to additional screening before boarding, as appropriate. Specifically, TSA provides the No Fly and Selectee lists to airlines for use in prescreening passengers. According to TSA policy, if a situation arises in which a person on the No Fly list is erroneously permitted to board a flight, upon discovery, that flight may be diverted to a location other than its original destination. According to TSA data, when airline passengers were positively matched to the No Fly or Selectee lists, the vast majority of matches were to the Selectee list. Other outcomes included individuals matched to the No Fly list and denied boarding (did not fly) and individuals matched to the No Fly list after the aircraft was in-flight. Regarding the latter, TSA officials explained that there have been situations in which individuals on the No Fly list have passed undetected through airlines’ prescreening of passengers and flew on international flights bound to or from the United States. These individuals were subsequently identified in-flight by other means—specifically, screening of passengers conducted by CBP. CBP officers at U.S. ports of entry use the Interagency Border Inspection System to screen the names of individuals entering the United States against terrorist watch list records. Specifically, all individuals entering the United States at seaports and U.S. airports for international flight arrivals are to be checked against watch list records. At land border ports of entry, screening against watch list records depends on the volume of traffic and other operational factors. While U.S. citizens who have left the United States and seek to reenter may be subjected to additional questioning and physical screening to determine any potential threat they pose, they may not be excluded and must be admitted upon verification of citizenship (for example, by presenting a U.S. passport). Alien applicants for admission are questioned by CBP officers, and their documents are examined to determine admissibility based on requirements of the Immigration and Nationality Act. For nonimmigrant aliens who are positively matched to a terrorist watch list record, officials at CBP are to review available derogatory information related to the watch list record and advise port officers regarding whether sufficient information exists to refuse admission under terrorism or other grounds. CBP officers at ports of entry are ultimately responsible for making determinations regarding whether an individual should be admitted or denied entry into the United States. According to CBP policies, CBP officers at the port of entry are required to apprise the local FBI Joint Terrorism Task Force and the local U.S. 
Immigration and Customs Enforcement of all watch list encounters, regardless of the individual’s citizenship and whether or not the person is refused admission into the United States. If the individual is a U.S. citizen or an admitted non-citizen, CBP officers at the port are to apprise the local Joint Terrorism Task Force of any suspicions about the person after questioning, in order to permit post-entry investigation or surveillance. According to CBP data, a number of nonimmigrant aliens encountered at U.S. ports of entry were positively matched to terrorist watch list records. For many of the encounters, CBP determined there was sufficient derogatory information related to the watch list records to preclude admission under terrorism grounds in the Immigration and Nationality Act, and the individuals were refused entry. However, for most of the encounters, CBP determined there was not sufficient derogatory information related to terrorist watch list records to refuse admission on terrorism-related grounds in the Immigration and Nationality Act. According to CBP, the center did not know how many times these encounters ultimately resulted in individuals being admitted or denied entry into the United States. The officials explained that after in-depth questioning and inspection of travel documents and belongings, CBP officers could still have refused individuals the right to enter the United States based on terrorism-related or other grounds set forth in the Immigration and Nationality Act, such as immigration violations. To assist state and local officials during encounters, all watch list records in the FBI’s Violent Gang and Terrorist Organization File contain a specific category or handling code and related instructions about actions that may be taken in response to a positive watch list encounter. These actions may include—in appropriate and lawfully authorized circumstances—arresting, detaining, or questioning and then releasing the individual. State and local officials are to contact TSC when the names of individuals queried match or closely match a terrorist watch list record in the Violent Gang and Terrorist Organization File. For positive or inconclusive matches, TSC is to refer the matter to the FBI’s Counterterrorism Division, which provides specific instructions to state and local officials about appropriate actions that may be taken or questions that should be asked. According to TSC data, state or local law enforcement officials have encountered individuals who were positively matched to terrorist watch list records in the Violent Gang and Terrorist Organization File thousands of times. Although data on the actual outcomes of these encounters were not available, the vast majority involved watch list records that indicated that the individuals were released, unless there were other reasons for arresting or detaining the individual.

The Federal Bureau of Investigation’s (FBI) Terrorist Screening Center (TSC) maintains a consolidated watch list of known or appropriately suspected terrorists and sends records from the list to agencies to support terrorism-related screening. Because the list is an important tool for combating terrorism, GAO examined (1) standards for including individuals on the list, (2) the outcomes of encounters with individuals on the list, (3) potential vulnerabilities and efforts to address them, and (4) actions taken to promote effective terrorism-related screening. 
To conduct this work, GAO reviewed documentation obtained from and interviewed officials at TSC, the FBI, the National Counterterrorism Center, the Department of Homeland Security, and other agencies that perform terrorism-related screening. The FBI and the intelligence community use standards of reasonableness to evaluate individuals for nomination to the consolidated watch list. In general, individuals who are reasonably suspected of having possible links to terrorism--in addition to individuals with known links--are to be nominated. As such, being on the list does not automatically prohibit, for example, the issuance of a visa or entry into the United States. Rather, when an individual on the list is encountered, agency officials are to assess the threat the person poses to determine what action to take, if any. As of May 2007, the consolidated watch list contained approximately 755,000 records. From December 2003 through May 2007, screening and law enforcement agencies encountered individuals who were positively matched to watch list records approximately 53,000 times. Many individuals were matched multiple times. The outcomes of these encounters reflect an array of actions, such as arrests; denials of entry into the United States; and, most often, questioning and release. Within the federal community, there is general agreement that the watch list has helped to combat terrorism by (1) providing screening and law enforcement agencies with information to help them respond appropriately during encounters and (2) helping law enforcement and intelligence agencies track individuals on the watch list and collect information about them for use in conducting investigations and in assessing threats. Regarding potential vulnerabilities, TSC sends records daily from the watch list to screening agencies. However, some records are not sent, partly because screening against them may not be needed to support the respective agency's mission or may not be possible due to the requirements of computer programs used to check individuals against watch list records. Also, some subjects of watch list records have passed undetected through agency screening processes and were not identified, for example, until after they had boarded and flew on an aircraft or were processed at a port of entry and admitted into the United States. TSC and other federal agencies have ongoing initiatives to help reduce these potential vulnerabilities, including efforts to improve computerized name-matching programs and the quality of watch list data. Although the federal government has made progress in promoting effective terrorism-related screening, additional screening opportunities remain untapped--within the federal sector, as well as within critical infrastructure components of the private sector. This situation exists partly because the government lacks an up-to-date strategy and implementation plan for optimizing use of the terrorist watch list. Also lacking are clear lines of authority and responsibility. An up-to-date strategy and implementation plan, supported by a clearly defined leadership or governance structure, would provide a platform to establish governmentwide screening priorities, assess progress toward policy goals and intended outcomes, consider factors related to privacy and civil liberties, ensure that any needed changes are implemented, and respond to issues that hinder effectiveness. |
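To make the screening concept summarized above more concrete, the sketch below (in Python) shows, in highly simplified form, how an encountered individual's identifying information might be checked against an exported subset of watch list records and flagged for identity resolution. The record fields, the traveler, the similarity threshold, and the matching logic are all illustrative assumptions; they are not TSC's or any screening agency's actual data structures or name-matching software.

    from dataclasses import dataclass
    from difflib import SequenceMatcher

    # Hypothetical, simplified watch list record; real records carry far more
    # identifying and derogatory information than is shown here.
    @dataclass
    class WatchListRecord:
        name: str
        date_of_birth: str         # "YYYY-MM-DD"
        handling_instruction: str  # e.g., guidance on contacting TSC

    def name_similarity(a: str, b: str) -> float:
        """Crude string similarity. Operational systems use far more
        sophisticated name matching (aliases, transliteration, phonetics)."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def screen_traveler(name, dob, exported_subset, threshold=0.85):
        """Return records from the exported subset whose identifying
        information matches or closely matches the traveler's. A non-empty
        result is only a potential match to be resolved with TSC, not
        confirmation that the traveler is the watch-listed individual."""
        return [record for record in exported_subset
                if dob == record.date_of_birth
                and name_similarity(name, record.name) >= threshold]

    if __name__ == "__main__":
        # Entirely fictional record and traveler, for illustration only.
        subset = [WatchListRecord("John Q. Example", "1970-01-01",
                                  "Contact TSC; do not detain on this record alone")]
        hits = screen_traveler("Jon Q Example", "1970-01-01", subset)
        print("Potential match; refer to TSC for identity resolution" if hits
              else "No match against the exported subset")

Even in this toy form, the sketch reflects a point the report makes repeatedly: a hit against the exported subset is only a potential match that must be resolved with TSC, and records that are never exported to, or cannot be accepted by, a screening database are never checked at all.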
Under Presidential Decision Directive (PDD) 39 (U.S. Policy on Counterterrorism, June 1995), the National Security Council (NSC) is to coordinate interagency terrorism policy issues and review ongoing crisis operations and activities concerning foreign terrorism and domestic terrorism with significant foreign involvement. An NSC-chaired coordinating group is to ensure the PDD is implemented but does not have authority to direct agencies’ activities. Among its general mission responsibilities, the Office of Management and Budget (OMB) is to evaluate the effectiveness of agency programs, policies, and procedures; assess competing funding demands among agencies; set funding priorities; and develop better performance measures and coordinating mechanisms. Further, according to PDD 39, OMB is to analyze the adequacy of funding for terrorism-related programs and ensure the adequacy of funding for research, development, and acquisition of counterterrorism-related technology and systems on an ongoing basis. Under PDD 39, the State Department and the Department of Justice, through the Federal Bureau of Investigation (FBI), have lead federal agency responsibility for dealing with terrorist incidents overseas and domestically, respectively. Numerous federal departments, agencies, bureaus, and offices also have terrorism-related programs and activities that are funded through annual and supplemental appropriations. (See app. I for a list of federal entities with terrorism-related programs and activities.) Terrorism-related funding requests include nearly $290 million provided under the 1995 Emergency Supplemental Appropriations Act (P.L. 104-19) in the aftermath of the domestic terrorist attack in Oklahoma City and $1.1 billion proposed for counterterrorism programs within a number of agencies in fiscal year 1996 supplemental appropriations and fiscal year 1997 budget amendments. The Government Performance and Results Act (Results Act) of 1993 is intended to improve the management and accountability of federal agencies. The Results Act seeks to shift the focus of federal management and decision-making from activities that are undertaken to the results of activities as reflected in citizens’ lives. Specifically, it requires federal agencies to prepare multiyear strategic plans and annual performance plans, establish program performance measures and goals, and provide annual performance reports to the Congress. Agencies submitted the first strategic plans to OMB and the Congress by September 30, 1997; the first annual performance plans, covering fiscal year 1999, are to be submitted to the Congress after the President’s budget submission in 1998. In recent years, several efforts have been undertaken to coordinate federal programs that cut across agencies to help ensure that national needs are being effectively targeted. These efforts have shown that coordinating crosscutting programs takes time and sustained attention and, because of the statutory bases of crosscutting programs, may require congressional involvement to integrate the federal response to national needs. With the large number of government entities involved, the federal effort to combat terrorism is one example of a crosscutting program to which Results Act principles and measures might be applied. Federal agencies are not required to account separately for their terrorism-related programs and activities. 
Because most federal agencies do not isolate or account specifically for terrorism-related funding, it is difficult to determine how much the government budgets and spends to combat terrorism. Key agencies provided us their estimates of terrorism-related spending, using their own definitions. These estimates totaled nearly $7 billion for unclassified programs and activities for fiscal year 1997, and should be considered a minimum estimate of federal spending for unclassified terrorism-related programs and activities. The amounts for governmentwide terrorism-related funding and spending are uncertain because (1) definitions of antiterrorism and counterterrorism vary from agency to agency; (2) in most cases agencies do not have separate budget line items for terrorism-related activities; (3) some agency functions serve more than one purpose, and it is difficult to allocate costs applicable to terrorism alone (e.g., U.S. embassy security measures protect not only against terrorism but also against theft, compromise of classified documents, and violent demonstrations); (4) some agencies, such as the Departments of Energy and Transportation, have decentralized budgeting and accounting functions and do not aggregate terrorism-related funding agencywide; (5) programs and activities may receive funding from more than one appropriation within a given agency, which makes it difficult to track collective totals; and (6) appropriations legislation often is not clear regarding which amounts are designated to combat terrorism. At our request, the primary agencies leading or supporting operational crisis response and management activities under PDD 39 provided spending data for fiscal years 1994 to 1996 (not all agencies were able to provide historical data prior to fiscal year 1996) and estimates for fiscal year 1997 (see table 1). Figure 1 indicates that DOD spent the largest share of estimated terrorism-related funds for fiscal year 1997, followed by the Department of Energy. While estimated spending by DOD and the Department of Energy accounted for 76 percent of the unclassified fiscal year 1997 terrorism-related funds, other agencies’ resources dedicated to combating terrorism have significantly increased in recent years. For example, FAA resources tripled (in current dollars) during fiscal years 1994-97, and FBI resources increased five-fold. FAA increased equipment purchases and aviation security operations, and the FBI nearly tripled the authorized staffing level dedicated to combating terrorism, with the largest staff increase occurring in fiscal year 1997. There is no interagency mechanism to centrally manage funding requirements and requests to ensure an efficient, focused governmentwide application of federal funds to numerous agencies’ programs designed to combat terrorism. Given the high national priority and magnitude of this nearly $7-billion federal effort, sound management principles dictate that (1) governmentwide requirements be prioritized to meet the objectives of national policy and strategy and (2) spending and program data be collected from the federal agencies involved to conduct annual, crosscutting evaluations of their funding requests based on the threat and risk of terrorist attack and to avoid duplicated efforts or serious funding gaps. Neither NSC nor OMB currently performs these functions for the governmentwide program to combat terrorism. 
Rather, each agency is responsible for identifying and seeking funding for its priorities within its own budget allocation, and OMB reviews the budget requests on an agency-by-agency basis. Because individual agencies continue to propose new programs, activities, and capabilities to combat terrorism, annual crosscutting evaluations of agency budget requests for such programs would be prudent to help avoid duplicated efforts. Under PDD 39, NSC is to ensure the federal policy and strategy for combating terrorism is implemented. Although PDD 39 establishes interagency coordinating and working groups under the auspices of NSC to handle policy and operational issues related to combating terrorism, these groups operate on a consensus basis, do not have decision-making authority, and do not establish governmentwide resource priorities for combating terrorism. Moreover, PDD 39 does not assign responsibility to NSC to ensure that terrorism-related requirements and related funding proposals (1) are analyzed and reviewed to ensure they are based on a validated assessment of the terrorism threat and risks of terrorist attack, (2) provide a measured and appropriate level of effort across the federal government, (3) avoid duplicative efforts and capabilities, and (4) are prioritized governmentwide in a comprehensive strategy to combat the terrorist threat. PDD 39 requires OMB to analyze the adequacy of funding for terrorism-related programs, technology, and systems. Further, OMB’s general mission responsibilities include evaluating the effectiveness of federal programs and policies, assessing competing funding demands, and setting funding priorities. However, PDD 39 does not specifically require OMB to prioritize terrorism-related requirements governmentwide or to gather funding data across agencies and perform the crosscutting analyses of agencies’ funding proposals necessary to ensure the efficient use of federal resources. OMB examiners who review individual agencies’ terrorism-related funding requests explained that although they do not review activities and programs to combat terrorism on a crosscutting basis as such, they often discuss funding issues with each other during their reviews. Further, they bring issues they identify during their reviews to the attention of senior OMB officials. For example, OMB said it reviewed the FBI’s funding requests for a hazardous materials laboratory capability and for increased staffing to combat terrorism. However, because OMB did not provide evidence of its reviews, we could not verify the extent to which OMB considered the capabilities of other federal laboratories or analyzed the FBI’s request for increased staffing based on workload data and on the threat and risk of terrorism. Further, because terrorism-related funding requirements and proposals have not been prioritized across agencies, OMB could not have fully considered tradeoffs among competing demands. For this reason, it is unclear, for example, whether OMB’s denial of an FBI request for an aircraft that the FBI said was required for counterterrorism and other operations was based on an assessment of terrorism-related priorities across the government or of only the FBI’s funding requests. 
OMB stated that in addition to its examination of agencies’ funding requests, it has met its responsibilities under PDD 39 by reviewing DOD’s counterterrorism program baseline funding and program submission, participating in interagency meetings designed to better identify terrorism-related budget functions that are imbedded in broader funding accounts, and reviewing specific technology proposals (such as FAA proposals for explosives detection technology). Also, consistent with its role, OMB prepared the President’s $1.1-billion request for terrorism-related programs and activities. We submitted a letter of inquiry to OMB to obtain information about OMB’s role in reviewing federal agencies’ budget requests and spending to combat terrorism. Our questions and OMB’s written response appear in appendixes II and III, respectively. While OMB said that it analyzes individual agencies’ funding requests—and some examiners say they share information during their examinations—OMB does not regularly perform crosscutting analyses of requirements, priorities, and funding for the overall federal effort to combat terrorism. Consequently, OMB cannot provide reasonable assurance that specific federal activities and programs to combat terrorism (1) are required based on a full assessment of the threat and risk involved, (2) avoid unnecessary duplication of effort or capability with other agencies, and (3) meet governmentwide priorities for effectively and efficiently implementing the national strategy on combating terrorism. Section 1501 of the recently enacted National Defense Authorization Act for Fiscal Year 1998 requires OMB to establish a reporting system for executive agencies on the budgeting and expenditure of funds for counterterrorism and antiterrorism programs and activities. The section also requires OMB, using the reporting system, to collect agency budget and expenditure information on these programs and activities. Further, the President is required to submit an annual report to the Congress containing agency budget and expenditure information on counterterrorism and antiterrorism programs and activities. The report is also to identify any priorities and any duplication of efforts with respect to such programs and activities. The Results Act requires each executive branch agency to define its mission and desired outcomes, measure performance, and use performance information to ensure that programs meet intended goals. However, the national policy, strategy, programs, and activities to combat terrorism cut across agency lines. The act’s emphasis on results implies that federal programs contributing to the same or similar outcomes should be closely coordinated to ensure that goals are consistent and that program efforts are mutually reinforcing. Effective implementation of the act governmentwide should eventually help prevent uncoordinated crosscutting program efforts that can waste funds and limit the overall effectiveness of the federal effort. The principles underlying the Results Act provide guidance that the many federal agencies responsible for combating terrorism can use to develop coordinated goals, objectives, and performance measures and to improve the management of individual agency and overall federal efforts to combat terrorism. For example, the act focuses on clarifying missions, setting program goals, and measuring performance toward achieving those goals. 
In our work examining implementation of the Results Act, we identified several critical issues that need to be addressed if the act is to succeed in improving management of crosscutting program efforts by ensuring that those programs are appropriately and substantively coordinated. As their implementation of the Results Act continues to evolve, agencies with terrorism-related responsibilities may become more aware of the potential for and desirability of coordinating performance plans, goals, and measures for their crosscutting activities and programs. The next phase of implementation of the Results Act requires agencies to develop annual performance plans that are linked to their strategic plans. These plans are to contain annual performance goals, performance measures to gauge progress toward achieving the goals, and the resources agencies will need to meet their goals. The development of annual plans may provide the many federal agencies responsible for combating terrorism the next opportunity to develop coordinated goals, objectives, and performance measures for programs and activities that combat terrorism and to articulate how they plan to manage this crosscutting program area. The Economy Act of 1932 (31 U.S.C. 1535, as amended) generally requires federal agencies to reimburse other federal agencies that provide them with support. However, PDD 39 states that federal agencies providing support to lead agencies’ counterterrorist operations or activities must bear the cost unless otherwise directed by the President. Because the Economy Act and PDD 39 differ in their treatment of reimbursement, DOD and the FBI have disagreed on whether the FBI must reimburse DOD for its support of counterterrorist operations. Primary examples of DOD support involve air transportation to return terrorists from overseas locations or other deployments of FBI personnel and equipment for special events or for the investigation of terrorist incidents. DOD officials stated that PDD 39 does not have the force of statutory authority regarding whether or not DOD’s support to another agency is reimbursable. These officials believe the Economy Act requires DOD to provide the requested support on a reimbursable basis unless another statute allows for nonreimbursable support. Every request for DOD support requires a legal determination of which statutes are applicable and whether the Economy Act applies. DOD believes that PDD 39 does not control the legal determination of reimbursement. The issue of reimbursement has caused two concerns within the FBI: (1) the potential impairment of its operations under PDD 39 or other authorities and (2) the availability of funding for operations under PDD 39 if DOD does not provide nonreimbursable support. According to the FBI, DOD ultimately provides nonreimbursable support in most cases, but delays and uncertainties involved in DOD’s decision process on reimbursement frequently threaten timely FBI deployments. DOD officials cited an example of the process it follows when the FBI, through the Attorney General, requests support under PDD 39. In response to an Attorney General request that DOD provide air transportation for FBI personnel and equipment to prepare for the June 1997 Summit of the Eight in Denver, Colorado, DOD identified a statute that allowed nonreimbursable support regarding the provision of security to foreign dignitaries. Otherwise, the Economy Act would have required the FBI to reimburse DOD for the transportation costs. 
In an attempt to alleviate concern and confusion over reimbursement of support activities, NSC tasked a special working group on interagency operations to explore solutions. According to NSC, possible solutions include legislation to provide DOD with special authority to provide nonreimbursable support or to set aside contingency funds for domestic emergency support team activities. The Department of Justice commented that DOD-provided transportation services and assistance provided in response to terrorist activities involving a weapon of mass destruction should be exempt from the requirements of the Economy Act. DOD commented that it is also considering various legislative options to permit nonreimbursable support for counterterrorism operations. At the time of our review, the issue remained unresolved. Billions of dollars are being spent by numerous agencies with roles or potential roles in combating terrorism, but because no federal entity has been tasked to collect such information across the government, the specific amount is unknown. Further, no governmentwide spending priorities for the various aspects of combating terrorism have been set, and no federal entity manages the crosscutting program to channel resources where they are most needed in consideration of the threat and the risk of terrorist attack and to prevent wasteful spending that might occur from unnecessary duplication of effort. Recent legislation requires that OMB establish a reporting system for executive agencies on the budgeting and expenditure of funds for counterterrorism and antiterrorism programs and activities and that the President report this information annually to the Congress, along with program priorities and any duplication of effort. We recommend that consistent with the responsibility for coordinating efforts to combat terrorism, the Assistant to the President for National Security Affairs, NSC, in consultation with the Director, OMB, and the heads of other executive branch agencies, take steps to ensure that (1) governmentwide priorities to implement the national counterterrorism policy and strategy are established; (2) agencies’ programs, projects, activities, and requirements for combating terrorism are analyzed in relation to established governmentwide priorities; and (3) resources are allocated based on the established priorities and assessments of the threat and risk of terrorist attack. To ensure that federal expenditures for terrorism-related activities are well-coordinated and focused on efficiently meeting the goals of U.S. policy under PDD 39, we recommend that the Director, OMB, use data on funds budgeted and spent by executive departments and agencies to evaluate and coordinate projects and recommend resource allocation annually on a crosscutting basis to ensure that governmentwide priorities for combating terrorism are met and programs are based on analytically sound threat and risk assessments and avoid unnecessary duplication. In a draft of this report we also recommended that the Director, OMB, establish a governmentwide mechanism for reporting expenditures to combat terrorism. We deleted that recommendation in view of the requirements of the recently enacted legislation. Our remaining recommendations are consistent with and complement this legislation. In written comments on a draft of this report, the Department of Defense concurred with our findings. 
DOD noted that we identified a significant issue involving reimbursement for, and the provision of, DOD support to other federal agencies under PDD 39. DOD commented that although PDD 39 states that the cost of support provided by a federal agency to the lead federal agency for counterterrorist operations is to be borne by the providing agency, PDD 39 is not a statute and does not provide authority to waive reimbursement that is required by the Economy Act. DOD also discussed in its comments specific legislative options it is considering to resolve the issue. (DOD’s comments and our response are in app. IV.) In its written comments, the State Department pointed out that, although interagency funding requirements for combating terrorism are not managed by any single mechanism, overall counterterrorism and antiterrorism spending is discussed by NSC’s Coordinating Sub-Group and interagency coordination occurs in other contexts. We agree that interagency coordination occurs in various forums in the counterterrorism community, but such coordination mechanisms do not perform the functions we are recommending to NSC and OMB. State also highlighted the difficulties of determining the amount of funds spent to combat terrorism with a certain level of precision. We agree that it would be difficult and possibly not cost-effective to account for programs and activities that combat terrorism with a high degree of precision. Nevertheless, at the time of our review, information on federal spending to combat terrorism had not been gathered in any form or at any level of specificity, and we believe that a reasonable methodology could be devised to allow OMB to capture this data governmentwide. State also noted that efforts to coordinate programs and activities and prevent duplication are further complicated by the authorization and appropriations process in the Congress, because various committees have jurisdiction over the federal agencies involved in combating terrorism. State finally noted that it is important to have good working relations with other countries to effectively counter international terrorism. (State’s comments and our response are in app. V.) OMB noted in its written comments that although our recommendations are consistent with policies and responsibilities established by statute and the President, the budget process would not be improved by mandating annual, formal crosscutting reviews of budget requests and spending for federal programs that combat terrorism. OMB also stated that, because of the significant investment in combating terrorism over the past few years, it will include a crosscutting review of these programs in the formulation of the fiscal year 1999 budget. We are encouraged by OMB’s crosscutting evaluation of programs to combat terrorism for the fiscal year 1999 budget submission. Because of the high national priority, the significant federal resources allocated, and the numerous federal agencies, bureaus, and programs involved, we continue to believe that annual crosscutting reviews would provide a mechanism for OMB to better assure that federal resources are aligned with governmentwide program priorities and that funds are not allocated to duplicative activities and functions to combat terrorism. Annual reviews would be particularly important because federal agencies continue to propose funding of new programs, activities, and capabilities to combat terrorism. 
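To illustrate, in the simplest possible terms, what an annual crosscutting roll-up of the kind recommended here might involve, the sketch below (in Python) aggregates hypothetical agency budget submissions by program category and flags categories funded by more than one agency for joint review. The agency names, categories, dollar figures, and overlap rule are invented for illustration; they do not represent OMB's budget data request, its reporting system, or any actual agency data.

    from collections import defaultdict

    # Hypothetical submissions: agency -> list of (program category, $ millions).
    # All names and figures are invented for illustration.
    submissions = {
        "Agency A": [("explosives detection R&D", 40.0), ("physical security", 120.0)],
        "Agency B": [("explosives detection R&D", 25.0), ("crisis response teams", 60.0)],
        "Agency C": [("physical security", 15.0)],
    }

    def crosscut(submissions):
        """Aggregate funding by category across agencies and flag categories
        funded by more than one agency as candidates for joint review."""
        totals = defaultdict(float)
        funders = defaultdict(list)
        for agency, items in submissions.items():
            for category, amount in items:
                totals[category] += amount
                funders[category].append(agency)
        flagged = {c: a for c, a in funders.items() if len(a) > 1}
        return totals, flagged

    if __name__ == "__main__":
        totals, flagged = crosscut(submissions)
        for category, total in sorted(totals.items()):
            print(f"{category}: ${total:.1f} million governmentwide")
        for category, agencies in flagged.items():
            # Overlap is not proof of duplication; it is only a signal that the
            # requests should be examined together against threat and risk.
            print(f"Review jointly: {category} ({', '.join(agencies)})")

A review of this kind would, of course, also have to weigh each request against validated threat and risk assessments and governmentwide priorities, which no amount of arithmetic can substitute for.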
OMB expressed concern that our report suggests that there currently is no effective process to review spending for combating terrorism. We acknowledge OMB’s reviews of individual agencies’ funding requests, but as noted in our report, OMB did not provide evidence of its reviews, in particular of the $1.1-billion fiscal year 1997 amended budget request for combating terrorism. OMB also commented that it carefully considers funding levels for activities to combat terrorism. During the course of our review, OMB could not provide data on funding levels across the federal government for combating terrorism. During the agency comment period on a draft of this report, officials from the Treasury and Justice Departments noted that OMB recently issued a budget data request to gather budgetary and expenditure data from executive agencies for fiscal years 1996-99, which in part satisfies our recommendation to OMB. OMB would not provide a copy of the budget data request because we are not part of the executive branch and it was in the process of being implemented. As a result, we could not verify that the request was issued or determine its content. (OMB’s written comments are in app. VI.) The Departments of Treasury; Justice, including the FBI; and Transportation provided technical comments, which we have reflected in our report, as appropriate. NSC and the Departments of Energy and Health and Human Services did not comment on the draft report. We reviewed PDD 39 to determine agencies’ roles and responsibilities in managing and coordinating resources for combating terrorism. Because data on agencies’ spending for U.S. efforts to combat terrorism are not available from a central source, we obtained from the Departments of Defense; Energy; Justice, including the FBI; State; Transportation (FAA); Treasury; and Health and Human Services data on spending that the agencies categorized as related to their unclassified efforts to combat terrorism. We did not verify the data for accuracy, completeness, or consistency. We discussed with NSC and OMB their respective roles in managing the crosscutting federal effort to combat terrorism, and we also submitted questions to the Director, OMB, on OMB’s role under PDD 39. We discussed reimbursement issues with the FBI and DOD. We conducted our work from November 1996 to October 1997 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 7 days after its issue date. At that time, we will send copies to the appropriate congressional committees; the Director, Office of Management and Budget; other federal agencies discussed in the report; and other interested parties. If you have any questions about this report, please contact me at (202) 512-3504. Major contributors to this report were Davi M. D’Agostino, Richard A. McGeary, H. Lee Purdy, and Raymond J. Wyrsch. The following is GAO’s comment on DOD’s letter dated November 7, 1997. 1. We did not evaluate DOD’s options for proposed legislative changes that would permit nonreimbursable support to law enforcement agencies. The following are GAO’s comments on the Department of State’s letter dated November 3, 1997. 1. While we acknowledge the existence of various interagency coordinating mechanisms within the NSC structure, these mechanisms do not perform the functions we are recommending to NSC and OMB. 
For example, the interagency Technical Support Working Group coordinates only certain terrorism-related research and development projects, and it does not function to eliminate duplicative or redundant terrorism-related research and development across government agencies. 2. We modified the text to reflect the Department’s point that embassy guards help protect against a variety of threats. 3. We agree that it would be difficult and possibly not cost-effective to account for spending to combat terrorism with a high degree of precision. Our report discusses this matter on p. 14. 4. The Department’s concern about reimbursement for the cost of facilities security in U.S. missions abroad was not brought to our attention during our review of funding issues for combating terrorism. As a result, we are not in a position to comment on this matter. 5. The report discusses the State Department position on p. 14. The following are GAO’s comments on OMB’s letter dated November 18, 1997. 1. The report acknowledges that OMB reviews agencies’ individual budget requests, and suggests that this process would be enhanced if federal funding proposals were reviewed on a crosscutting, governmentwide basis. The report also points out that additional steps could be taken to prioritize federal programs and activities to combat terrorism at a strategic level to better ensure priority programs are funded and avoid duplicative and overlapping activities. 2. As discussed on p. 14 of the final report, we are encouraged by OMB’s crosscutting review of programs to combat terrorism as part of the fiscal year 1999 budget process. 3. As discussed on pp. 14-15, in view of the national importance and priority, the significant federal resources allocated, and the numerous federal agencies, bureaus, and programs involved, we continue to believe that governmentwide priorities should be set and annual crosscutting reviews be performed on programs to combat terrorism. As agencies continue to propose new programs, activities, and capabilities, priorities and annual crosscutting reviews are particularly important to better assure that funds are not allocated to duplicative activities and functions to combat terrorism. Combating Terrorism: Federal Agencies’ Efforts to Implement National Policy and Strategy (GAO/NSIAD-97-254, Sept. 26, 1997). Combating Terrorism: Status of DOD Efforts to Protect Its Forces Overseas (GAO/NSIAD-97-207, July 21, 1997). Chemical Weapons Stockpile: Changes Needed in the Management Structure of Emergency Preparedness Program (GAO/NSIAD-97-91, June 11, 1997). State Department: Efforts to Reduce Visa Fraud (GAO/T-NSIAD-97-167, May 20, 1997). Aviation Security: FAA’s Procurement of Explosives Detection Devices (GAO/RCED-97-111R, May 1, 1997). Aviation Security: Commercially Available Advanced Explosives Detection Devices (GAO/RCED-97-119R, Apr. 24, 1997). Terrorism and Drug Trafficking: Responsibilities for Developing Explosives and Narcotics Detection Technologies (GAO/NSIAD-97-95, Apr. 15, 1997). Federal Law Enforcement: Investigative Authority and Personnel at 13 Agencies (GAO/GGD-96-154, Sept. 30, 1996). Aviation Security: Urgent Issues Need to Be Addressed (GAO/T-RCED/NSIAD-96-151, Sept. 11, 1996). Terrorism and Drug Trafficking: Technologies for Detecting Explosives and Narcotics (GAO/NSIAD/RCED-96-252, Sept. 4, 1996). Aviation Security: Immediate Action Needed to Improve Security (GAO/T-RCED/NSIAD-96-237, Aug. 1, 1996). 
Passports and Visas: Status of Efforts to Reduce Fraud (GAO/NSIAD-96-99, May 9, 1996). Terrorism and Drug Trafficking: Threats and Roles of Explosives and Narcotics Detection Technology (GAO/NSIAD/RCED-96-76BR, Mar. 27, 1996). Nuclear Nonproliferation: Status of U.S. Efforts to Improve Nuclear Material Controls in Newly Independent States (GAO/NSIAD/RCED-96-89, Mar. 8, 1996). Aviation Security: Additional Actions Needed to Meet Domestic and International Challenges (GAO/RCED-94-38, Jan. 27, 1994). Nuclear Security: Improving Correction of Security Deficiencies at DOE’s Weapons Facilities (GAO/RCED-93-10, Nov. 16, 1992). Nuclear Security: Weak Internal Controls Hamper Oversight of DOE’s Security Program (GAO/RCED-92-146, June 29, 1992). Electricity Supply: Efforts Underway to Improve Federal Electrical Disruption Preparedness (GAO/RCED-92-125, Apr. 20, 1992). Economic Sanctions: Effectiveness as Tools of Foreign Policy (GAO/NSIAD-92-106, Feb. 19, 1992). State Department: Management Weaknesses in the Security Construction Program (GAO/NSIAD-92-2, Nov. 29, 1991). Chemical Weapons: Physical Security for the U.S. Chemical Stockpile (GAO/NSIAD-91-200, May 15, 1991). State Department: Status of the Diplomatic Security Construction Program (GAO/NSIAD-91-143BR, Feb. 20, 1991). International Terrorism: FBI Investigates Domestic Activities to Identify Terrorists (GAO/GGD-90-112, Sept. 9, 1990). International Terrorism: Status of GAO’s Review of the FBI’s International Terrorism Program (GAO/T-GGD-89-31, June 22, 1989). Embassy Security: Background Investigations of Foreign Employees (GAO/NSIAD-89-76, Jan. 5, 1989). Aviation Security: FAA’s Assessments of Foreign Airports (GAO/RCED-89-45, Dec. 7, 1988). Domestic Terrorism: Prevention Efforts in Selected Federal Courts and Mass Transit Systems (GAO/PEMD-88-22, June 23, 1988). 
Pursuant to a congressional request, GAO reviewed interagency processes intended to ensure the efficient allocation of funding and resources for the federal government's efforts to combat terrorism, focusing on: (1) federal funding for unclassified programs and activities to combat terrorism; (2) whether any agency or entity has been designated to coordinate budget proposals, establish priorities, manage funding requirements, and help ensure the efficient allocation of federal resources for combating terrorism across federal agencies; (3) opportunities for agencies to expand coordination of terrorism-related programs and activities under the Government Performance and Results Act (GPRA) principles and framework; and (4) issues concerning the reimbursement of support provided to agencies with lead counterterrorism responsibilities. GAO noted that: (1) the amount of federal funds being spent on combating terrorism is unknown and difficult to determine; (2) identifying and tracking terrorism-related governmentwide spending with precision is difficult for several reasons; (3) information from key agencies involved in combating terrorism shows that nearly $7 billion was spent for unclassified terrorism-related programs and activities during fiscal year (FY) 1997; (4) the Department of Defense budgeted about $3.7 billion in FY 1997, or about 55 percent of the estimated spending; (5) although the National Security Council (NSC) is to coordinate counterterrorism policy issues and the Office of Management and Budget (OMB) is to assess competing funding demands, neither agency is required to regularly collect, aggregate, and review funding and spending data relative to combating terrorism on a crosscutting, governmentwide basis; (6) neither agency establishes funding priorities for terrorism-related programs across agencies' budgets or ensures that individual agencies' stated requirements have been validated against threat and risk criteria before budget requests are submitted to Congress; (7) because governmentwide priorities for combating terrorism have not been established and funding requirements have not necessarily been validated based on an analytically sound assessment of the threat and risk of a terrorist attack, there is no basis to have reasonable assurance that: (a) agencies' requests are funded through a coordinated and focused approach to implement national policy and strategy; (b) the highest priority requirements are being met; (c) terrorism-related activities and capabilities are not unnecessarily duplicative or redundant; and (d) funding gaps or misallocations have not occurred; (8) GPRA principles and framework can provide guidance and opportunities for the many federal agencies involved in the crosscutting program to combat terrorism to develop coordinated goals, objectives and performance measures, and to enhance the management of individual agency and overall federal efforts; (9) Presidential Decision Directive (PDD) 39 directs that agencies will provide support for terrorism-related activities at their own expense unless the President directs otherwise; (10) the Economy Act generally requires reimbursement for goods and services provided to another agency; and (11) the difference between PDD 39 and the Economy Act concerning reimbursement has caused disagreements between agencies in some cases.
NNSA carries out its nuclear weapons research missions at research laboratories located in two states—California and New Mexico. NNSA and DOE have traditionally relied on contractors to carry out the department’s missions. However, the department’s history of inadequate management and oversight and failure to hold its contractors accountable for results led GAO in 1990 to designate DOE contract management as a high-risk area vulnerable to fraud, waste, abuse, and mismanagement. As of February 2004, this high-risk designation was still in effect. At the Los Alamos and Lawrence Livermore laboratories, about 200 NNSA personnel at laboratory site offices are responsible for oversight of the work performed under contract by over 16,000 employees of the University of California. The contracts with the University provide for reimbursement of all allowable costs plus a fee. The total fee available to the University includes a base or fixed amount that is guaranteed and an “at-risk” amount that is tied to performance measures in the contract. (See table 1.) For more than a decade, GAO and others have reported on problems with the mission support activities at the two laboratories, including project management, nuclear safety management, and facilities management. The problems have included cost and schedule overruns on major projects such as the Dual-Axis Radiographic Hydrodynamic Test Facility at Los Alamos; lack of adequate safety documentation for nuclear facilities; and a deferred maintenance backlog of about $318 million at Lawrence Livermore and $564 million at Los Alamos. In addition, in the early 1990s, we reported on poor controls over business operations such as procurement and property management at the Lawrence Livermore laboratory. For several years, NNSA has chosen to address these performance problems using contract mechanisms. For example, when DOE extended the Los Alamos contract in October 1997, it included a special provision in the contract that would allow DOE to terminate the contract if the University failed to make improvements in several mission support areas—ensuring that workers, the public, and the environment are protected; cleaning up radioactive and hazardous wastes; and maintaining a good relationship with the local community. The department subsequently decided to continue the contract after the Los Alamos laboratory made improvements in these three areas. However, when NNSA extended the contracts for both of the laboratories in January 2001, it included additional requirements to improve project management, facilities management, and nuclear safety management. Oversight of the laboratories occurs at several different levels. NNSA provides direct oversight of the two laboratories through its site offices. In addition, NNSA headquarters staff offices, such as the Offices of Defense Programs and Nonproliferation, provide funding and program direction to the site offices. DOE’s Offices of Independent Oversight and Performance Assurance and Price-Anderson Enforcement also oversee laboratory activities. Finally, the Defense Nuclear Facilities Safety Board, an independent oversight organization created by the Congress in 1988, provides advice and recommendations to the Secretary of Energy to ensure adequate protection of public health and safety at all of the department’s defense nuclear facilities, including the Los Alamos and Lawrence Livermore laboratories.
NNSA and the University have taken a number of steps to address the major mission support problems that were known when NNSA extended the University’s contracts in 2001, but not all actions will be complete until mid-2005. When NNSA decided to extend the contracts for the two laboratories, concerns had emerged in three areas: project management, facilities management, and nuclear safety. For its part, NNSA incorporated into the two contracts new agencywide requirements related to each area, developed performance measures focusing on these activities, and changed its overall structure and approach for overseeing the actions taken at the laboratory level. In response, the University has implemented many of NNSA’s requirements at the two laboratories, particularly in project and facilities management, but progress has been slower for nuclear safety. The University also has taken steps to improve its oversight of the two laboratories and to foster coordination and collaboration between them. One step NNSA took to improve laboratory performance in mission support activities was to incorporate into the contracts new agencywide requirements related to all three of the identified problem areas. In general, these new requirements call for the laboratories to be more disciplined and businesslike in carrying out their management responsibilities. Project Management. In October 2000, DOE approved Order 413.3, which set agencywide standards for managing projects, including NNSA projects. The order is intended to help ensure that projects are delivered on schedule and within budget and are fully capable of meeting mission requirements as well as environmental, safety, and health standards. It requires that all projects costing more than $5 million go through five decision checkpoints, such as approving mission need and approving the start of operations. At each checkpoint, NNSA must make a formal determination to allow the project to proceed. Facilities Management. Many of the facilities at NNSA sites are decades old and in poor condition. For fiscal year 2003, NNSA required each nuclear weapons site to develop a 10-year comprehensive site plan for facilities management that incorporated NNSA’s strategic goals. The main objective of the plans is to restore, rebuild, and revitalize the physical infrastructure of the nuclear weapons complex. Two of NNSA’s specific goals are to stabilize the deferred maintenance backlog by the end of fiscal year 2005 and to reduce the amount of deferred maintenance to within industry standards by the end of fiscal year 2009. Nuclear Safety. DOE has always required that the contractors responsible for a nuclear facility analyze the facility, the work to be performed, and the associated hazards. Contractors use this information to identify the conditions, safe boundaries, and hazard controls necessary to protect workers, the public, and the environment from adverse consequences. Finally, the contractors document the safety requirements and operating procedures for each facility, referred to as the safety basis. Although DOE included these steps in an order for contractors to follow, the laboratories had not consistently performed thorough, high-quality analyses of their nuclear facilities. In January 2001, DOE finalized a revised nuclear safety rule, requiring that contractors responsible for nuclear facilities establish and maintain a safety basis. Contractors may be subject to civil penalties for failing to comply with this requirement.
This revised rule strengthened NNSA’s ability to hold contractors accountable for the safety of nuclear facilities. In addition to incorporating new agencywide requirements into the two contracts, NNSA established new contract mechanisms and performance measures to help ensure that the laboratories put in place management improvements for mission support activities. These measures have changed over time. Initially, NNSA added a number of specific improvement initiatives in a new appendix (appendix O) to the contract, established to address known problem areas and provide a framework for improved results. For the first 2 years of the contract, the laboratories had to meet all of the requirements in appendix O on a pass/fail basis in order to qualify for performance incentive fees. NNSA phased out appendix O at the end of fiscal year 2002 after determining that the laboratories had met all of the provisions and requirements. Starting in fiscal year 2003, NNSA shifted to assessing a “critical few” broadly defined performance objectives. In prior fiscal years, the two laboratory contracts contained dozens of individual performance objectives, and poor performance in one of these individual objectives would have minimal impact on NNSA’s evaluation of overall contractor performance or the amount of fee that could be earned by the contractor. In fiscal year 2003, NNSA narrowed the number of performance objectives in the two contracts to nine objectives that define the mission of the laboratories, including two objectives that cover mission support activities—achieving successful completion of projects and maintaining secure, safe, environmentally sound, effective, and efficient operations and infrastructure in support of mission objectives. In fiscal year 2004, NNSA added a third mission support objective—improving or maintaining effective business systems and practices that safeguard public assets and support mission objectives. The third main step NNSA took was to revise its approach to overseeing the laboratories. NNSA moved additional oversight staff, who had previously been in operations offices located in distant cities, to the laboratories themselves to bring them closer to the laboratories’ day-to-day operations. NNSA also began regular meetings with senior managers at the University and the two laboratories to identify and resolve emerging issues or areas that need improvement. Finally, NNSA designated the senior on-site NNSA representative—the site office manager—as the contracting officer for the laboratory. The contracting officer is the main point of contact and the single point of accountability for the contract. Before this change, the contracting officer was located in the procurement division at the operations office and did not report to the NNSA site office manager. NNSA believed this organizational change would clarify roles and responsibilities, eliminate a layer of management, and provide more effective federal oversight. The University made several changes designed to strengthen management of mission support activities. The first was to redefine the relationship between the two laboratories, making their efforts at managing mission support more collaborative and interactive. Previously, the laboratories had operated more autonomously and competitively—an approach the University fostered as the best means for achieving world-class science.
The recent onset of mission support problems convinced University officials that increased coordination in mission support activities would be beneficial. According to University officials, they wanted to create an environment in which the laboratories would identify problems, share solutions, create best practices, and be more consistent in their approaches to laboratory management. Specific steps the University took to develop a more collaborative relationship between the two laboratories included creating a position of vice president of laboratory management, loaning staff between the two laboratories and from the University as needed, and creating standardized policies. For example, after the Lawrence Livermore laboratory developed a model for a risk-based approach to facilities management, the Lawrence Livermore staff visited Los Alamos to help that laboratory apply the approach to its facilities management. In addition, when the University determined that the two laboratories had different security policies regarding foreign nationals and different definitions of “sensitive property” for property management, it required the laboratories to work out the differences and devise one best practice. The second major change by the University was to provide more outside assistance to the laboratories. The University contracted with firms from private industry for expertise and advice on nuclear facility safety at the Los Alamos laboratory and on project management at both laboratories. In addition, since the early 1990s, a University President’s Council supported by five different panels had been in place to make University expertise available to the laboratories in such areas as project management and environment, safety, and health. In 2001, the University increased the interaction between the Council, the panels, and the laboratories, for example, by assigning a mentor from the project management panel to each of the major projects at the laboratories. In November 2003, the University also created two additional oversight groups to strengthen management and oversight of the two laboratories and to make more expertise available from outside the University. Although NNSA had substantially completed its efforts at incorporating new requirements into the contracts, establishing new performance measures, and modifying its oversight approach, the laboratories’ efforts to implement the new requirements are in various stages of completion. Project management. By June 2002, both laboratories had completed planned actions, including implementing new DOE procedures for managing projects, providing improved training for project managers, and providing improved assistance to project teams. In addition, both laboratories had standardized the formats for monthly reporting on cost and schedule for major projects to make it easier to identify negative performance trends. Facilities management. By October 2002, both laboratories had completed 10-year strategic plans detailing how the laboratories will restore, rebuild, and revitalize their physical infrastructures. The plans incorporated NNSA’s goals of stabilizing deferred maintenance by fiscal year 2005 and reducing deferred maintenance to industry standards by fiscal year 2009. According to the Los Alamos 10-year plan, industry standards set maintenance costs at about 2 to 4 percent of the estimated replacement value of the entire facility. 
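To put this benchmark in concrete terms, consider a purely hypothetical example (the report does not state a replacement value for either laboratory): for a site with an estimated replacement plant value of $5 billion, the 2 to 4 percent industry standard would imply annual maintenance spending of roughly

\[ 0.02 \times \$5\text{ billion} = \$100\text{ million} \quad\text{to}\quad 0.04 \times \$5\text{ billion} = \$200\text{ million}. \]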
To achieve this goal, Los Alamos plans to increase maintenance funding and implement the corrective actions necessary to create a more efficient program. Los Alamos and NNSA officials noted that some of the actions would involve long-term efforts spanning several years. In contrast, since 1998, the Lawrence Livermore laboratory has had a risk-based facility maintenance program to prioritize facility maintenance requirements. In addition, the laboratory began to assess a “laboratory facility charge” per square foot to building occupants, both to fund facility maintenance and to encourage users to give up unnecessary space. As a result, the Lawrence Livermore laboratory has already met NNSA’s fiscal year 2005 goal of stabilizing the deferred maintenance backlog. Nuclear safety. The two laboratories did not submit all required safety analysis documentation by the April 2003 milestone date, and they may not be fully in compliance until mid-2005. Federal regulations required the laboratories, by April 10, 2003, to provide assurance that they could meet new and enhanced nuclear safety requirements. The Los Alamos laboratory initially reported that it had met the deadline for providing that assurance, but later disclosed to NNSA that some radioactive waste sites were not included in the original analyses it had submitted. In December 2003, NNSA and the laboratory agreed that 11 radioactive waste sites were subject to the nuclear safety requirement. NNSA expects the laboratory to provide the safety basis documentation on these sites by April 2004. In addition, NNSA reported in its fiscal year 2003 assessment for the Los Alamos laboratory that the quality of 7 of the 12 safety analyses submitted on time had been unsatisfactory and required revision. As for the Lawrence Livermore laboratory, it completed 5 of the 9 required safety analyses by the April 10, 2003, deadline and requested and received extensions for submitting the remaining analyses. NNSA has granted an extension until April 2005 for the laboratory to submit the safety analysis for the final facility. Lawrence Livermore laboratory officials said they did not meet the original deadline because of resource constraints and changes in the work activities at some of the facilities. Despite the steps taken in 2001 to address mission support problems at the two laboratories, the laboratories have encountered additional problems in mission support activities and are taking further actions to address those problems. New problems in business operations, such as weak controls over purchase cards and property, emerged at the Los Alamos laboratory in 2002, while developing an emergency management program that complied with NNSA requirements continued to be a problem at the Lawrence Livermore laboratory. Beginning in the summer of 2002, a series of problems with business operations at the Los Alamos laboratory surfaced, raising questions about the effectiveness of controls over government purchase cards and property. These events included allegations of fraudulent use of government purchase cards and purchase orders by a few Los Alamos employees, questions about the adequacy of property controls over items such as computers, and disputed rationales for the laboratory’s firing of two investigators who were working on some of these cases.
In an April 2003 report on the problems with business operations at Los Alamos, the department identified multiple causes, including (1) the University’s supervision of business processes at the laboratory was ineffective, (2) NNSA’s oversight was narrowly focused on specific performance measures in the contract rather than on overall effectiveness, and (3) both the University and NNSA may have ignored warning signs of problems because other evaluations of contractor performance did not identify significant weaknesses. The report concluded that the actual loss to the federal government could have been far greater than it actually was, and the business practices in place in 2002 would not have been able to prevent or detect such losses. In addition, the report concluded that the firing of the two investigators was inappropriate and demonstrated the degree to which the laboratory’s management was out of touch with activities at the laboratory. In January 2001, when NNSA extended and modified the contract for the Los Alamos laboratory, NNSA officials were not aware of the problems with business operations at the laboratory. NNSA did not consider business operations at Los Alamos to be an area of concern to include in the ongoing mission support improvement efforts. Furthermore, even after the problems with business operations at Los Alamos emerged in the summer of 2002, NNSA did not include any performance measures for business operations at the start of fiscal year 2003. It was not until February 2003 that NNSA added performance measures to the contract to address the business operations problems. NNSA also modified its method of assessing contractor performance in fiscal year 2003 to provide a more integrated evaluation of performance in business operations. Once the significance of the problems with business operations at the Los Alamos laboratory was known, the University responded with corrective actions. The University made sweeping changes in the Los Alamos management team, replacing, among others, the laboratory director, principal deputy director, chief financial officer, and laboratory auditor. In addition, the University commissioned a series of internal and external reviews to identify further problems and control weaknesses in procurement and property management. These reviews resulted in more than 300 findings and recommendations to improve controls. The laboratory is in the process of implementing more than 600 corrective actions to respond to these recommendations and implement other laboratory initiatives and expects to complete these corrective actions by June 2004. As a fuller understanding of the scope of the problem emerged, managers at the Los Alamos laboratory decided to address the problems with business operations in three phases. Initially, the laboratory is taking hundreds of specific actions in response to identified problems and recommendations. These actions included changing procedures and strengthening internal controls. According to the laboratory managers, the second phase involves a more comprehensive change to business systems by designing and implementing an improved, integrated business computer system that will facilitate both more efficient and more effective business operations. Laboratory officials most recently estimated the cost of this system, currently under development, at about $150 million. 
Laboratory management said the last phase will involve developing and implementing a set of performance goals and measures for business operations so that laboratory management can better track and sustain results in this important area. The laboratory plans on having these additional measures in place by the end of 2005. In April 2003, primarily because of these ongoing problems with business operations, the Secretary of Energy announced his intention to open the Los Alamos contract to competition for the first time, when the current contract expires in September 2005. Although acknowledging the University’s contribution to high caliber science at Los Alamos, the Secretary stated that he held the University responsible for the systemic management failures at the laboratory. The Secretary encouraged the University to enter the contract competition, but made clear that the laboratory’s performance in business services needed to be as good as its performance in science. The Lawrence Livermore laboratory has had difficulty developing an emergency management program that complies with NNSA requirements. Emergency planning consists of identifying hazards, threats, and ways to mitigate hazards; developing and preparing emergency plans and procedures; and identifying personnel and resources needed to ensure effective emergency response. Effective emergency management has been an issue of increasing significance since the terrorist attacks against the United States in September 2001. When NNSA extended the contract in January 2001, NNSA recognized that the emergency management program at the Lawrence Livermore laboratory was not in compliance with DOE Order 151.1, which sets out the program requirements. For example, the laboratory had not included in its previous assessment of hazards and risks even the possibility that a release of materials or other incidents on site would potentially travel off site and affect the local community outside the boundaries of the Livermore site. As a first step toward compliance, NNSA included a requirement in the contract for 2001 that the laboratory prepare and submit for approval hazard assessments for each of its facilities and activities that met the specified thresholds for such assessments. These assessments were submitted on time in May 2001 and were reviewed and approved by NNSA. In the fiscal year 2002 performance measures for the contract, NNSA required the laboratory to develop and implement a plan to achieve substantial compliance with DOE Order 151.1 by September 2003. This plan was to include a schedule and milestones to satisfy all the elements of an emergency management program, such as emergency preparedness training and an emergency public information program. Although the laboratory submitted the first draft of the plan on time early in the fiscal year, the quality of the plan did not meet NNSA expectations, and the laboratory received a marginal rating on this measure for fiscal year 2002. NNSA required the laboratory to revise and resubmit its plan. In its overall evaluation of the laboratory’s performance for that year, NNSA identified implementing an effective emergency management program as one of the three institutional management challenges facing the laboratory. In July of 2002, DOE’s Office of Independent Oversight and Performance Assurance criticized the laboratory’s lack of progress in resolving its problems with emergency management. The review team identified a number of important procedural and performance weaknesses. 
For example, the laboratory did not have clearly defined processes for deciding on the appropriate on-site and off-site protective actions to take. The report concluded that the laboratory faced significant challenges in implementing improvements in the program and that the poor quality of the documents provided to NNSA raised serious questions about the laboratory’s ability to meet the implementation milestones in the plan. Starting in fiscal year 2003, NNSA officials took additional steps to help ensure that the laboratory addressed its emergency management problems. In the fiscal year 2003 and 2004 performance measures in the laboratory contract, emergency management was included as one of the “critical few” measures. According to an NNSA official, this increased the focus of senior management attention on the activity. Prior to fiscal year 2003, performance measures for emergency management were included as part of the overall performance objective of environment, safety, and health and received less emphasis and attention. The laboratory received a satisfactory rating (one step higher than the fiscal year 2002 marginal rating) on the fiscal year 2003 emergency management measure. NNSA now estimates that the laboratory will have an emergency management program that is in substantial compliance with DOE orders by the end of fiscal year 2004. In contrast to the Los Alamos contract, the Secretary had not signaled his intention about whether to extend or compete the Lawrence Livermore laboratory contract. However, in the fiscal year 2004 appropriations act for energy and water development, the Congress required DOE to compete the Lawrence Livermore laboratory contract when the current contract expires in 2005. The statute provides that no appropriated funds for fiscal year 2004 or any previous fiscal year could be used for specified laboratory contracts that had been awarded more than 50 years ago without competition, unless the Secretary of Energy announces a decision to compete the contracts at the end of their current terms. In late January 2004, the Secretary announced the decision to compete the specified laboratory contracts, including those for the Los Alamos and Lawrence Livermore laboratories. NNSA and the University face three main challenges to sustaining improvements over the long term in mission support activities at the two laboratories. These challenges include (1) ensuring that the mission support activities are effectively performed, (2) ensuring that NNSA provides effective oversight of laboratories’ activities, and (3) ensuring that management improvement initiatives such as the improvements to business systems at the Los Alamos laboratory fully address the existing problems and are carried out in a systematic manner consistent with best practices. A major factor leading to the problems with managing mission support activities at the laboratories, including an increased potential for fraud, waste, and abuse, was what one internal DOE assessment referred to as the devaluing of mission support activities by laboratory personnel. Although significant investments in improving mission support activities and controls have subsequently occurred, there are continuing concerns about whether the laboratories will continue to place sufficient emphasis on mission support activities to ensure that these functions are effectively performed. 
Ensuring that actions taken to address mission support problems translate into effective performance of mission support requires establishing and maintaining an effective system of management control. Office of Management and Budget Circular No. A-123 defines management controls as the organization, policies, and procedures used to reasonably ensure that (1) programs achieve their intended results; (2) resources are used consistent with agency missions; (3) programs and resources are protected from waste, fraud, and mismanagement; (4) laws and regulations are followed; and (5) reliable and timely information is obtained, maintained, reported, and used for decision making. Effective management controls require leadership and commitment on the part of management. Internal control standards state that the attitude and philosophy of management toward information systems, accounting, and monitoring can have a profound effect on internal control. The standards require management to establish and maintain an organizational environment that sets a positive and supportive attitude toward internal control. The two laboratories have differed in the degree to which they have been successful in ensuring that mission support activities are effectively performed and in maintaining effective management controls. The Los Alamos laboratory had over time been weakening its management controls over some mission support activities. Responding to pressures to reduce overhead costs and a view that the NNSA laboratories were unnecessarily burdened with administrative and procedural requirements that were not adequately serving mission needs, the laboratory pursued cost efficiencies in mission support activities without sufficient regard for ensuring that the overall management control system was effective. For example, since fiscal year 1995, the Los Alamos laboratory has reduced the relative funding for mission support activities in order to provide more funding to mission activities. The reduced funding contributed to the weakening of business controls as the laboratory scaled back or eliminated steps, such as reviewing small item purchases because fewer staff were available to perform the reviews. The problems with management controls at the Los Alamos laboratory were to some extent due to the laboratory’s organizational culture. An April 2003 DOE report on the business operations problems at the Los Alamos laboratory cited cultural problems as one of the underlying causes of the systemic management failure of business systems at the laboratory. The report stated that the Los Alamos culture exalted science and devalued business practices, and that changing this attitude would be the most difficult long-term challenge facing the laboratory, regardless of who manages it in the future. NNSA and laboratory officials at Los Alamos have stated that the pressures to reduce mission support costs will probably continue, which increases the challenges associated with improving controls and ensuring that mission support activities are effectively performed. In contrast to the problems documented at the Los Alamos laboratory, the Lawrence Livermore laboratory has apparently been more successful in emphasizing the importance of mission support activities and ensuring that these support activities are effectively performed. 
The laboratory encountered similar problems with its business operations in the early 1990s, including weaknesses in procurement and property management, and took steps at that time to improve its financial and accounting systems. Thus, when faced with the same pressures to reduce overhead costs in recent years, the Lawrence Livermore laboratory was better able to accomplish those reductions without significantly degrading the quality or effectiveness of its internal controls. For example, a June 2003 external assessment of business systems at the Lawrence Livermore laboratory identified no material weaknesses in internal control systems but did contain recommendations to enhance management controls in such areas as procurement and property management. In contrast, external reviews of procurement systems at the Los Alamos laboratory identified significant weaknesses in internal controls, such as insufficient policies and procedures and inadequate management. NNSA recently began to address these organizational culture issues and the need to understand the importance of effective management controls. The problems with management controls at the Los Alamos laboratory are similar to the organizational problems documented at the National Aeronautics and Space Administration (NASA) after the Columbia space shuttle accident in February 2003. The independent panel tasked with investigating the causes of the accident reported in August 2003 that NASA’s organizational culture was a contributing factor to the breakdown in management controls intended to ensure safety for the shuttle and its crew. Specifically, the report cited as one of the root causes of the accident the organizational culture at NASA, which emphasized mission rather than safety. In addition, under pressure to reduce costs, the agency had transferred responsibilities to the private sector while reducing federal oversight. As part of its efforts to improve operations at the laboratories, the NNSA Administrator has required all of the senior NNSA managers to review the Columbia Accident Investigation Report for findings that could be applied to NNSA. In addition, the Administrator chartered a task force to perform an in-depth review of the cultural and organizational issues described in the Columbia report and to make recommendations on how the department could improve the effectiveness of mission support functions to ensure the safe performance of high-risk mission work. NNSA officials estimate that the task force report will be available in March 2004. The laboratories have also taken steps to address organizational attitudes about mission support activities. At the Lawrence Livermore laboratory, the director and senior management developed a list of values for the laboratory that includes 10 items deemed to be critical to success. One of the items is “simultaneous excellence in science and technology, operations, and business practices.” At the Los Alamos laboratory, the director established priorities in 2003 to help guide the laboratory’s efforts. The priorities include safety and security, mission, and business operations. The laboratory is also developing a strategic plan that includes both mission and mission support goals and objectives. The University is also exploring other ways to improve management of the two laboratories. In January 2004, the University Board of Regents took steps to allow the University to form partnerships with outside companies for the upcoming competition for the laboratory contracts.
The University’s Vice President for Laboratory Management said that outside partners with strong management and business experience could strengthen its performance in the areas of business operations and other mission support areas. These actions are positive steps toward increasing the awareness of the importance of mission support activities, but for several reasons concerns remain about whether the laboratories will continue to ensure that mission support activities are effectively performed. First, although current senior management at both laboratories supports the importance of having effective mission support activities, there has been a long history of emphasizing mission over mission support. Second, NNSA has also typically placed much more emphasis on mission than mission support and has often failed to detect problems when they existed. For example, the April 2003 DOE report on business operations at the Los Alamos laboratory stated that the NNSA evaluation system in place at Los Alamos failed to consider relationships between different processes at the laboratory and therefore failed to detect overall systemic problems. One of the report’s recommendations was to ensure that NNSA reviews of contractor performance capture cross-cutting information in both mission and mission support areas to form a more complete picture of performance. Furthermore, although much work has been done to implement new mission support requirements and improve management of mission support at the two laboratories, considerable time may be needed to determine the extent to which the actions taken will result in improved performance. For example, some of the efforts, such as the longer-term efforts needed to reduce deferred maintenance to industry standards by fiscal year 2009, will take years to complete. In addition, the Los Alamos laboratory continues to have problems with workers failing to comply with nuclear safety procedures. For fiscal years 2001 through 2003, the Los Alamos laboratory had filed 51 reports on nuclear safety incidents, some resulting in exposures of workers to radiation. Efforts are underway to improve performance in this area, but safety officials at the laboratory acknowledge that some of the improvement efforts may take months or years of sustained effort to complete. The laboratories will also need to ensure that improvement efforts are sustained and effective. NNSA officials, including the senior technical safety advisor at the Los Alamos site office, noted they did not have a high level of confidence in the laboratory’s ability to sustain improvements because the laboratory’s track record in this regard has not been good. For example, a March 2003 report on nuclear safety at the Los Alamos laboratory analyzed 32 corrective actions between fiscal years 1996 and 2003 and concluded that many of the improvement efforts had not been sustained or followed up on, allowing many of the safety violations to recur. NNSA officials, including the assistant manager for business management, also noted that the laboratory was reporting on when corrective actions had been implemented rather than on their effectiveness. For example, in December 2003, Los Alamos reported that it had completed 80 percent of its improvement efforts in procurement but did not report on the effectiveness of those efforts. Based on complaints from vendors and others outside the laboratory, NNSA officials sampled procurement records and found recurring problems. 
The NNSA officials said that they had been in discussions with Los Alamos laboratory officials since September 2003 in efforts to reach agreement on how to assess improvement efforts for effectiveness, but as of December 2003, no agreement had been reached. NNSA’s reliance on contractors to operate its facilities and carry out its missions makes effective oversight of contractor activities crucial to success. In the past, however, oversight of the laboratories’ mission support activities has been inadequate. Both the University and NNSA had failed to ensure that the laboratories’ mission support activities were effective. The University had in general taken a “hands off” approach to overseeing the laboratories. For example, in its April 2003 report on evaluating problems at the Los Alamos laboratory, NNSA stated that prior to November 2002, the University’s oversight of Los Alamos was ineffective in the area of business processes. The report added that the University was slow to respond to allegations of problems with business practices, initially limiting its involvement to providing assistance as requested by the laboratory director and not ensuring that the laboratory director was taking sufficient steps to address the problem. NNSA oversight also was not adequate to identify and address the critical shortcomings in management controls. Regarding the Los Alamos laboratory, in May 2003, DOE’s Office of Inspector General reported that improvements were needed in NNSA’s project management oversight and control. This conclusion was based on the Inspector General’s review of a new facility that will be used to evaluate the effects of aging on the nation’s nuclear weapons stockpile. The report stated that the project did not have a viable baseline and that at least $57 million in cost increases had occurred, but NNSA oversight was inadequate to identify the problem. Also, the April 2003 DOE report on problems at the Los Alamos laboratory stated that NNSA’s direct federal oversight had been narrowly focused on specific performance measures in the contract, rather than on overall effectiveness. Weaknesses in NNSA oversight also occurred at the Lawrence Livermore laboratory. For example, in a May 2003 report on a new waste treatment facility at the laboratory, we concluded that a delay in initiating storage and treatment operations at the new facility occurred because NNSA managers took over a year to resolve disagreements with the laboratory on technical issues affecting the safe operation of the new building for temporarily storing wastes. Providing clear requirements and ensuring that the contractor complies with those standards in a timely manner is part of NNSA’s oversight responsibilities. These past problems with oversight raise concerns about NNSA’s proposed change to its oversight approach. NNSA’s August 2003 draft Line Oversight and Contractors’ Assurance System policy would rely more on contractor oversight and self-assessment and less on NNSA’s direct oversight. The proposal would require a comprehensive contractor assurance system, or system of management controls, to be in place and would primarily rely upon these systems and controls to ensure that its missions and activities are properly executed in an effective, efficient, and safe manner. NNSA would use a risk-based, graded approach to its oversight and tailor the extent of federal oversight to the quality and completeness of the contractor’s assurance systems and to evidence of acceptable contractor performance. 
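To illustrate the graded concept described above, the following minimal sketch shows one way a risk-based oversight determination could be represented. The scoring scale, weights, and thresholds are our own illustrative assumptions; NNSA's draft policy does not specify a scoring method.

    # Hypothetical sketch of a graded, risk-based oversight determination.
    # The inputs, weights, and thresholds are illustrative assumptions only;
    # they are not drawn from NNSA's draft policy.

    def oversight_level(activity_risk, assurance_maturity, past_performance):
        """Suggest a level of federal oversight for a contractor activity.

        activity_risk      -- 1 (low) to 5 (high), e.g., nuclear facility work
        assurance_maturity -- 1 (weak) to 5 (mature contractor assurance system)
        past_performance   -- 1 (poor) to 5 (consistently acceptable)
        """
        # Higher risk argues for more federal oversight; a mature assurance
        # system and a strong track record argue for relying more on the
        # contractor's own self-assessment.
        score = activity_risk - 0.5 * (assurance_maturity + past_performance - 2)
        if score >= 4:
            return "full independent federal oversight"
        if score >= 2:
            return "shared oversight with targeted federal reviews"
        return "reliance on contractor self-assessment, with periodic validation"

    # Example: high-risk nuclear work at a site whose assurance system is still
    # immature and whose recent performance has been mixed.
    print(oversight_level(activity_risk=5, assurance_maturity=2, past_performance=2))

Under any such scheme, high-risk activities at sites with immature assurance systems and uneven past performance would continue to warrant full independent federal oversight, which is the situation described at the two laboratories.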
NNSA’s oversight functions would include review and analysis of contractor performance data, direct observations of contractor work activities in nuclear and other facilities, annual assessments of overall performance under the contract, and certifications by the contractor or independent reviewers that the major elements of risk associated with the work performed are being adequately controlled. NNSA stated in its draft policy and in public meetings before the Defense Nuclear Facilities Safety Board that the department plans to phase in this new oversight approach over the next few years. Although we believe that the overall concept of a risk-based approach to federal oversight is reasonable, concerns exist about whether NNSA will be able to effectively carry out this approach while successfully meeting its responsibility for safe and secure operations. For example, considerable work will be needed to successfully implement a risk-based approach to laboratory oversight. According to the Associate Director for Operations at the Los Alamos laboratory, the laboratory’s ability to manage risk is at a beginning level of maturity. Other officials at the Los Alamos laboratory, including officials from the Performance Surety Division and the Quality Improvement Office, said that the laboratory and NNSA have different perceptions of risks at the laboratory and how to manage those risks. In addition, they said that both the laboratory and NNSA have been reacting to problems after they have come to light rather than managing risks to prevent problems from occurring. In addition to these concerns specific to the two laboratories, DOE and others have raised broader concerns about the adequacy of oversight. For example, in November 2003, DOE’s Office of Inspector General released its annual report on management challenges, which identified oversight of contracts and project management as two of the three internal control challenges facing the department. The report stated that these challenges represent issues that, if not addressed, may impede the department’s ability to carry out its program responsibilities and ensure the integrity of its operations. The department also included program oversight of contractors as a significant matter of concern in its performance and accountability report for fiscal year 2003. Furthermore, the Defense Nuclear Facilities Safety Board, in recent public meetings, has expressed concerns about nuclear safety under the proposed NNSA contractor assurance policy and said that NNSA should not delegate responsibility for such an inherently high-risk area of operations. NNSA and the University have not ensured that the laboratories manage major improvement initiatives using a best practices framework that would help ensure successful implementation. For example, one aspect of improving mission support activities at the Los Alamos laboratory has involved a major upgrade of business systems including budgeting, accounting, procurement, and property management. This business systems improvement initiative is planned to take 18 months and be completed at the end of June 2004. Unfortunately, laboratory officials have not followed best practices in managing the improvement initiative, increasing the risk that the initiative may not fully address existing problems or be the most effective approach.
In previous work, we found that leading organizations sustain management improvement initiatives by using a systematic, results-oriented approach that incorporates rigorous measurement of progress. Such an approach typically includes the following elements: (1) define clear goals for the initiative, (2) develop an implementation strategy that sets milestones and establishes responsibility, (3) establish results-oriented outcome measures early in the process to gauge progress toward the goals, and (4) use results-oriented data to evaluate the effectiveness of the initiative and to make additional changes where warranted. For its business systems improvement initiative, the Los Alamos laboratory established an implementation strategy that set milestones and assigned responsibility for carrying out the strategy. For example, the business process improvement plan included over 600 required actions, each of which had a time frame for completion and a laboratory employee responsible for the action. The laboratory is tracking each of the actions to ensure that they are completed. While the Los Alamos laboratory had an implementation strategy for its business systems improvement initiative, it implemented those actions largely without clearly defined goals, results-oriented measures, or results-oriented data to evaluate the effectiveness of its actions. Clear goals not defined. Although the Los Alamos laboratory had a strategy for business improvement that included general goals, it did not define the goals in measurable terms. The laboratory’s primary goal for 2003 was to reduce the risks associated with internal control vulnerabilities in its business systems. This general goal does not provide a measurable end point; it does not indicate how much risk reduction is enough or how changes in risk could be measured. Nor does the even more general objective of restoring trust in the laboratory’s business systems, mentioned by some Los Alamos officials, provide a measurable end point. While addressing internal control problems is important, it does not by itself indicate that improvements are sufficient or effective. Results-oriented outcome measures not established. The laboratory did not establish results-oriented outcome measures for its business improvement initiative. Instead, the laboratory generally focused on measuring the progress of implementing its improvement actions, such as the percentage of improvements that have been implemented. Such measurements, however, do not provide an indication of progress toward the overall goal of reducing the risk of fraud, waste, and abuse. Laboratory officials told us that instead of developing such measures early in the business improvement initiative, their strategy is to define measurable performance goals after the many actions associated with the business improvement initiative are in place. Results-oriented data not used to evaluate effectiveness. The laboratory did not have the results-oriented outcome data needed to evaluate the effectiveness of its business improvement initiative. Again, laboratory officials told us that after the improvement actions are in place, they plan to define and generate results-oriented data to correspond with measurable performance goals. In addition to the lack of results-oriented outcome data, the laboratory also lacks the information necessary to determine if these improvement efforts are cost-effective.
The laboratory has only partial information on the cost of the improvement initiative. The laboratory’s Associate Director for Administration told us that the laboratory has not focused on the costs of the current business improvements. He added that the laboratory will consider costs once the new systems are in place and decisions must be made about balancing the cost of business activities against the risks of removing some internal control activities. Because the Los Alamos laboratory lacked several elements of a best practices approach to managing improvement initiatives, the laboratory did not have a sufficient basis from which to objectively review the results of the improvement initiative, assess the reasonableness of costs incurred, or take further corrective actions if necessary to achieve the overall goals of the initiative. Laboratory officials explained that they had given immediate priority to fixing business system problems rather than measuring and sustaining improved business results. Furthermore, the Associate Director for Administration concluded that good performance measures would take considerable time to develop and that implementing corrective actions was a higher priority. However, by postponing its focus on results and costs, the laboratory increases the risk that the initiative may not fully address existing problems or be the most cost-effective approach to reducing its internal control vulnerabilities to appropriate levels. Laboratory officials said that they generally follow an organized process for implementing improvement initiatives that includes defining the tasks to be accomplished, creating a schedule with milestones, and assigning responsibility for the actions. However, such a process does not include all the elements that we have identified as necessary for a best practices approach. Neither the University nor NNSA ensured that the laboratory followed best practices in managing the business system improvement initiative, even after the department had issued guidance on managing improvement initiatives. In October 2003, DOE issued Notice 125.1, Managing Critical Management Improvement Initiatives, which describes best practices for managing improvement initiatives and requires that those practices be followed by NNSA. However, this notice does not apply to DOE’s contractors, and NNSA has not incorporated similar requirements into NNSA’s contracts with the University to manage the Los Alamos and Lawrence Livermore laboratories. Effectively accomplishing the mission of conducting world-class scientific work at the Los Alamos and Lawrence Livermore laboratories also requires the laboratories to maintain good business practices; accountability for mission support activities; and safeguards against fraud, waste, abuse, and mismanagement. Sufficient emphasis on mission support activities has been lacking, especially at the Los Alamos laboratory, and achieving and sustaining effective performance in mission support will require strong leadership and commitment. Efforts to improve performance in mission support activities are still underway at the laboratories, and it may take considerable time to determine if the efforts are effective. Managing these efforts using best practices will help ensure that they succeed. Keeping these improvements in place over the long term also requires an effective process for assessing contractor performance on mission support activities. We continue to have concerns about NNSA’s oversight approach.
Under its proposed risk-based approach to federal oversight, NNSA would determine the risks associated with a given operation or function, evaluate how good the contractor assurance system is in that area, and also factor in past contractor performance. NNSA would take these factors into consideration to determine whether it could reduce federal oversight of an operation and rely more on the contractor’s assurance that the risk is being adequately addressed and controlled. In our view, such autonomy for the laboratories is inadvisable this soon into the process of recovery from a string of embarrassing revelations. Regardless of whether the University of California retains the contracts when they are competed in 2005 or another organization is selected to operate one or both of the laboratories, until the laboratories have demonstrated the maturity and effectiveness of contractor assurance systems and the adequacy of the contractor’s oversight has been validated, NNSA needs to maintain sufficient independent oversight of mission support activities to fulfill its responsibilities.

We recommend that the Secretary of Energy direct the Administrator of NNSA to

ensure through contract and other management mechanisms that the University of California and any future contractor managing Los Alamos and Lawrence Livermore National Laboratories provide leadership, resources, and oversight to ensure effective mission support activities, including evaluating the impact of improvement actions on performance;

ensure that NNSA performance assessments at the laboratories include evaluations of the adequacy of leadership, resources, and internal controls associated with mission support activities;

ensure that as NNSA implements its proposed oversight and contractor assurance policy at Los Alamos and Lawrence Livermore National Laboratories, NNSA retains sufficient independent federal oversight of mission support activities to fulfill its responsibilities associated with protecting public resources and safety; and

include in its contract with the University of California and any future contractor at Los Alamos and Lawrence Livermore National Laboratories a requirement that major improvement initiatives be managed consistent with the best practices of high-performing organizations, as defined in DOE Notice 125.1.

We provided a draft of this report to NNSA and the University of California for their review and comment. The University provided its comments through NNSA. In written comments, NNSA’s Associate Administrator for Management and Administration generally agreed with the accuracy of the report and acknowledged that both NNSA and the University face challenges in improving mission support activities at the two laboratories. NNSA also cited actions taken or planned that it said met the intent of our recommendations. However, regarding the report’s accuracy, NNSA said our report substantially understates the extent of progress made in correcting the laboratories’ mission support problems. We believe that we have accurately described the progress made in implementing actions aimed at improving mission support. Even though the University has made progress in implementing corrective actions and new requirements, the extent to which these actions have resulted in improvements in mission support performance at the laboratories is still unclear.
In an attachment to the letter, NNSA raised a concern about our discussion of its efforts to oversee the laboratories and our recommendation concerning NNSA’s proposed risk-based approach to laboratory oversight. NNSA disagreed with our reservations about its proposal to rely more on a contractor’s system of management controls and less on NNSA’s own independent oversight. NNSA acknowledged that there have been problems with its oversight in the past but believes that its proposed risk- based approach will be successfully implemented, resulting in improved contractor oversight. Therefore, NNSA said that our recommendation to ensure that it retains sufficient independent oversight of the laboratories’ mission support activities was not necessary. However, as we discussed in our report, a risk-based approach to federal oversight appears reasonable in concept, but the University of California has not demonstrated that its contractor assurance systems can be relied on to prevent or detect fraud, waste, abuse, or mismanagement. And, in the past, NNSA has not been effective at detecting these weaknesses. Until improved performance in these areas has been clearly demonstrated, we continue to be concerned about whether NNSA can effectively implement a risk-based approach to contractor oversight. That is why we are recommending that NNSA retain sufficient independent oversight of mission support activities to ensure that those activities are safe and effective. The attachment to the letter also discussed or referred to the report’s other recommendations. Regarding our recommendation that NNSA’s performance assessments at the laboratories include an evaluation of the adequacy of leadership, resources, and internal controls associated with mission support activities, NNSA said that the performance assessment process it began using in fiscal year 2003 already includes such an evaluation, so this recommendation is no longer required. For fiscal year 2003, the “critical few” performance measures used as a basis for evaluation of contractor performance did include maintaining secure, safe, environmentally sound, effective, and efficient operations and infrastructure in support of mission objectives. However, we do not believe that NNSA’s assessment of contractor performance on this measure is equivalent to evaluating the adequacy of leadership, resources, and internal controls associated with mission support activities. Therefore, we continue to believe that such an evaluation is an important part of NNSA’s oversight of contractor performance and that the recommendation is warranted. Regarding our recommendations that NNSA ensure that any contractors operating the laboratories (1) provide the leadership, resources, and oversight to ensure effective mission support activities and (2) manage improvement initiatives consistent with best practices, NNSA was silent on the usefulness of the recommendations, but stated that the University is committed both to providing the leadership, resources, and oversight to ensure that mission support activities are conducted effectively and to ensuring that its improvement efforts continue to achieve the desired results. We believe that the oversight activities inherent in these recommendations are an important part of improving the management of mission support at the laboratories. NNSA also provided technical comments, which we have incorporated as appropriate. NNSA’s written comments on our draft report are included in appendix II. 
We conducted our review from May 2003 through February 2004, in accordance with generally accepted government auditing standards. Appendix I provides details on our scope and methodology. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy and to the University of California Office of the President. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on this report, please call me at (202) 512-3841. Other staff contributing to this report are listed in appendix III. We addressed three of the five problem areas NNSA identified when it extended the laboratory contracts with the University of California in January 2001. NNSA had identified problems with management accountability; safeguards and security management; facility safety, including nuclear facility operations; critical skills, knowledge, and technical capabilities; and project management. Based on discussions with committee staff, we reviewed three of these initiatives—management accountability, including steps taken by the University to strengthen its oversight of the two laboratories; facility safety, including nuclear safety and facilities management; and project management. The other two areas NNSA identified—safeguards and security management and critical employee skills—were outside the scope of our review. These topics are the subject of other GAO engagements. To identify the status of actions taken by NNSA and the University of California to address the mission support problems that were highlighted when NNSA decided to extend the contracts for the Los Alamos and Lawrence Livermore National Laboratories in January 2001, we reviewed and analyzed prior GAO reports and testimonies, reports from the Inspector General, and other reports in order to determine the types of problems identified in the past. In addition, we reviewed and analyzed documentation on the two contracts, including the annual performance measures and NNSA’s annual evaluations of contractor performance. We reviewed DOE orders and other agencywide requirements that had been incorporated into the contracts, as well as the documentation provided by the laboratories that demonstrated compliance with these requirements. We interviewed officials in the University of California Office of the President, Laboratory Administration Office, to understand the contractor’s perspective and management role. In addition, we interviewed officials at the two laboratories and the NNSA site offices to determine actions they had taken or were taking to address identified problem areas. We also reviewed documentation obtained from DOE’s Office of Environment, Safety and Health regarding nuclear safety violations and similar problems. In addition, we viewed public meetings of the Defense Nuclear Facilities Safety Board (DNFSB), an independent oversight board charged with providing safety oversight of the nuclear weapons complex, and interviewed a DNFSB official at the Los Alamos National Laboratory to obtain DNFSB views on the progress made by Los Alamos in the area of nuclear safety. 
We also interviewed officials from DOE's Office of Engineering and Construction Management regarding project management, the Office of Price-Anderson Enforcement regarding nuclear safety, and the Office of Environmental Management regarding environmental cleanup. Lastly, we interviewed officials with the Department of Energy Inspector General at the Los Alamos and Lawrence Livermore National Laboratories and in Washington, D.C., to obtain additional information on actions taken to address problems and the progress that has been made. To identify the status of the actions taken to address additional mission support problems that have emerged or become more significant since 2001, we took several steps. To obtain information on the problems with business operations at Los Alamos, we reviewed and analyzed reports by DOE's Office of Inspector General and external reviews done by Price-Waterhouse Coopers, Ernst and Young, and others. We also reviewed and analyzed the April 2003 report from the Deputy Secretary of Energy and the NNSA Administrator on the business problems at Los Alamos and the recommendations to the Secretary of Energy. In addition, we interviewed officials with the Los Alamos laboratory, the NNSA Los Alamos site office, and the University of California Laboratory Administration Office on the status of corrective actions, and we reviewed documentation and reports they provided. To obtain information on the continuing problems with emergency management at the Lawrence Livermore laboratory, we reviewed reports of DOE's Office of Independent Oversight and Performance Assurance, the annual performance measures in the contract for the Lawrence Livermore laboratory, and NNSA's annual evaluations of contractor performance. In addition, we interviewed NNSA officials at the Lawrence Livermore site office and reviewed documents provided by them. We also reviewed the Fiscal Year 2004 Energy and Water Development Appropriations Act and corresponding House of Representatives and Conference Reports to understand the requirement to compete DOE contracts that had been awarded more than 50 years ago without competition. To determine the remaining challenges, if any, that NNSA and the University face in sustaining improvements in mission support activities, we reviewed and analyzed Office of Management and Budget Circular A-123, Management Accountability and Control; GAO's Standards for Internal Control in the Federal Government; and DOE Office of Inspector General and Performance and Accountability reports. We also reviewed and analyzed University, laboratory, and independent reports obtained during site visits and during interviews with officials to identify other areas of concern and any potential barriers to implementing and sustaining the improvement efforts. We interviewed officials with the University Laboratory Administration Office, the Los Alamos and Lawrence Livermore laboratories, and the NNSA site offices to obtain their views on remaining challenges the laboratories face and the need for improved oversight of the laboratories. In particular, we discussed DOE's oversight policies, NNSA's proposed oversight policies, and challenges to improving oversight. We also reviewed documents relevant to oversight issues, such as NNSA's draft policy letter on contractor assurance and oversight and the investigative report on the space shuttle Columbia accident, which was prepared for the National Aeronautics and Space Administration.
Finally, for additional insight into remaining challenges, we analyzed the results of public meetings on DOE and NNSA oversight of nuclear safety held by the Defense Nuclear Facilities Safety Board. We conducted our review from May 2003 through February 2004 in accordance with generally accepted government auditing standards. In addition to the individuals named above, Carole Blackwell, Doreen Feldman, Terry Hanford, Jonathan McMurray, Jill Peterson, Robert Sanchez, and Stan Stenersen made key contributions to this report. | The University of California (University) operates the Los Alamos and Lawrence Livermore National Laboratories for the Department of Energy's National Nuclear Security Administration (NNSA). The two research laboratories, with a combined fiscal year 2003 budget of $2.3 billion, have had problems in such mission support areas as managing projects, conducting business operations, and ensuring safe nuclear operations. GAO was asked to describe the actions taken to address mission support problems identified in 2001, as well as problems that have since emerged, and to assess the main challenges to sustaining mission support improvements. For the three mission support areas with problems as of 2001--managing construction and other major projects, maintaining and managing existing facilities, and ensuring safe operations of nuclear facilities--actions are basically complete in the first two areas but not in the third. For all three areas, NNSA incorporated new requirements into the contracts, developed new performance measures, and increased its oversight. The University of California has strengthened oversight of the laboratories by, among other things, establishing a new position of vice president for laboratory management. The laboratories will not fully comply with new requirements for providing a safety analysis of each nuclear facility until mid-2005. The actions taken by NNSA and the University to correct problems in project, facilities, and nuclear safety management were not systemic enough to keep problems from developing in other mission support areas after 2001.
At the Los Alamos laboratory, emerging problems centered on business operations, including inadequate controls over procurement, purchase cards, and property management. The laboratory now has extensive corrective actions underway and expects to have most of the new measures in place by the end of 2005. At the Lawrence Livermore laboratory, the problems centered on emergency planning and preparedness, in that the laboratory had made little progress in developing an emergency management program that complied with NNSA requirements. The laboratory has taken steps over the past 2 years to improve in this area, and NNSA now estimates that the laboratory will have an approved emergency management program by the end of fiscal year 2004. NNSA and the University face three main challenges to sustaining improvements in mission support performance over the long term. The first challenge is for the laboratories to ensure that actions taken to address mission support problems translate into effective performance of mission support activities. A past lack of emphasis on mission support activities was a major factor when problems surfaced, particularly at the Los Alamos laboratory. Ensuring that mission support activities are effective will require sustained leadership, resources, and effective internal controls. The second challenge is ensuring appropriate and effective oversight of mission support activities, which has been inadequate in the past. In particular, a draft NNSA policy that calls for relying primarily on contractors' management controls raises concerns about the future adequacy of NNSA oversight. The third challenge is ensuring that the laboratories follow best practices in developing any future improvement initiatives. In its efforts to improve business systems, the Los Alamos laboratory did not follow best business practices for managing such improvements. Not doing so lessens the laboratory's ability to ensure that the efforts will achieve the desired results. |
According to the Financial Literacy Act, the purpose of the Financial Literacy and Education Commission is to improve financial literacy and education through the development of a national strategy to promote them. The act defines the composition of the Commission—the Secretary of the Treasury and the heads of 19 other federal departments and agencies—and allows the President to appoint up to five additional members. The Commission must hold one public meeting at least every 4 months. It held its first meeting in January 2004 and nine subsequent meetings, most recently in January 2007. The act requires the Commission to undertake certain activities, including (1) developing a national strategy to promote financial literacy and education for all Americans; (2) establishing a financial education Web site to provide information about federal financial literacy education programs and grants; (3) establishing a toll-free hotline; (4) identifying areas of overlap and duplication among federal activities and coordinating federal efforts to implement the national strategy; (5) assessing the availability, utilization, and impact of federal financial literacy and education materials; and (6) promoting partnerships among federal, state, and local governments, nonprofit organizations, and private enterprises. The act requires that the national strategy be reviewed and modified as deemed necessary at least once a year. It also requires the Secretary of the Treasury to develop, implement, and conduct a pilot national public service multimedia campaign to enhance the state of financial literacy and education in the United States. The Treasury Department’s Office of Financial Education provides primary support to the Commission and coordinates its efforts. As of April 2007, the office had assigned the equivalent of about 3 full-time professional staff to handle work related to the Commission and in the past also has received assistance from staff detailed from other federal agencies. The Commission has no independent budget. The act authorized appropriations to the Commission of amounts necessary to carry out its work, and for fiscal year 2005 Congress specified that $1 million should be used for the development and implementation of the national strategy. To develop the National Strategy for Financial Literacy, the Commission formed a national strategy working group of 13 member agencies, issued a call for public comment in the Federal Register, and held six public meetings—five organized around the commercial, government, nonprofit, education, and banking sectors and one for individual consumers. Although the Financial Literacy Act required the Commission to adopt the strategy within 18 months of enactment, or June 2005, the strategy was not publicly released until April 2006. The Commission sought unanimous consent on the national strategy, and Commission members told us that the Treasury Department faced a significant challenge in trying to get 20 federal agencies—each with its own mission and point of view—to unanimously agree to a strategy. A particular source of disagreement involved whether nonfederal entities should be cited by name as illustrative examples in the strategy. The Commission ultimately agreed that it would not name these organizations in the national strategy, but cite them in a separate document issued by Treasury, called the Quick Reference Guide to the strategy. 
The content of the National Strategy for Financial Literacy largely consists of a comprehensive overview of issues related to financial literacy and examples of ongoing initiatives. It describes many major problems and challenges that relate to financial literacy in the United States, identifies key subject matter areas and target populations, and describes what it believes to be illustrations of potentially effective practices in financial education across a broad spectrum of subjects and sectors. As such, the strategy represents a useful first step in laying out key issues and highlighting the need for improved financial literacy. At the same time, as some representatives of the Commission told us, the strategy is fundamentally descriptive rather than strategic. It provides information on disparate issues and initiatives but is limited in presenting a long-term plan of action for achieving its goal. Most notably, the strategy's recommendations are presented as "calls to action," defined as concrete steps that should be taken for improving financial literacy and education. Sixteen of these 26 calls to action are addressed to federal entities, 5 to private or nonprofit organizations, and 5 to the public. However, many of these calls to action are very general and do not discuss an implementation strategy, and others describe initiatives that already exist. For example, one call to action states, "Investors should take advantage of the wealth of high quality, neutral, and unbiased information offered free of charge," but does not lay out a plan for helping ensure that investors will do so. We have previously identified a set of desirable characteristics for any effective national strategy. While national strategies are not required to contain a single, consistent set of attributes, we found six characteristics that can offer policymakers and implementing agencies a management tool to help ensure accountability and more effective results. We found that the National Strategy for Financial Literacy generally addresses the first of these characteristics and partially addresses the other five. The six characteristics we considered follow: Clear Purpose, Scope, and Methodology. An effective strategy describes why the strategy was produced, the scope of its coverage, and how it was developed. The National Strategy for Financial Literacy generally addresses this characteristic. For example, it cites the legislative mandate that required the strategy, the overall purpose, and subsidiary goals such as making it easier for consumers to access financial education materials. At the time of our review, the strategy did not specifically define "financial literacy" or "financial education," and we noted that doing so could provide additional benefit in helping define the scope of the Commission's work. In its April 2007 report to Congress, the Commission provided definitions of these terms that it said would guide its work. Detailed Discussion of Problems and Risks. A strategy with this characteristic provides a detailed discussion or definition of the problems the strategy intends to address, their causes, and the risks of not addressing them. Based on our review, the National Strategy for Financial Literacy partially addresses this characteristic. It identifies specific problems that indicate a need for improved financial literacy and often discusses the causes of these problems.
However, it might benefit further from a fuller discussion of the long-term risks—to the well-being of individuals, families, and the broader national economy—that may be associated with poor financial literacy. As we have reported in the past, a clear understanding of our nation’s overall financial condition and fiscal outlook is an indispensable part of true financial literacy. Due to current demographic trends, rising health care costs, and other factors, the nation faces the possibility of decades of mounting debt, which left unchecked will threaten our economic security and adversely affect the quality of life available to future generations. One element of financial literacy is ensuring that Americans are aware of these potential developments in planning for their own financial futures since, for example, we can no longer assume that current federal entitlement programs will continue indefinitely in their present form. Desired Goals, Objectives, Activities and Performance Measures. The National Strategy for Financial Literacy partially addresses this characteristic, which deals not only with developing goals and strategies to achieve them, but also the milestones and outcome measures needed to gauge results. The strategy does identify key strategic areas and includes 26 calls to action that, although often lacking detail, provide a picture of the types of activities the strategy recommends. However, in general, the strategy neither sets clear and specific goals and objectives, nor does it set priorities or performance measures for assessing progress. Several stakeholders in the financial literacy community that we spoke with noted that the strategy would have been more useful if it had set specific performance measures. The Commission might also have set measurable goals for changing consumer behavior, such as seeking to reduce the number of Americans without bank accounts or increase the number saving for their retirement to a specified figure in the next 5 or 10 years. Without performance measures or other evaluation mechanisms, the strategy lacks the means to measure progress and hold relevant players accountable. Description of Future Costs and Resources Needed. Effective national strategies should include discussions of cost, the sources and types of resources needed, and where those resources should be targeted. The National Strategy for Financial Literacy discusses, in general terms, the resources that are available from different sectors and its Quick Reference Guide provides a list of specific organizations. However, the strategy does not address fundamental questions about the level and type of resources that are needed to implement the national strategy. The strategy does little to acknowledge or discuss how funding limitations could be a challenge to improving financial literacy and offers little detail on how existing resources could best be leveraged. Neither does it provide cost estimates nor does it discuss specifically where resources should be targeted. For example, it does not identify the sectors or populations most in need of additional resources. The strategy also might have included more discussion of how various “tools of government” such as regulation, standards, and tax incentives might be used to stimulate nonfederal organizations to use their unique resources to implement the strategy. Without a clear description of resource needs, policymakers lack information helpful in allocating resources and directing the strategy’s implementation. 
Organizational Roles, Responsibilities, and Coordination. Effective national strategies delineate which organizations will implement the strategy and describe their roles and responsibilities, as well as mechanisms for coordinating their efforts. The National Strategy for Financial Literacy partially addresses these issues. For example, it discusses the involvement of various governmental and nongovernmental sectors in financial education and identifies in its calls to action which agencies will or should undertake certain tasks or initiatives. However, the strategy is not specific about roles and responsibilities and does not recommend changes in the roles of individual federal agencies. Addressing these issues more fully is important given our prior work that discussed the appropriate federal role in financial literacy in relation to other entities and the potential need to streamline federal efforts in this area. In addition, the strategy is limited in identifying or promoting specific processes for coordination and collaboration between sectors and organizations. Description of Integration with Other Entities. This characteristic addresses how a national strategy relates to other federal strategies’ goals, objectives, and activities. The National Strategy for Financial Literacy does identify and describe a few plans and initiatives of entities in the federal and private sectors, and it includes a chapter describing approaches within other nations and international efforts to improve financial education. However, the strategy is limited in identifying linkages with these initiatives, and it does not address how it might integrate with the overarching plans and strategies of these state, local, and private-sector entities. Because the National Strategy for Financial Literacy is more of a description of the current state of affairs than an action plan for the future, its effect on public and private entities that conduct financial education may be limited. We asked several major financial literacy organizations how the national strategy would affect their own plans and activities, and the majority said it would have no impact at all. Similarly, few federal agencies with which we spoke could identify ways in which the national strategy was guiding their work on financial literacy. Most characterized the strategy as a description of their existing efforts. Our report recommended that the Secretary of the Treasury, in concert with other agency representatives of the Financial Literacy and Education Commission, incorporate into the national strategy (1) a concrete definition for financial literacy and education to help define the scope of the Commission’s work; (2) clear and specific goals and performance measures that would serve as indicators of the nation’s progress in improving financial literacy and benchmarks for the Commission; (3) actions needed to accomplish these goals, so that the strategy serves as a true implementation plan; (4) a description of the resources required to help policymakers allocate resources and direct implementation of the strategy; and (5) a discussion of appropriate roles and responsibilities for federal agencies and others, to help promote a coordinated and efficient effort. 
In commenting on our report, Treasury, in its capacity as chair of the Commission, noted that the National Strategy for Financial Literacy was the nation’s first such effort and, as such, was designed to be a blueprint that provides general direction while allowing diverse entities the flexibility to participate in enhancing financial education. The department said that the strategy’s calls to action are appropriately substantive and concrete—setting out specific issues for discussion, conferences to be convened, key constituencies, and which Commission members should be responsible for each task. As noted earlier, in its April 2007 report to Congress, the Commission provided definitions for “financial literacy” and “financial education” to help guide its work. We acknowledge that the national strategy represents the nation’s first such effort, but continue to believe that future iterations of the strategy would benefit from inclusion of the characteristics cited in our report. The Financial Literacy Act required the Commission to establish and maintain a Web site to serve as a clearinghouse and provide a coordinated point of entry for information about federal financial literacy and education programs, grants, and materials. With minor exceptions, the Commission did not create original content for its Web site, which it called My Money. Instead, the site serves as a portal that consists largely of links to financial literacy and education Web sites maintained by Commission member agencies. According to Treasury representatives, the English- language version of the My Money site had more than 290 links as of April 2007, organized around 12 topics. A section on federal financial education grants was added to the site in October 2006, which includes links to four grant programs. Many representatives of private and nonprofit financial literacy initiatives and organizations with whom we spoke were generally satisfied with the Web site, saying that it provided a clear and useful portal for consumers to federal financial education materials. From its inception in October 2004 through March 2007, the My Money Web site received approximately 1,454,000 visits. The site received an average of 35,000 visits per month during the first 6 months after its introduction in October 2004. Use of the site has increased since that time and reached 78,000 visits in April 2006, when the Commission and the Web site received publicity associated with the release of the national strategy. From October 2006 through March 2007, the site averaged about 69,000 visits per month. The number of visits to the My Money Web site has been roughly comparable to some recently launched private Web sites that provide financial education. Some representatives of financial literacy organizations with whom we spoke said the Commission should do more to promote public awareness of the Web site. Commission representatives, however, noted to us several steps that have been taken to promote the site, including, for example, a promotional effort in April 2006 that printed the My Money Web address on envelopes containing federal benefits and tax refunds. However, the Commission has not yet conducted usability tests or measured customer satisfaction for the My Money Web site. The federal government’s Web Managers Advisory Council provides guidance to help federal Web managers implement recommendations and best practices for their federal sites. 
The council recommends testing usability and measuring customer satisfaction to help identify improvements and ensure that consumers can navigate the sites efficiently and effectively. Representatives of the General Services Administration (GSA), which operates the site, acknowledged that these steps are standard best practices that would be useful in improving the site. They said they had not yet done so due to competing priorities and a lack of funding. Without usability testing or measures of customer satisfaction, the Commission does not know whether the Web site’s content is organized in a manner that makes sense to the public, or whether the site’s visitors can readily find the information for which they are looking. Our report recommended that the Commission (1) conduct usability testing to measure the quality of visitors’ experience with the site; and (2) measure customer satisfaction with the site, using whatever tools deemed appropriate, such as online surveys, focus groups, or e-mail feedback. In its April 2007 report to Congress, the Commission said it would conduct usability testing of, and measure customer satisfaction with, its Web site by the second quarter of 2009. In addition to a Web site, the Financial Literacy Act also required that the Commission establish a toll-free telephone number for members of the public seeking information related to financial literacy. The Commission launched the telephone hotline, 1-888-My Money, simultaneously with the My Money Web site in October 2004. The hotline supports both English- and Spanish-speaking callers. A private contractor operates the hotline’s call center and GSA’s Federal Citizen Information Center oversees the operation and covers its cost. According to GSA, the cost of providing telephone service for the hotline was about $28,000 in fiscal year 2006. The hotline serves as an order line for obtaining a free financial literacy “tool kit”—pamphlets and booklets from various federal agencies on topics such as saving and investing, deposit insurance, and Social Security. The tool kit is available in English and Spanish versions, and consumers can also order it via the My Money Web site. The volume of calls to the My Money telephone hotline has been limited—526 calls in March 2007 and an average of about 200 calls per month between February 2005 and February 2006. As part of the national strategy, the Financial Literacy Act required the Secretary of the Treasury to develop, implement, and conduct a pilot national public service multimedia campaign to enhance the state of financial literacy in the United States. The department chose to focus the multimedia campaign on credit literacy among young adults. It contracted with the Advertising Council to develop and implement the multimedia campaign, which is expected to be advertised—using donated air time and print space—on television and radio, in print, and online. According to the Commission’s April 2007 report to Congress, the launch of the campaign is scheduled for the third quarter of 2007. The Financial Literacy Act required that the Commission develop a plan to improve coordination of federal financial literacy and education activities and identify areas of overlap and duplication in these activities. The Commission created a single focal point for federal agencies to come together on the issue of financial literacy and education. 
Some Commission members told us that its meetings—including formal public, working group, and subcommittee meetings—have helped foster interagency communication and information sharing that had previously been lacking. In addition, the Commission’s Web site, hotline, and tool kit have helped centralize federal financial education resources for consumers. Further, the national strategy includes a chapter on federal interagency coordination and several of the strategy’s calls to action involve interagency efforts, including joint conferences and other initiatives. However, the Commission has faced several challenges in coordinating the efforts of the 20 federal agencies that form the Commission. Each of the Commission’s participating federal agencies has different missions and responsibilities and thus different perspectives and points of view on issues of financial literacy. The agencies also differ in their levels of responsibility for and expertise on financial literacy and education. Further, because agencies tend to be protective of their resources, it might be very difficult to recommend eliminating individual agencies’ programs. Moreover, the Commission’s ability to coordinate such major structural change, if it chose to do so, would be constrained by its limited resources in terms of staff and funding. In addition, the Commission has no legal authority to compel an agency to take any action, but instead must work through collaboration and consensus. Given these various constraints, a Treasury official told us that the Commission saw its role as improving interagency communication and coordination rather than consolidating federal financial education programs or fundamentally changing the existing federal structure. To meet a requirement of the Financial Literacy Act that the Commission identify and propose means of eliminating areas of overlap and duplication, the Commission asked federal agencies to provide information about their financial literacy activities. After reviewing these resources, the Commission said it found minimal overlap and duplication among federal financial literacy programs and did not propose the elimination of any federal activities. Similarly, to meet a requirement of the act that it assess the availability, utilization, and impact of federal financial literacy materials, the Commission asked each agency to evaluate the effectiveness of its own materials and programs—and reported that each agency deemed its programs and resources to be effective and worthy of continuance. In both cases, we believe that the process lacked the benefit of independent assessment by a disinterested party. Our report recommended that the Secretary of the Treasury, in conjunction with the Commission, provide for an independent third party to carry out the review of duplication and overlap among federal financial literacy activities as well as the review of the availability, utilization, and impact of federal financial literacy materials. In response to these recommendations, the Commission reported in its April 2007 report to Congress that it would identify an independent party to conduct assessments on both of these matters, with the first series of independent assessments to be completed in 2009. The Financial Literacy Act also charged the Commission with promoting partnerships between federal agencies and state and local governments, nonprofit organizations, and private enterprises. 
Partnerships between federal agencies and private sector organizations are widely seen as essential to making the most efficient use of scarce resources, facilitating the sharing of best practices among different organizations, and helping the federal government reach targeted populations via community-based organizations. Treasury officials have cited several steps the Commission has taken to promote such partnerships. These have included calls to action in the Commission’s national strategy that encouraged partnerships; community outreach and events coordinated by Treasury and other agencies; and public meetings designed to gather input on the national strategy from various stakeholders. In general, the private and nonprofit financial literacy organizations with which we spoke said that these steps had been useful, but that their relationships with federal agencies and other entities have changed little overall as a result of the Commission. Several private and nonprofit national organizations have extensive networks that they have developed at the community level across the country, and some of these organizations suggested the Commission could do more to mobilize these resources as part of a national effort. Some stakeholders told us they also felt the Commission could do more to involve state and local governments. Greater collaboration by the Commission with state and local governments may be particularly important given the critical role that school districts can play in improving financial literacy. The Commission might consider how the federal government can influence or incentivize states or school districts to include financial education in school curriculums, which many experts believe is key to improving the nation’s financial literacy. Given the wide array of state, local, nonprofit, and private organizations providing financial literacy programs, the involvement of the nonfederal sectors is important in supporting and expanding Commission efforts to increase financial literacy. Thus far, the Commission has taken some helpful steps to promote partnerships, consisting mainly of outreach and publicity. As the Commission continues to implement its strategy, we believe it could benefit from further developing mutually beneficial and lasting partnerships with nonprofit and private entities that will be sustainable over the long term. Our report recommended that the Commission consider ways to expand upon current efforts to cultivate sustainable partnerships with nonprofit and private entities. As part of these efforts, we recommended that the Commission consider additional ways that federal agencies could coordinate their efforts with those of private organizations that have wide networks of resources at the community level, as well as explore additional ways that the federal government might encourage and facilitate the efforts of state and local governments to improve financial literacy. In commenting on our report, Treasury noted that it had a long history of partnerships with nonfederal entities and would consult with the Commission about how to work more closely with the types of organizations described in our report. On April 17, 2007, the Commission held the inaugural meeting of the National Financial Education Network, which it said was intended to create an open dialogue and advance financial education at the state and local level. 
In conclusion, in the relatively short period since its creation, the Commission has played a helpful role by serving as a focal point for federal efforts and making financial literacy a more prominent issue among the media, policymakers, and consumers. We recognize the significant challenges confronting the Commission—most notably, the inherent difficulty of coordinating the efforts of 20 federal agencies. Given the small number of staff devoted to operating the Commission and the limited funding it was provided to conduct any new initiatives, we believe early efforts undertaken by the Commission represent some positive first steps. At the same time, more progress is needed if we expect the Commission to have a meaningful impact on improving the nation's financial literacy. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact Yvonne D. Jones at (202) 512-8678, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Jason Bromberg, Assistant Director; Nima Patel Edwards; Eric E. Petersen; William R. Chatlos; Emily R. Chalmers; and Linda Rego. | The Financial Literacy and Education Improvement Act created, in December 2003, the Financial Literacy and Education Commission. This statement is based on a report issued in December 2006, which responded to the act's mandate that GAO assess the Commission's progress in (1) developing a national strategy; (2) developing a Web site and hotline; and (3) coordinating federal efforts and promoting partnerships among the federal, state, local, nonprofit, and private sectors. To address these objectives, GAO analyzed Commission documents, interviewed its member agencies and private financial literacy organizations, and benchmarked the national strategy against GAO's criteria for such strategies. The National Strategy for Financial Literacy serves as a useful first step in focusing attention on financial literacy, but it is largely descriptive rather than strategic and lacks certain key characteristics that are desirable in a national strategy. The strategy provides a clear purpose, scope, and methodology and comprehensively identifies issues and challenges. However, it does not serve as a plan of action designed to achieve specific goals, and its recommendations are presented as "calls to action" that generally describe existing initiatives and do not include plans for implementation. The strategy also does not fully address some of the desirable characteristics of an effective national strategy that GAO has previously identified. For example, it does not set clear and specific goals and performance measures or milestones, address the resources needed to accomplish these goals, or fully discuss appropriate roles and responsibilities. As a result of these factors, most organizations that GAO spoke with said the strategy was unlikely to have a significant impact on their financial literacy efforts.
The Commission has developed a Web site and telephone hotline that offer financial education information provided by numerous federal agencies. The Web site generally serves as an effective portal to existing federal financial literacy sites. Use of the site has grown, and it averaged about 69,000 visits per month from October 2006 through March 2007. The volume of calls to the hotline--which serves as an order line for a free tool kit of federal publications--has been limited. The Commission has not tested the Web site for usability or measured customer satisfaction with it; these are recommended best practices for federal public Web sites. As a result, the Commission does not know if visitors are able to find the information they are looking for efficiently and effectively. The Commission has taken steps to coordinate the financial literacy efforts of federal agencies and has served as a useful focal point for federal activities. However, coordinating federal efforts has been challenging, in part because the Commission must achieve consensus among 20 federal agencies, each with its own viewpoints, programs, and constituencies, and because of the Commission's limited resources. A survey of overlap and duplication and a review of the effectiveness of federal activities relied largely on agencies' self-assessments rather than the independent review of a disinterested party. The Commission has taken steps to promote partnerships with the nonprofit and private sectors through various public meetings, outreach events, and other activities. The involvement of state, local, nonprofit, and private organizations is important in supporting and expanding Commission efforts to increase financial literacy, and our report found that the Commission could benefit from further developing mutually beneficial and lasting partnerships with these entities that will be sustainable over the long term. |
The Spallation Neutron Source Project is, according to DOE and its scientific advisers, vitally important to the nation's scientific community. DOE estimates that as many as 2,000 scientists from universities, industries, and federal laboratories will use this facility, which is scheduled to be completed in December 2005. The five DOE national laboratories collaborating on the project are the Lawrence Berkeley National Laboratory in California, Los Alamos National Laboratory in New Mexico, Brookhaven National Laboratory in New York, Argonne National Laboratory in Illinois, and Oak Ridge National Laboratory in Tennessee. Each of the five participating laboratories is responsible for designing, building, and assembling separate components of the project. Oak Ridge National Laboratory's current operating contractor is Lockheed Martin Energy Research Corporation, which serves as the project's overall manager. Several advisory committees provide scientific advice, and a DOE review process gives technical and managerial advice. According to current estimates, the facility will take 7-¼ years to complete and will cost $1.36 billion. DOE approved the conceptual design for the project in June 1997 and has spent about $39 million on the project through fiscal year 1998. The Congress approved the start of the construction phase in fiscal year 1999 and provided $130 million for this purpose. DOE expects actual construction to begin in mid-2000. We reviewed the project in the context of our past experiences in examining large DOE construction projects. As this Subcommittee is well aware, DOE has not always managed large projects successfully. Our 1996 report on DOE's management of major system acquisitions (defined as projects costing about $100 million and more) found that many of DOE's large projects have cost more and taken longer to complete than planned. In the past, many were terminated before they were completed, and others never performed as expected. One reason for the cost and schedule problems associated with these projects was the lack of sufficient DOE personnel with the appropriate skills to oversee contractors' operations. Most recently, we examined DOE's efforts to clean up large concentrations of radioactive waste at the Department's Hanford Site in southeast Washington State. Although DOE is making changes to improve its management of this project, we found early indications that DOE may be having difficulty ensuring that the proper expertise is in place. In a 1997 review, DOE reported that the success of the project depends on having a project director skilled in accelerator science and in the management of large construction projects. "It is critical that the permanent leadership for the [project] be named as soon as possible," the review said. "It will also be a mark of ability to execute this project that key scientific, technical, and management leadership, committed to making the [project] succeed, can be successfully recruited to [the project] before the project is funded by Congress." Despite this recognized need and the Congress's approval of the project's construction phase 5 months earlier (the Congress provided funding for design activities beginning in fiscal year 1996), Oak Ridge National Laboratory has just announced the hiring of an experienced project director. In the interim, the laboratory's associate director has been serving as the project director.
This announcement came shortly after DOE’s internal review committee and an independent review team strongly recommended that a project director with the right skills be recruited as quickly as possible. Other key positions remain unfilled. The project is still without a technical director, and DOE’s review committee recently concluded that there was still “an inadequate level of technical management at the laboratory.” This committee also noted that a full-time operations manager should be appointed and that a manager is needed to oversee the construction of the facilities that will house the equipment and instruments being built by the individual laboratories. In addition, the committee reported that the slow progress in the facilities portion of the project is due in large part to the relative inexperience of the project facilities staff. DOE also found that the designs of each of the collaborating laboratories’ component parts have not effectively been integrated into the total project, primarily because Oak Ridge National Laboratory’s project office lacks the appropriate technical expertise to integrate the designs and to plan for commissioning and operating the facility. Several other key project officials were hired later than originally planned. For example, a manager for environment, safety, and health was hired in December 1998, and the architect-engineering/construction management contractor was hired in November 1998. DOE had hoped to fill these important positions before the construction phase began in October 1998. Because of these delays in hiring staff, the project is underspending its appropriation. Obligations and costs are currently running at about 60 percent of the planned budget (through 4 months of the project’s 87-month schedule). A major reason for the slow pace of spending is that Los Alamos National Laboratory only recently (Nov. 1998) hired a permanent team leader and consequently is behind the other laboratories in completing several project tasks. In addition, the architect-engineering/construction management contract was finalized later than originally planned. DOE officials told us they are confident, however, that the current spending pace will not affect the project’s overall schedule and that the current spending patterns represent the prudent use of funds. The project’s cost and schedule estimates are not fully developed and thus do not yet represent a reliable estimate (baseline). According to a senior DOE official, the current project team does not have the expertise to develop a detailed cost estimate, preferring instead to accept laboratories’ cost estimates that lack supporting detail. This shortfall in expertise has delayed the development of an accurate estimate of the project’s total cost. DOE’s independent reviewer expressed a similar concern, noting that the cost estimate in the project is based on its design and that “higher quality estimates are needed for a credible baseline.” Of particular concern are the inadequate allowances for contingencies (unforeseen costs and delays) built into the project’s current cost and schedule estimates. The project’s cost estimate allows 20 percent for contingencies, well below the 25-30-percent allowance that DOE and contractor officials believe is necessary for a project of this scope and complexity. Concerned about the low contingency allowance, DOE’s independent review team reported that the project will not be completed at the current cost estimate. 
The project's contingency allowance for delays is also too low, according to current project officials. The project allows about 6 months for delays, well below the 9 to 12 months desired by project managers. DOE and laboratory project managers told us they are confident that they can increase these contingency allowances without jeopardizing the project's overall cost and schedule. The complex management approach that DOE has devised for the project creates a need for the strongest possible leadership. In particular, integrating the efforts of five national laboratories on a project of this scope requires an unprecedented level of collaboration. While staff from multiple laboratories collaborate on other scientific programs, DOE has never attempted to manage a multilaboratory effort as large and complex as this one. According to DOE, a multilaboratory structure was chosen to take advantage of the skills offered by the individual laboratories. Although Oak Ridge National Laboratory serves as the project's overall manager, staff at each of the participating laboratories do not report to Lockheed Martin Energy Research Corporation, the current Oak Ridge contractor that is managing the project. Instead, the collaborating laboratory staff report to their respective laboratory contractors—the educational institutions or private enterprises that operate the laboratories. In addition, the five laboratories participating in the project are overseen by four separate DOE operations offices. Further complicating this reporting structure, four of the five laboratories receive most of their program funding from DOE's Office of Science, under whose leadership the project is funded and managed. Los Alamos, however, is primarily funded by DOE's Defense Programs, a different component within DOE's complex organizational structure. Achieving a high level of collaboration among the diverse cultures, systems, and processes that characterize the participating laboratories, operations offices, and headquarters program offices is widely recognized as the project's biggest management challenge. To facilitate collaboration among the laboratories, DOE has developed memorandums of agreement between and among the laboratories and with the four DOE operations offices that oversee the laboratories. These agreements articulate each cooperating laboratory's role and expectation for its component of the project. However, these agreements are not binding and represent the laboratory director's promise to support the project and cooperate with Oak Ridge in ensuring that required tasks at each laboratory are completed on time and within cost. DOE told us that only two of the laboratories—Los Alamos National Laboratory and Argonne National Laboratory—have the project as a performance element in their contracts with DOE. “A construction project of this scale and complexity needs a single, experienced individual in charge of all aspects of the project. This individual must have the responsibility and the full authority needed to direct all aspects of the project. Because of the multi-laboratory collaborative nature of the project, the project leader must be able to directly access the management of the collaborating laboratories at the highest level.” DOE's management approach for this project raises several risks. The new project director will remain an employee of Argonne National Laboratory (operated by the University of Chicago), but will work directly with Lockheed Martin Energy Research Corporation.
The project director will not have direct authority over other laboratories' staff and will, in our opinion, be handicapped by having to work through many other officials to achieve results on a day-to-day basis. Senior DOE officials responded to our concerns by noting that the project director approves all work packages authorizing funding to the participating laboratories, and thereby exercises direct control over the project. DOE officials told us that the participating laboratory directors are highly committed to the project and that senior DOE managers will not hesitate to intervene to resolve disputes. Finally, DOE officials observed that the DOE review committee and the independent reviewer have praised the level of collaboration already achieved on the project. We agree that the laboratories appear to be collaborating on the project at this very early stage, but we remain concerned about DOE's reliance on memorandums of agreement in the absence of direct control. In commenting on the collaboration achieved to date, the independent reviewer also noted that "the laboratories have traditionally operated in an independent and decentralized manner which contributes to the Team's concern in this area." The independent reviewers also said that there is not a clear chain of command in the project's current organizational structure. Contributing to our concerns is well-documented evidence of problems in the laboratories' chain of command. We, along with many other reviewers, have reported that the Department lacks an effective organizational structure for managing the laboratories as a system. We noted that the absence of a senior official in the Department with program and administrative authority over the operations of all the laboratories prevents effective management of the laboratories on a continuing basis. DOE officials told us that the Under Secretary is paying close attention to the project and will intervene as necessary to resolve disputes. DOE officials have also told us that the many advisory committees created to provide technical and managerial assistance serve to enhance the laboratories' collaboration. DOE and laboratory officials have cited several instances in which the laboratories have worked together in a highly effective manner, such as the recent completion of the Advanced Photon Source at Argonne National Laboratory. These achievements, however, are not representative of the current challenges facing DOE and its laboratories and do not resolve management problems inherent in the project's current organizational structure and reporting relationships. Mr. Chairman, this concludes our statement. We would be happy to respond to any questions from you or Members of the Subcommittee.
Pursuant to a congressional request, GAO discussed the Department of Energy's (DOE) management of the Spallation Neutron Source Project, focusing on the: (1) project's cost and schedule; and (2) effectiveness of the collaborating laboratories' coordination. GAO noted that: (1) the project is not currently in trouble, but warning signs in three key areas raise concerns about whether it will be completed on time and within budget; (2) DOE has not assembled a complete team with the technical skills and experience needed to properly manage the project; (3) a permanent project director was just hired last week, 5 months after Congress approved the start of construction and over a year after the project's design was approved; (4) other important positions remain unfilled, including those of a technical director and an operations manager; (5) cost and schedule estimates for the project have not been fully developed; (6) furthermore, the project's contingency allowances for unforeseen costs and delays are too low for a project of this size and scope, according to project managers and DOE; (7) DOE's approach to managing the project requires an unprecedented level of collaboration among five different laboratories, managed through DOE's complex organizational structure; and (8) coupled with DOE's history of not successfully completing large projects on time and within budget, these warning signs make the Spallation Neutron Source project a significant management challenge for DOE and suggest a need for continued close oversight.
The Department of Defense (DOD) refers to the amount of secondary inventory that it needs to have on hand or on order to support current operations as the requirements objective. The requirements objective includes inventory requirements for a reorder point and an economic order quantity. The reorder point is the point at which inventory replenishment will normally prevent out-of-stock situations from occurring. The economic order quantity is the amount of inventory that, when ordered and received, results in the lowest total cost for ordering and holding inventory. When the combined total of on-hand and on-order inventories falls to or below the reorder point, an item manager generally places an order for additional inventory so that the total of on-hand and on-order inventories is equal to the requirements objective. Subsequently, on-hand inventory is used to satisfy customer requisitions that are received after the item manager orders new inventory, and thus the total of on-hand and on-order inventories is generally less than the requirements objective. Furthermore, an item’s reorder point can move up or down over time and—depending on the item—may include one or more of the following: war reserves, unfilled requisitions, a safety level to be on hand in case of minor interruptions in the resupply process or unpredictable fluctuations in demand, minimum quantities for essential items for which demand is not normally predicted (also referred to as insurance items), inventory to satisfy demands while broken items are being repaired, inventory to satisfy demands during the period between when the need to replenish an item through a purchase is identified and when a contract is let (also referred to as administrative lead time), and inventory to satisfy demands during the period between when a contract for inventory is let and when the inventory is received (also referred to as production lead time). Because the reorder point provides for inventory to be used during the time needed to order and receive inventory and for a safety level, item managers are able to place orders so that the orders arrive before out-of-stock situations occur. Generally, an item manager orders an amount of inventory needed to satisfy both the reorder point requirement and the economic order quantity. Between September 30, 1999, and September 30, 2001, DOD’s inventory on hand increased by $5.6 billion and inventory on order increased by $1.7 billion, reversing past inventory reductions. These inventory increases were primarily due to the Navy reporting aviation parts held by ships and air squadrons that were previously not reported and to overall DOD inventory requirements increases. In addition, large imbalances in the inventory continue to exist. As of September 30, 2001, over 1.7 million items had $38 billion of inventory on hand or on order that exceeded the items’ current inventory operating requirements of $24.9 billion. We also identified 523,000 items that did not have enough inventory on hand or on order to meet the items’ current inventory operating requirements. While the services are implementing management changes that will reduce the size of DOD’s inventory, long-standing and systemic inventory management problems continue to exist. As of September 30, 2001, DOD’s on-hand inventory was $69.8 billion, up $5.6 billion, or 9 percent, since September 30, 1999, and on-order inventory was $9.9 billion, up $1.7 billion, or 21 percent (see figs. 1 and 2). 
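The replenishment mechanics described at the beginning of this section, in which an item manager compares on-hand plus on-order inventory to the reorder point and then orders enough to return to the requirements objective, can be illustrated with a short sketch. The quantities, cost figures, and use of the textbook economic order quantity formula below are hypothetical assumptions; DOD's actual computations include additional requirement elements such as war reserves, safety levels, and lead-time demand.

```python
"""Minimal sketch of the reorder-point logic described above.

The numbers, cost assumptions, and the textbook economic order quantity (EOQ)
formula are hypothetical; DOD's actual requirements computations differ in
detail and include additional elements (war reserves, safety levels, and
lead-time demand, among others).
"""

import math


def economic_order_quantity(annual_demand: float, order_cost: float,
                            unit_holding_cost: float) -> float:
    """Textbook EOQ: the order size that minimizes ordering plus holding cost."""
    return math.sqrt(2 * annual_demand * order_cost / unit_holding_cost)


def order_decision(on_hand: int, on_order: int, reorder_point: int,
                   eoq: float) -> int:
    """Return the quantity an item manager would order, if any.

    When on-hand plus on-order inventory falls to or below the reorder point,
    order enough to bring the total back up to the requirements objective
    (reorder point plus EOQ); otherwise order nothing.
    """
    requirements_objective = reorder_point + math.ceil(eoq)
    available = on_hand + on_order
    if available <= reorder_point:
        return requirements_objective - available
    return 0


if __name__ == "__main__":
    eoq = economic_order_quantity(annual_demand=1200, order_cost=500.0,
                                  unit_holding_cost=25.0)
    # Example: 40 units on hand, 60 on order, reorder point of 150 units.
    qty = order_decision(on_hand=40, on_order=60, reorder_point=150, eoq=eoq)
    print(f"EOQ: {eoq:.0f} units; quantity to order now: {qty} units")
```

In this toy example the 100 units available fall below the reorder point of 150, so the item manager orders 270 units to bring on-hand plus on-order inventory back up to the requirements objective of 370.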
As indicated in figures 1 and 2, the period September 30, 1996, to September 30, 1999, shows a decline in on-hand and on-order inventories. During this period, inventory on hand dropped $5.5 billion and inventory on order dropped $0.7 billion. A Navy inventory reporting change and increased DOD inventory requirements contributed significantly to the growth in DOD’s inventory. In 1996, the Navy began including aviation inventories held by ships and air squadrons in its inventory reports. Most of the change occurred in 1999 when the Navy began reporting parts held by aircraft carriers. Previously, the Navy considered these inventories as having been sold to ships and installations and not as reported inventory. Based on Navy records, we estimate that parts valued at about $3.3 billion were added to the reported inventory as a result of the accounting change. A similar change by the Army resulted in an inventory increase of $0.3 billion between September 30, 1999, and September 30, 2001. These Navy and Army inventory reporting changes correspond to the reporting methods already in use by the Air Force. In addition, overall DOD inventory requirements increased from $40.6 billion as of September 30, 1999, to $51.2 billion as of September 30, 2001. Increased requirements can affect an item’s reorder point and economic order quantity. Consequently, an increase in requirements can affect when item managers place orders and the amount of inventory they purchase and can affect how much inventory is on hand. For example, if the requirements increase and enough inventory is not on hand or on order to satisfy the requirements, an item manager will place an order for additional inventory. When the additional inventory is received, inventory levels will also be increased. Since 1995 we have reported on imbalances in DOD’s inventory, and our current work shows that these imbalances continue to exist. Our comparison of September 30, 2001, on-hand and on-order inventories to the requirements objectives for 2.4 million items showed that 1.7 million items, or 70 percent, had inventory on hand or on order that exceeded the requirements, and 523,000 items, or 21 percent, did not have enough inventory on hand or on order to satisfy all of the requirements. The remaining 209,000 items, or 9 percent, had the right amount of inventory on hand and/or on order to satisfy all requirements. The 1.7 million items had $22.1 billion of inventory on hand and $2.8 billion of inventory on order that satisfied requirements and an additional $36 billion of inventory on hand and $2.0 billion on order that exceeded requirements (see table 1). Overall, the amount of DOD’s inventory that exceeds current operating requirements has decreased since 1996. On-hand inventory that exceeds current operating requirements decreased from $41.3 billion, or 59 percent, of on-hand inventory on September 30, 1996, to $36.1 billion, or 52 percent, of the $69.8 billion inventory on hand on September 30, 2001. During the same period, DOD’s inventory on order that exceeds requirements decreased from $1.7 billion, or 19 percent, of on-order inventory to $1.6 billion, or 16 percent, of the $9.9 billion inventory on order. In 1997, we reported that requirement decreases contributed to items having inventory on hand that exceeded current requirements. 
Similarly, in 2000, we reported that while inventory managers made inventory purchases that were supported by requirements, subsequent requirement decreases resulted in the purchases being in excess of requirements. We identified 523,000 items that did not have enough inventory on hand or on order to satisfy all of the requirements that make up the requirements objective. The items had requirements valued at $23.4 billion that were partially satisfied by $7.7 billion of inventory on hand and $5.3 billion of inventory that was on order (see fig. 3). The remaining $10.4 billion of requirements could be satisfied by purchases. The amount of inventory exceeding or failing to meet inventory requirements indicates that many of the long-standing and systemic inventory management problems previously identified in our Performance and Accountability Series still exist. We recommended in these reports that DOD address the long-standing weaknesses that limit the economy and efficiency of its logistics operations, including having too much inventory on hand and on order and shortages of key spare parts. Appendix II lists past reports and recommendations relating to DOD’s long-standing inventory management problems. The services are implementing management changes that will reduce the size of DOD’s reported inventory and the amount of inventory that satisfies requirements. These changes include an initiative to transfer the traditional DOD inventory and technical support function to parts contractors and initiatives to implement new inventory management systems. The services have initiatives that will transfer the traditional DOD inventory and technical support function to parts contractors. For example, as of September 30, 2001, the Navy had about 22,000 items that were managed by contractors. In some cases, Navy-owned inventory is being replaced by contractor-owned inventory. The Navy was paying $330 million for contractors to manage the 22,000 items, and the Navy planned to increase that amount to over $700 million for the next fiscal year. According to an official from the Office of the Secretary of Defense, contractor-owned inventories used to support military operations are not included in its inventory report. Consequently, the use of contractor- owned inventories will decrease the growth of DOD’s inventory. In addition, new inventory management systems that the military components are implementing may also affect the amount of DOD’s reported inventory. For example, the Air Force’s requirements for insurance items decreased by $600 million between 1999 and 2001. According to the Air Force, the requirements decreased as a result of implementing a new requirements determination system that changed the way in which it computed those requirements. The Army, the Navy, and the Defense Logistics Agency are also in the process of developing new inventory management systems. However, the impact of the implementation of these new inventory management systems on the size of DOD’s inventory is not yet known. Although the initiatives described above will reduce the size of DOD’s inventory, they do not address the long-standing and systemic problems that are limiting the economy and efficiency of the department’s logistics operations. DOD’s overall inventory requirements increased by $10.6 billion, or 26 percent, between the end of fiscal years 1999 and 2001, with some of the Navy’s requirements being overstated. The Navy was responsible for $4.7 billion of the overall $10.6 billion increase. 
A large part of the Navy increase, $3.4 billion, was due to the Navy reporting change we discussed in the previous section—that is, reporting aviation parts held by ships and air squadrons as inventory that were previously not reported. Consequently, the Navy also began reporting the associated requirements. The remaining $1.3 billion Navy increase was due to a variety of reasons related to inventory cost and usage. However, some Navy increases were caused by inaccurate data used to compute administrative lead time requirements, and as a result, those requirements are overstated. DOD’s overall inventory requirements increased from $40.6 billion as of September 30, 1999, to $51.2 billion as of September 30, 2001, an increase of $10.6 billion, or 26 percent. Army, Navy, and Defense Logistics Agency inventory requirements increased significantly while the Air Force’s requirements decreased (see table 2). The Navy was responsible for the largest share of DOD’s overall inventory requirements increase, with $4.7 billion of the $10.6 billion inventory change. All requirements that comprise DOD’s requirements objective increased except for unfilled requisitions and nonrecurring lead time requirements used by the Air Force. Requirements for safety levels, items held as insurance against outages; economic order quantities; and production lead time increased most significantly. Appendix III provides a detailed comparison of the military components’ inventory requirements as of September 30, 1999, and September 30, 2001. Table 2 shows a decrease in the Air Force’s requirements. According to an Air Force Materiel Command official: Higher congressional funding levels allowed the Air Force to buy and repair more of the items that were needed and reduce requirements for unfilled requisitions. Requirements for items held as insurance against outages decreased as a result of implementing a new requirements determination system that changed the way in which the Air Force computed those requirements. Requirements for war reserves decreased as a result of decreased need for F-16 fuel tanks. Navy requirements increased $4.7 billion between September 30, 1999, and September 30, 2001, primarily due to a change in how the Navy accounts for aviation inventory requirements. The remaining Navy increase was due to such reasons as price increases and increased usage of items. Also, because the Navy has not updated the data used to compute administrative lead time requirements for some aviation items, those requirements are overstated. The Navy’s $4.7 billion increase was not uniform across all requirements. Safety level, repair cycle, production lead time, economic order quantity, and insurance items requirements all increased by approximately $5.0 billion. However, requirements for Navy war reserves, unfilled requisitions, and administrative lead time actually decreased during this period, by $331 million (see fig. 4). A large part of the Navy’s increase was due to a change in the way the Navy accounts for aviation inventory requirements for parts held by ships and air squadrons. According to the Navy, prior to 1996, aviation items that inventory control points sold to customers onboard ships and at installations were not accounted for in its inventory. In 1996, the Navy began accounting for aviation items held by ships and installations by recognizing these requirements and assets in its inventory system and recording them as insurance item requirements. 
The Navy made the change in order to provide item managers visibility of the inventory and associated requirements and assets. Most of the increase in requirements and inventory occurred after 1999 when the Navy began to include aviation parts held on aircraft carriers. Generally, the change in accounting for these requirements resulted in a $3.4 billion increase in Navy insurance item requirements, from $2.4 billion on September 30, 1999, to $5.8 billion on September 30, 2001. To gain insight into why increases in the Navy's inventory requirements occurred, we compared the 307,000 items the Navy managed as of September 30, 1999, to the 309,000 items managed as of September 30, 2001, and identified 279,000 items that were managed in both years. Overall, the value of the 279,000 items increased $4.2 billion between September 30, 1999, and September 30, 2001 (see table 3). Of this amount, $3.1 billion was the result of increased inventory requirement quantities and $1.1 billion was due to price changes. About 37,000 items accounted for $4.3 billion in inventory requirements increases, and another 37,000 items accounted for $1.2 billion in inventory requirements decreases. There was no change in inventory requirement quantities for the remaining 205,000 items during the same period of review. We also reviewed in more detail 90 of the 279,000 items. We selected the 90 items because they had large increases in requirements and accounted for $1.1 billion of the $4.2 billion requirements increase associated with the 279,000 items. For 37 of the 90 items, insurance requirements increases accounted for $454 million of the 90 items' $1.1 billion total requirements increase between 1999 and 2001. Of the $454 million, $428 million of the increase was attributable to including existing aviation requirements and $26 million was attributable to new aviation requirements. For example, the insurance requirement for an aviation radar transmitter, valued at $446,000 each and used on the F-18 and the AV-8B aircraft, increased from 44 transmitters on September 30, 1999, to 196 on September 30, 2001. The requirement increased by 128 transmitters as a result of recognizing existing aviation requirements in the Navy's inventory and by another 24 transmitters as a result of new requirements for these transmitters in newer versions of the F-18 aircraft. In addition to the $454 million increase in insurance item requirements, our analysis of the 90-item sample identified a wide variety of additional reasons for the increases in requirements. For example, increased usage of items resulted in requirements increasing by $294 million for 46 items. Increased usage was often the result of changes in demand for an item, defective parts needing to be replaced, and items wearing out at a faster rate than expected. Changes in the Navy's stock, overhaul, or operational policies; the inability to find a commercial source for an item; and the unavailability of material needed to manufacture items were among the other reasons for requirements increases. Table 4 summarizes the reasons identified for the requirements increases. Additional information and examples are discussed in more detail in appendix IV. The Navy has not formally updated the data it uses to project administrative lead time requirements for aviation parts since 1999, and thus these requirements are overstated.
Before 1999, the Navy used the actual administrative lead time from an item’s previous procurement as a basis for projecting its future administrative lead time requirements for aviation parts. In 1999, the Navy began using an administrative lead time matrix for computing the requirements. Under this approach, the Navy places aviation items into matrix cells based on the type of item being purchased, the size of the potential purchase, and the type of contract to be used to purchase the item. The Navy believes that items that are similar and are purchased in a similar manner will have similar lead times. As of September 30, 2001, the Navy had computed $895 million of administrative lead time requirements for its 101,000 aviation parts. When the Navy implemented the matrix approach for computing administrative lead time requirements in 1999, it based the requirements on actual fiscal year 1997 lead time data. Since 1997 the Navy has generally reduced its actual administrative lead time. While the Navy has recomputed its administrative lead times using statistical techniques aimed at reducing fluctuations from year to year, it has not formally updated the administrative lead time matrix used to compute requirements to reflect the most current, lower data. However, in response to our inquiries, the Navy, in December 2002, reviewed the administrative lead time data used to compute requirements and found that the data had been revised. Item manager reviews and the purchase of items that had not recently been purchased led to changes to the lead time data in the files. Our analysis of the changes showed that the revised data had lowered the administrative lead times for most of the matrix cells and that the Navy-computed lead times would be further reduced for most matrix cells. For example, revised data reduced the lead time from 200 days to 183 days for medium-sized sole-source contracts for repairable items. The Navy-computed lead time further reduced the lead time to 130 days. In contrast, for large-sized sole-source contracts for repairable items, the revised data reduced the lead time from 280 days to 183 days while the Navy-computed lead time set it at 195 days. Navy officials responsible for aviation parts have been reluctant to use the lower Navy-computed lead time data. Even though the Navy uses a technique to reduce fluctuations in its computed lead time from year to year, the officials believe that annual changes in the lead time will result in terminating contracts for parts in 1 year and possibly having to repurchase the same items the next year. The Navy is overstating its administrative lead time requirements for aviation items by not using the most current data available for computing those requirements. Because the most current data reflects the Navy’s reduced administrative lead time, using old data unnecessarily results in inaccurate and overstated requirements that can lead to unnecessary purchases. The Navy is concerned that using the most current data will result in cycles of ordering inventory, canceling the orders, and subsequently reordering the items. We believe that using the most current data that is based on statistical techniques aimed at reducing potential fluctuations in the requirements will result in stable and more accurate administrative lead time requirements and help the Navy avoid unnecessary purchases. 
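The matrix approach described above can be sketched as a simple lookup keyed by item type, purchase size, and contract type. In the sketch below, only the lead-time figures for the sole-source repairable cells (200 and 280 days from the 1997-era data, 130 and 195 days from the Navy-computed data) come from the discussion above; the remaining cell, the category labels, and the purchase-size thresholds are hypothetical assumptions used to show the mechanics.

```python
"""Illustrative sketch of an administrative lead time matrix lookup.

Only the sole-source repairable lead times (200/280 days from the 1997-era
data, 130/195 days from the Navy-computed data) come from the report text;
the other cell, the category labels, and the dollar thresholds are invented.
"""

# Matrix keyed by (item type, purchase size, contract type) -> lead time in days.
LEAD_TIME_MATRIX_1997 = {
    ("repairable", "medium", "sole-source"): 200,
    ("repairable", "large", "sole-source"): 280,
    ("consumable", "small", "competitive"): 90,   # hypothetical cell
}

LEAD_TIME_MATRIX_CURRENT = {
    ("repairable", "medium", "sole-source"): 130,
    ("repairable", "large", "sole-source"): 195,
    ("consumable", "small", "competitive"): 75,   # hypothetical cell
}


def purchase_size(dollar_value: float) -> str:
    """Bucket a planned purchase into small/medium/large; thresholds are hypothetical."""
    if dollar_value < 100_000:
        return "small"
    if dollar_value < 1_000_000:
        return "medium"
    return "large"


def administrative_lead_time(item_type: str, dollar_value: float,
                             contract_type: str, matrix: dict) -> int:
    """Look up the projected administrative lead time for a planned purchase."""
    return matrix[(item_type, purchase_size(dollar_value), contract_type)]


if __name__ == "__main__":
    old = administrative_lead_time("repairable", 500_000, "sole-source",
                                   LEAD_TIME_MATRIX_1997)
    new = administrative_lead_time("repairable", 500_000, "sole-source",
                                   LEAD_TIME_MATRIX_CURRENT)
    print(f"Projected lead time: {old} days with 1997-era data, "
          f"{new} days with current data")
```

Because the projected lead time feeds directly into the lead-time portion of the requirements objective, using the older, longer cell values inflates the quantity of inventory an item manager believes must be on hand or on order.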
To improve the accuracy of the Navy's secondary inventory requirements, we recommend that the Secretary of Defense direct the Secretary of the Navy to require the Commander, Naval Supply Systems Command, to require its inventory managers to use the most current data available for computing administrative lead time requirements. In commenting on a draft of the report, DOD generally concurred with the report. With regard to our recommendation, DOD noted that item managers use the most current data available to manually compute administrative lead time requirements when making management decisions for individual items and that in March 2003, the Navy formally updated its automated inventory system to begin using the most current data available to compute administrative lead time requirements for all items. This action to update the Navy's automated inventory system responds to our recommendation. DOD's comments can be found in appendix V. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the issue date. At that time, we will send copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Director, Defense Logistics Agency; the Director, Office of Management and Budget; and other interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/. Please contact me at (202) 512-8365 if you or your staff have any questions concerning this report. Staff acknowledgments are listed in appendix VI. To identify changes in the Department of Defense's (DOD) on-hand and on-order inventories for fiscal years 1996 through 2001, we used data developed in prior reviews and inventory stratification reports. We analyzed on-hand and on-order inventories as they related to the military components' requirements objectives. We held meetings to discuss these observations with officials from the Army Materiel Command, Alexandria, Virginia; the Naval Supply Systems Command, Mechanicsburg, Pennsylvania; the Air Force Materiel Command, Dayton, Ohio; and the headquarters of the Defense Logistics Agency, Alexandria, Virginia. To determine the number of items that had more than or less than enough inventory to satisfy requirements, we obtained computerized inventory records from the military components as of September 30, 2001, the most recent end of fiscal year data at the time we began our examination. We did not test the reliability of the data. We used the computerized records to compare on-hand and on-order inventories to requirements on an item-by-item basis to determine if items had sufficient inventory available to satisfy requirements. DOD reported that its secondary inventory was valued at $63.3 billion in its September 30, 2001, Supply System Inventory Report. For our analyses, we used inventory stratification files and reports. We did not revalue the inventory that needs to be repaired to recognize the repair cost, and we did not value inventory that is to be disposed of at salvage prices. Also, our analyses did not include fuel, certain inventories held by units, and Marine Corps inventory. Fuel and inventories held by units are not stratified by requirement, and the Marine Corps inventory represents a small part of the universe.
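The item-by-item comparison described in the scope and methodology above can be sketched as a simple classification over stratification records. The record layout, field names, and sample values below are hypothetical; the components' actual stratification files contain many more data elements.

```python
"""Minimal sketch of an item-by-item comparison of inventory to requirements.

The record layout and sample values are hypothetical; they illustrate only the
classification logic, not the structure of the components' stratification files.
"""

from dataclasses import dataclass


@dataclass
class ItemRecord:
    item_id: str
    on_hand_value: float       # dollar value of on-hand inventory
    on_order_value: float      # dollar value of on-order inventory
    requirements_value: float  # dollar value of the requirements objective


def classify(item: ItemRecord) -> str:
    """Label an item as exceeding, meeting, or falling short of requirements."""
    available = item.on_hand_value + item.on_order_value
    if available > item.requirements_value:
        return "exceeds"
    if available < item.requirements_value:
        return "short"
    return "meets"


def summarize(items: list[ItemRecord]) -> dict:
    """Tally item counts plus excess and shortfall dollars across all items."""
    totals = {"exceeds": 0, "short": 0, "meets": 0,
              "excess_value": 0.0, "shortfall_value": 0.0}
    for item in items:
        label = classify(item)
        totals[label] += 1
        available = item.on_hand_value + item.on_order_value
        if label == "exceeds":
            totals["excess_value"] += available - item.requirements_value
        elif label == "short":
            totals["shortfall_value"] += item.requirements_value - available
    return totals


if __name__ == "__main__":
    sample = [
        ItemRecord("A123", 120_000, 30_000, 100_000),  # exceeds by $50,000
        ItemRecord("B456", 40_000, 10_000, 90_000),    # short by $40,000
        ItemRecord("C789", 25_000, 0, 25_000),         # exactly meets
    ]
    print(summarize(sample))
```

Tallies of this kind, run across millions of item records, underlie the counts of items exceeding, meeting, or falling short of requirements reported earlier in this section.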
To ascertain the causes for increases in inventory requirements, we compared September 30, 1999, inventory requirements to September 30, 2001, inventory requirements for the military components. Because the Navy had the largest dollar increase in requirements, we analyzed the Navy requirements in more detail. For items that the Navy managed in both 1999 and 2001, we compared the requirements to determine if the requirements increased, stayed the same, or decreased. We selected 90 items for detailed review based on how much their requirements increased between 1999 and 2001. The 90 items accounted for about $1.1 billion of the Navy’s $4.7 billion increase in requirements. We met with appropriate personnel from the Philadelphia and Mechanicsburg, Pennsylvania, offices of the Naval Inventory Control Point to identify the specific reasons for the items’ increase in requirements. DOD’s reports on spare parts spending—called Exhibit OP-31, Spares and Repair Parts, and submitted as part of the President’s annual budget submission—do not provide an accurate and complete picture of spare parts funding as required by financial management regulation. As a result, the reports do not provide Congress with reasonable assurance about the amount of funds being spent on spare parts. Furthermore, the reports did not always contain actual expenditure data: all of the Army’s annual operations and maintenance appropriations data and most of the services’ commodity amounts were shown as estimates. Without actual data, the reports are of limited use to Congress as it makes decisions on how best to spend resources to reduce spare parts shortages and improve military readiness. We recommended that the Secretary of Defense: issue additional guidance on how the services are to identify, compile, and report on actual and complete spare parts spending information, including supplemental funding, in total and by commodity, as specified by Exhibit OP-31; and direct the Secretaries of the military departments to comply with Exhibit OP-31 reporting guidance to ensure that complete information is provided to Congress on the quantities of spare parts purchased and explanations of deviations between programmed and actual spending. The Army, in its approach for assessing wartime spare parts industrial base capability, still does not use current data from industry. Instead, the Army uses historical parts procurement data because its prior efforts to collect current data from industry were not successful due to poor response rates. The Army’s assessments depend on historical data and resulting lead-time factors to project industry’s contribution to satisfying wartime spare parts requirements. Without current data on industry’s capability, assessments could be unreliable, resulting in reduced readiness due to critical spare parts shortfalls in wartime or inflated and costly war reserve spare parts inventories in peacetime. Moreover, the Army’s budget requests to Congress for war reserve spare parts risk being inaccurate. We identified a program in the Defense Logistics Agency that has several attributes reflecting sound management practices that are required for reliable industrial base capability assessments. Our analysis of the approach used by the Army compared to the Defense Logistics Agency’s spare parts industrial base assessment program revealed that the Army’s approach can be improved in three areas—data collection, data analysis, and management strategies. 
We recommended that the Secretary of Defense direct the Army to: establish an overarching industrial base capability assessment process that considers the attributes in this report; develop a method to efficiently collect current industrial base capability data directly from industry itself; create analytical tools that identify potential production capability problems such as those due to a surge in wartime spare parts demand; and create management strategies for resolving spare parts availability problems, for example, by changing acquisition procedures or by targeting investments in material and technology resources to reduce production lead times. We reported that Air Force and contractor personnel had largely not complied with DOD and Air Force inventory control procedures designed to safeguard material shipped to contractors, placing items worth billions of dollars at risk of fraud, waste, and abuse. We recommended that the Secretary of Defense direct the Air Force to: Improve processes for providing contractor access to government-furnished material by listing specific stock numbers and quantities of material in repair contracts (as they are modified or newly written) that the inventory control points have agreed to furnish contractors; demonstrating that automated internal control systems for loading and screening stock numbers and quantities against contractor requisitions perform as designed; loading stock numbers and quantities that the inventory control points have agreed to furnish to contractors into the control systems manually until the automated systems have been shown to perform as designed; and requiring that waivers to loading stock numbers and quantities manually are adequately justified and documented based on cost-effective and/or mission-critical needs. Revise Air Force supply procedures to include explicit responsibility and accountability for generating quarterly reports of all shipments of Air Force material to contractors and distributing the reports to Defense Contract Management Agency property administrators. Determine, for the contractors in our review, what actions are needed to correct problems in posting material receipts. Determine, for the contractors in our review, what actions are needed to correct problems in reporting shipment discrepancies. Establish interim procedures to reconcile records of material shipped to contractors with records of material received by them, until the Air Force completes the transition to its Commercial Asset Visibility system in fiscal year 2004. Comply with existing procedures to request, collect, and analyze contractor shipment discrepancy data to reduce the vulnerability of shipped inventory to undetected loss, misplacement, or theft. All the military services extensively use cannibalization—that is, removing serviceable parts from one piece of equipment and installing them in another—as a routine aircraft maintenance practice. In fiscal years 1996 through 2000, the Navy and the Air Force reported about 850,000 cannibalizations, requiring about 5.3 million additional maintenance hours. Cannibalizations have several adverse impacts. They increase maintenance costs by increasing mechanics' workloads, affect morale and personnel retention, and sometimes take expensive aircraft out of service for long periods of time. Cannibalizations can also create additional mechanical problems. The services have many reasons for cannibalizing aircraft and strong incentives for continuing to do so.
However, with the exception of the Navy, they do not consistently track the specific reasons for cannibalizations. As a result, much of the information on causes is anecdotal. In the broadest sense, cannibalizations are done because of pressures to meet readiness and operational needs and because of shortcomings in the supply system. We recommended that the Secretary of Defense direct the Army, the Navy and the Air Force to take the following actions: Establish standardized, comprehensive, and reliable cannibalization data- collection procedures and systems for cannibalizations. Measure and report the number of maintenance hours associated with cannibalizations. Develop strategies to reduce the number of maintenance hours spent on cannibalization, ensure that cannibalized aircraft do not remain grounded for long periods of time, and reduce the adverse effects of cannibalizations on maintenance costs and personnel. At a minimum, the strategies should include criteria to determine (1) which cannibalizations are appropriate, (2) cannibalization-reduction goals, and (3) the actions to be taken to meet those goals. The services must assign responsibility for ensuring that goals are being met and allocate resources for this purpose. The Navy’s Product Quality Deficiency Reporting Program has been largely ineffective in gathering the data needed for analyses so that Navy managers can determine the full extent of spare parts quality deficiencies affecting maintenance activities. Without these data, managers lose opportunities to initiate important corrective and preventive action with parts and suppliers. We recommended that the Secretary of Defense direct the Secretary of the Navy to: increase the program’s levels of (1) training, describing what quality deficiencies to report, how to report them, and why it is important to the Navy; (2) incentives, including financial credits back to the reporting unit where appropriate to encourage participation; (3) automation support, to simplify and streamline reporting and analysis; and (4) management emphasis provided to the program, as necessary, to determine the causes, trends, and responsibilities for parts failures and achieve greater compliance with joint-service requirements, including reporting on parts that fail before the end of their design life; and require program officials to measure and periodically report to the appropriate Defense and Navy managers the results of the program in such areas as actions taken to correct parts quality deficiencies, prevent recurrences, and obtain credits or reimbursements from suppliers for deficient products. Spare parts shortages for the EA-6B and the F-14 aircraft adversely impacted the Navy’s readiness to perform assigned missions and the economy and efficiency of its maintenance activities. The shortages also contributed to problems retaining personnel. The primary reasons for spare parts shortages were that more parts were required than the Navy originally anticipated and problems in identifying, qualifying, or contracting with a private company to produce or repair the parts. We did not make any recommendations in this report because of our prior recommendations on improving the Navy’s management framework for implementing commercial practices and DOD’s efforts to develop an overarching integration plan. 
Aviation spare parts shortages for the Apache, Blackhawk, and Chinook helicopters adversely affected operations and led to inefficient maintenance practices that have lowered morale of maintenance personnel. Specifically, while the helicopters generally met their mission- capable goals, indicating that parts shortages have not affected their mission capability, supply availability rates and cannibalization of parts from one aircraft to another indicate that spare parts shortages have indeed been a problem. The reasons for the unavailability of the 90 parts we reviewed included actual demands for parts that were greater than anticipated, delays in obtaining parts from a contractor, and problems concerning overhaul and maintenance. For example, because a cracked gear in a Chinook transmission was discovered during an overhaul, the entire fleet was grounded in August 1999. As a result, the demand for the part has been much greater than anticipated. The Army and the Defense Logistics Agency have initiatives under way or planned that are designed to improve the availability of aviation parts. The initiatives generally address the reasons we identified for spare parts shortages. Additionally, the Army has developed a Strategic Logistics Plan that is designed to change its current approach to one that is more effective, efficient, and responsive. The plan’s initiatives for resolving spare parts shortages are linked to the asset management process under the Army’s planned change in approach. Some of these initiatives are new or in the planning stages. Once the initiatives are more fully developed, we plan to review them to determine whether there are opportunities to enhance them. Because we previously reported problems with the way the Army has implemented its logistics initiatives and recommended that it develop a management framework for its initiatives, to include a comprehensive strategy and performance plan, we did not make recommendations in this report. Spare parts shortages on the E-3 and C-5 aircraft and F-100-220 engines have adversely affected the performance of assigned missions and the economy and efficiency of maintenance activities. Specifically, the Air Force did not meet its mission-capable goals for the E-3 or C-5 during fiscal years 1996-2000, nor did it meet its goal to have enough F-100-220 engines to meet peacetime and wartime goals during that period. The majority of reasons cited by item managers at the maintenance facilities for spare parts shortages were most often related to more spares being required than were anticipated by the inventory management system and delays in the Air Force’s repair process as a result of the consolidation of repair facilities. Other reasons included (1) difficulties with producing or repairing parts, (2) reliability of spare parts, and (3) contracting issues. The Air Force and the Defense Logistics Agency have numerous overall initiatives under way or planned that may alleviate shortages of the spare parts for the three aircraft systems we reviewed. The initiatives generally address the reasons we identified for the shortages. To ensure that the initiatives are achieving the goals of increasing efficiencies in the supply system, the Air Force has developed a Supply Strategic Plan that contains specific goals and outcome-oriented measures for the initiatives. Because the Air Force’s plan is in keeping with our previous recommendations to improve overall logistics planning, we did not make recommendations in this report. 
We will separately review the overall approach and initiatives, once they are more fully developed, to determine whether there are opportunities to enhance these efforts. DOD’s components do not have sound analytical support for determining when it is economical to retain or dispose of the $9.4 billion in inventory the department is holding for economic reasons. The components’ decision-making approaches for retaining economic retention inventory have evolved from the use of economic models to the use of judgmentally determined levels. In addition, the department did not have sound analytical support for the maximum levels they selected. Also, although the department requires annual reviews of the analyses supporting economic retention decisions, the components have generally not done such reviews. As a result of these weaknesses, the department is vulnerable to retaining some items when it is uneconomical to do so and disposing of others when it is economical to retain them. We recommended that the Secretary of Defense direct the Secretaries of the Army, the Navy, and Air Force and the Director of the Defense Logistics Agency to: establish milestones for reviewing current and recently used approaches for making decisions on whether to hold or dispose of economic retention inventory to identify actions needed to develop and implement appropriate approaches to economic retention decisions; and annually review their approaches to meet department regulations to ensure that they have sound support for determining economic retention inventory levels. In the October-December 2000 time frame, the Army reported that it had about 35 percent of its prepositioned spare parts on hand and a $1-billion shortfall in required spare parts for its war reserves. Notwithstanding the reported shortages, we identified uncertainties about the accuracy of the Army’s requirements. For example, we identified a potential mismatch between the Army’s methodology for determining parts requirements and the Army’s planned battlefield maintenance practices. We recommended that the Secretary of Defense: Assess the priority and level of risk associated with the Army’s plans for addressing the reported shortfall in Army war reserve spare parts. Direct the Army to provide accurate calculations of the Army’s war reserve spare parts requirements by developing and using the best available consumption factors in calculating all spare parts requirements for the Army’s war reserves; eliminating potential mismatches in how the Army calculates its war reserve spare parts requirements and the Army’s planned battlefield maintenance practices; and developing fact-based estimates of industrial base capacity to provide the needed spare parts in the two major theater war scenarios time frames. Include in future industrial capabilities reports more comprehensive assessments on industry’s ability to supply critical spare parts for two major theater wars. Requirements for the 490 items we reviewed often changed after the orders were placed, which caused the items to exceed requirements. Further, because of inaccurate inventory records, 182 of the 490 items (valued at $170 million) were reported as excess, but were not actually excess to requirements. Because of the large number of inaccurate records, neither DOD nor the military components know whether managers are efficiently focusing their efforts to cancel excess inventory on order, and the department does not have an accurate view of the total value of its excess inventory on order. 
Each component's process for canceling orders that exceeded requirements differs and cannot be relied on to consistently identify orders to be considered for cancellation or to terminate orders when economical. The components use different criteria for the amount of excess inventory on order they consider for cancellation. Only the Defense Logistics Agency consistently uses its computer model to determine whether it is more economical to cancel orders or not. However, of the $696 million its model referred for consideration during a 3-month period in 1999, less than $11 million in orders were canceled. The military components' frequency in reviewing orders of excess inventory for cancellation ranges from monthly to quarterly. The longer components wait to consider an item for cancellation, the less likely cancellation will be cost-effective because they have to pay the contractor for costs incurred until the order is canceled. The components' goals for reducing excess inventory on order vary and are not comparable. Thus, the department cannot evaluate the components' progress in reducing excess inventory on order in a consistent way. We recommended that the Secretary of Defense, in conjunction with the Secretaries of the Army, the Navy, and the Air Force, and the Director of the Defense Logistics Agency review and improve the processes for identifying and canceling orders, focusing on areas such as the accuracy of inventory management records; the level at which the services and the Defense Logistics Agency identify excess inventory on order that is subject to cancellation review, including low-dollar excess inventory on order that is excluded from cancellation review; the timeliness and frequency of reviews for identifying excess items on order; and the validity and use of the military components' termination models in making economic analyses. We also recommended that the Secretary of Defense require the Secretaries of the Army, the Navy, and the Air Force, and the Director of the Defense Logistics Agency to report on the amount of all excess inventory on order, identifying inventory on order that exceeds both the requirements objective and the approved acquisition objective. (Table notes from appendix III: the Army is the only component that uses a retail-level reporting requirement, which it began using in fiscal year 2000, so comparable fiscal year 1999 data were not available for that requirement; differences are due to rounding.) Increased usage resulted in requirements increasing by $294 million for 46 items. Usage of the items increased for a variety of reasons, including increased recurring demand for items, defective parts needing to be replaced, demands being received for items that are not normally stocked, increases in the number of ships or aircraft using items, items reaching the end of their useful life, unplanned foreign military sales, usage shifting from other items, items wearing out at a faster rate than expected, and items being new to the inventory system. For example, unfilled requisitions, safety level, repair cycle, and production and administrative lead time requirements for the hub used on the AH-1W (Cobra) helicopter increased from 24 on September 30, 1999, to 48 on September 30, 2001.
During that time, many of the hubs reached the end of their 1,100-hour life and had to be replaced. As a result, demand for the $275,000 hub increased from 31 a year in 1999 to 74 a year in 2001. Changes in stock, overhaul, or operational policies resulted in requirements increases of $126 million for 36 items. For example, repair cycle requirements for a radio transmitter modulator increased from 10 in September 1999 to 22 in September 2001. The increase was a result of the Navy requiring that the transmitter modulator, valued at $136,000 each, be operational 100 percent of the time. Previously, ships were permitted to operate in a degraded status with the modulator not operational. Source and repair issues for 29 items resulted in requirements increases of $137 million. A wide variety of reasons fell into this category, including entering requirements for an item that would no longer be available to provide support for a weapon system for its remaining life, difficulties in identifying a commercial source for an item, unavailability of material needed to manufacture items, and increased time needed to repair or buy an item. For example, economic order quantity requirements for a data module used in a submarine control panel increased from 75 in September 1999 to 410 in September 2001. The item manager explained that the manufacturing source of supply for the data module was being lost, and the requirement was increased to protect the 419 on-hand modules from being subject to disposal. In August 2002, the Navy had 229 of the $10,000 modules on hand. Uncertainty of demand, lead time, and the rate at which items wear out for 22 items resulted in safety level requirements increasing by $72 million. Safety level requirements are intended to compensate for unplanned increases in demand, lead times, and the rate at which items wear out. For example, the safety level requirement for an inertial navigational unit used on several aircraft such as the AV-8B, the F-14D, and several versions of the F-18 increased from 2 in September 1999 to 15 in September 2001. The increased requirement was the result of demands for the $170,000 unit increasing from 155 to 205 a year. Requirements increases, valued at $98 million, were not valid for seven items. The reasons for the invalid requirements included overstating the 2001 requirement, understating the 1999 requirement, and inappropriately recording nonrecurring requirements. For example, the September 2001 requirements requiring replacement for an electron tube for a transmitter used on the EA-6B aircraft were overstated because the requirements were inappropriately based on demand for the tube instead of the rate at which the tube was failing and needed to be replaced. As a result, safety level, repair cycle, administrative and production lead times and economic order quantity requirements were overstated by 2,124 tubes for the $57,500 item. Data anomalies for two items resulted in a requirement increase of $2 million. For both of the items, requirements increased for unfilled requisitions. The item manager for the items explained that the items’ requirements, as of September 30, 2001, reflected back orders as of that date and that the back orders were not the result of any particular reason—just the status as of that date. The item manager explained that the back orders went away when material was shipped a few days after September 30th. Key contributors to this report were Lawson Gist, Jr., Louis Modliszewski, David Epstein, and R.K. Wild. 
Changes in the Department of Defense's (DOD) mission can lead to changes in inventory requirements, which, in turn, determine the size of DOD's inventory. Since 1990, GAO has identified DOD's management of inventory as a high-risk area because levels of inventory were too high and management systems and procedures were ineffective. Furthermore, DOD has attributed readiness problems to parts shortages. In this report, GAO (1) provides information on changes in and make up of the department's inventory and (2) analyzes changes in inventory requirements, focusing on the Navy. DOD reported a $5.6 billion increase in inventory on hand and a $1.7 billion increase in inventory on order between September 30, 1999, and September 30, 2001. The reported inventory increases were primarily due to the Navy reporting aviation parts held by ships and air squadrons that were previously not reported and to overall DOD inventory requirements increases. In addition, GAO identified large imbalances in the department's inventory; as of September 30, 2001, over 1.7 million items had $38 billion of inventory that exceeded the items' current inventory operating requirements of $24.9 billion. At the same time, there were 523,000 items that needed an additional $10.4 billion of inventory to meet the items' current inventory operating requirements. Generally, inventory increases are the result of increases in inventory requirements. DOD's overall inventory requirements increased by $10.6 billion, or 26 percent, between the end of fiscal years 1999 and 2001, with some of the Navy's requirements being overstated. The Navy was responsible for the largest dollar increase, $4.7 billion of the $10.6 billion increase. A large part of the Navy increase, $3.4 billion, was attributable to a change in the way the Navy accounted for aviation parts held by ships and air squadrons. The remaining Navy increase was attributable to a variety of reasons, such as price increases; increased demand and item wear-out rates; and, in some cases, inaccurate data. Also, since 1997 the Navy has reduced the amount of administrative lead time it takes to place inventory orders (the period between when the need to replenish an item through a purchase is identified and when a contract is let), yet it has not formally updated the data used to compute those requirements. For example, the Navy reduced the administrative lead time for medium-sized sole-source contracts for repairable items from 200 days to 130 days, but it did not recognize the reduction in its requirements computations. As a result, those requirements are inaccurate and overstated.
This section presents information on (1) the ways in which pharmaceuticals may enter drinking water, (2) pharmaceuticals in drinking water as a contaminant of emerging concern, (3) the degree to which relevant environmental statutes regulate pharmaceuticals, and (4) the establishment of the PiE workgroup. Scientists have identified numerous pathways by which pharmaceuticals may enter the environment and ultimately drinking water supplies. According to USGS scientists, the main source of human pharmaceuticals in the environment is likely treated wastewater from households, industry, and commercial facilities. Biosolids from wastewater treatment plants applied to land as fertilizer may also be a source of human pharmaceuticals in the environment. Septic systems may be a source of human pharmaceuticals in ground water. A potential source of veterinary pharmaceuticals is agricultural facilities where large numbers of food-producing animals (such as chickens, cattle, and swine) are treated with pharmaceuticals. The pharmaceuticals enter the environment either directly from waste storage structures as a result of accidents or weather conditions, or through the application of manure and liquid waste to croplands. Figure 1 illustrates the different pathways by which pharmaceuticals may enter drinking water supplies. EPA considers pharmaceuticals in drinking water to be a contaminant of emerging concern (also called emerging contaminants). The term is not defined in regulation, and EPA does not maintain a list of contaminants that are considered contaminants of emerging concern. In this report, the term refers to a wide range of contaminants for which the risk to human health and the environment associated with their presence, frequency of occurrence, or source may not be known. In some cases, the release of contaminants of emerging concern into the environment has likely occurred for a long time but may not have been recognized until new detection methods were developed. In other cases, the synthesis of new chemicals or changes in the use and disposal of existing chemicals can create new sources of contaminants of emerging concern. Other contaminants of emerging concern can include personal care products (e.g., sunscreen, antibacterial soap, synthetic musks); chemicals used in industry (e.g., flame retardants, stain resistant coatings); and chemicals used in agriculture (e.g., pesticides that may act as endocrine disrupting chemicals (EDC)). Most pharmaceuticals are not currently regulated under EPA programs implementing key environmental laws. SDWA, the Resource Conservation and Recovery Act of 1976 (RCRA) and the Clean Water Act provide EPA with authority to regulate pharmaceuticals meeting certain criteria in drinking water, waste, and wastewater discharges. Under SDWA, EPA is authorized to regulate contaminants, including pharmaceuticals, meeting certain criteria in public drinking water systems. In 1996, Congress amended SDWA to require EPA to select for consideration those unregulated contaminants that present the greatest public health concern, evaluate their occurrence and the potential health risks associated with them, and decide whether a regulation is needed for at least five contaminants every 5 years. 
This regulatory determination process includes EPA’s publication in the Federal Register of a preliminary decision on whether the agency will propose a drinking water regulation for each contaminant evaluated—called a preliminary regulatory determination—and provides for a public comment period, followed by a final decision, or regulatory determination, also published in the Federal Register. The 1996 amendments also require EPA to identify and publish a list every 5 years of unregulated contaminants for drinking water that may require regulation—called the Contaminant Candidate List. The Administrator must decide whether to regulate at least five of the contaminants on the candidate list every 5 years. These decisions are called regulatory determinations. SDWA specifies that EPA is to regulate a contaminant if the Administrator determines that the contaminant may have an adverse effect on the health of persons; the contaminant is known to occur or there is a substantial likelihood that the contaminant will occur in public water systems with a frequency and at levels of public health concern; and in the sole judgment of the Administrator, regulation of such contaminant presents a meaningful opportunity for health risk reduction for persons served by public water systems. Since 1996, EPA has completed two regulatory determination cycles—in 2003 and 2008. During this time, EPA conducted 20 regulatory determinations and found that none met the criteria requiring regulation. In 2011, EPA made an out-of-cycle regulatory determination, concluding that perchlorate, an ingredient in rocket fuel and other products that can interfere with the normal functioning of the thyroid gland, met the criteria requiring regulation. EPA has made no regulatory determinations for pharmaceuticals. EPA published the third candidate list in October 2009 but has not yet made any regulatory determinations or completed the third regulatory determination cycle. To determine which contaminants to include on the third candidate list, EPA developed a multistep process, based on available data, to characterize occurrence and adverse health risks a contaminant may pose to consumers of public water systems. Starting with a list of almost 26,000 unique chemicals, EPA identified a universe of about 6,000 potential drinking water contaminants for consideration based on the availability of occurrence and health effects data. Of these, 287 were pharmaceuticals. Then, using the available data, EPA employed successively more detailed evaluations—as well as expert opinions and comments from the public—to identify the 116 contaminants that it included on the third candidate list—12 of these contaminants are pharmaceuticals. Table 1 identifies the 12 pharmaceuticals. In a May 2011 report, we identified systemic limitations in EPA’s implementation of the 1996 amendments’ requirements for determining whether additional contaminants in public drinking water warrant regulation and made 17 recommendations to EPA for implementing the requirements in a way that better assures the public of safe drinking water. Among other things, we recommended that EPA (1) develop criteria and a process for identifying those contaminants on its candidate list that present the greatest public health concern and (2) develop a coordinated process for obtaining both the occurrence and health effects data that may be needed for the agency to make informed regulatory determinations on these priority contaminants. 
EPA did not agree to adopt these recommendations and generally took the position that no further steps are needed. RCRA established federal requirements and EPA regulatory authority for “cradle-to-grave” management of hazardous wastes, as well as a program for state oversight of nonhazardous solid waste with federal minimum regulations for landfills. RCRA and its implementing regulations establish several means by which waste may be deemed hazardous, including specifically being listed by EPA as a hazardous waste or by exhibiting one of the following four characteristics: toxicity, ignitability, corrosivity, or reactivity. According to EPA’s August 2010 draft guidance and a proposed rule concerning management of hazardous pharmaceutical wastes in the Federal Register, more than 30 active pharmaceutical ingredients are considered listed hazardous wastes under RCRA. In addition, other pharmaceuticals may be considered to be hazardous waste when disposed if they have certain characteristics (e.g., they are likely to leach concentrations of any 1 of 40 different toxic chemicals in amounts above the specified regulatory levels). Examples of these chemicals that have pharmaceutical uses include: arsenic, barium, cadmium, and chloroform. EPA has estimated that about 5 percent of all pharmaceutical waste is hazardous waste. The disposal of pharmaceuticals meeting the RCRA hazardous definition is generally subject to RCRA requirements, such as reporting, using a manifest, and disposing of the waste in approved ways, such as through hazardous waste incineration; however, household trash is exempted. Noting that implementing existing regulations may be difficult for healthcare facilities such as hospitals and nursing homes and that the streamlined requirements would help avoid mismanagement, in 2008 EPA proposed to add hazardous waste pharmaceuticals to the Universal Waste Rule, which simplifies RCRA requirements for certain hazardous wastes. Under the proposed rule, manifests would not be required and other requirements may be simplified. EPA estimated the rule could affect over 600,000 entities. According to EPA’s Web site on the proposed rule, stakeholders commenting on the proposal expressed concerns that including hazardous pharmaceutical wastes under the Universal Waste Rule would eliminate some requirements, such as notification and use of a manifest, that currently apply to such wastes. EPA officials also told us the agency has begun considering additional regulatory options to address these and other issues but that EPA has no projected date for issuing a final rule. The Clean Water Act is the primary federal law concerning pollution of the nation’s waters. Under the act, EPA is required to establish and revise national water quality criteria that accurately reflect the latest scientific knowledge about the effects of pollutants on aquatic life and human health. These criteria represent maximum concentrations that would not cause an unacceptable effect on aquatic life and represent the levels at which specific chemicals are not likely to adversely affect human health. Criteria are elements of state water quality standards, expressed as constituent concentrations, levels, or narrative statements, representing a quality of water that supports a particular use. When criteria are met, water quality will generally protect the designated use. 
States, or in some instances EPA, use these criteria to adopt and revise water quality standards for designated uses—such as drinking, swimming, or fishing— for water bodies. States may use EPA’s national criteria, modify them to site-specific criteria, or adopt other scientifically defensible criteria. States are required, as part of 3-year reviews, to adopt water quality standards for each of the toxic pollutants for which EPA has promulgated water quality criteria. Water quality standards play a critical role in the act’s framework, potentially affecting effluent limitations dictated by permits and requirements for state reporting and pollution control planning. Regarding permits, EPA and delegated states administer the Clean Water Act’s National Pollutant Discharge Elimination System program, which limits the types and amounts of pollutants that industrial and municipal wastewater treatment facilities may discharge into the nation’s surface waters. Facilities such as municipal wastewater treatment plants and pharmaceutical plants require a permit if they discharge into surface waters. Certain agricultural facilities—known as concentrated animal feeding operations—also need a permit, but other agricultural operations do not. EPA and delegated states issue discharge permits that are to set conditions in accordance with technology-based effluent limitations EPA established for various categories of discharges. EPA has issued effluent limitation regulations for pharmaceutical manufacturing facilities as well as pretreatment regulations applicable when these facilities discharge into a publicly owned wastewater treatment plant. These regulations currently do not include limitations for any pharmaceutical constituents in wastewater; rather, the regulations set limitations for conventional pollutants, priority toxic pollutants, and selected nonconventional pollutants—mainly solvents used in manufacturing. Similarly, EPA’s regulation for concentrated animal feeding operations does not contain specific limitations for veterinary pharmaceuticals. At present, EPA has not developed specific water quality criteria under the Clean Water Act for most pharmaceuticals; hence, there are no water quality standards for most pharmaceuticals, and permits do not contain any limitations for them. EPA’s current national criteria include one pollutant identified as being used as a pharmaceutical—lindane. In January 2010, the Center for Biological Diversity, a nonprofit environmental organization, petitioned EPA to revise its water quality criteria for lindane, and to establish water quality criteria for 34 other pharmaceutical and personal care products. EPA told us the agency is considering the petition and expects to issue a response by mid 2011. If EPA were to establish water quality criteria for one or more additional pharmaceuticals, then states would need to adopt water quality standards reflecting the new or revised criteria, and the standards would be considered in permit decisions as well as in states’ water quality management plans. In August 2010, EPA’s Office of Water released a draft guidance document for health care facilities, Best Management Practices for Unused Pharmaceuticals at Health Care Facilities. The nonbinding document recommends management practices, such as methods to reduce the quantity of unused pharmaceuticals, and explains applicable disposal requirements for those pharmaceuticals that are hazardous. EPA’s goal for the guidance document is to keep pharmaceuticals out of U.S. 
waters, particularly by minimizing their disposal into sewers. According to agency officials, EPA expects to issue a final guidance document by the end of 2011. The PiE workgroup was established in 2006 by the Committee on Environment, Natural Resources, and Sustainability (CENRS), Toxics and Risk Subcommittee, an executive branch entity under the National Science and Technology Council (NSTC). NSTC is a council of cabinet-level officials chaired by the President and managed by the Director of OSTP. The purpose of the workgroup was to identify and prioritize research needed to better understand the risk from pharmaceuticals in the environment and to recommend areas for federal collaboration to address those priorities. The workgroup, which was intended to be temporary, was staffed by scientists from eight federal agencies. EPA, FDA, and USGS scientists served as co-chairs. In May 2009, the PiE workgroup produced a draft report, but it was never finalized because of a disagreement between OSTP and the workgroup over what should be included in the final report. Although research has confirmed the presence of pharmaceuticals in drinking water throughout the nation, the full extent of their occurrence is unknown. The human health effects of exposure to these pharmaceuticals are largely unknown, but the effects of some compounds have raised concern among some scientists, the public, and policy makers. Research has detected pharmaceuticals in the nation's drinking water. National and regional studies have generally detected pharmaceuticals in source water, treated drinking water, and treated wastewater, but the full extent of occurrence is unknown. The concentrations detected were measured most frequently in parts per trillion. As part of its Toxic Substances Hydrology Program, USGS conducted four reconnaissance studies that were national in scope (national reconnaissance studies) to study the occurrence and distribution of emerging contaminants, including pharmaceuticals, in the environment. For each study, USGS chose to sample water from locations that it believed were more likely to have pharmaceuticals and other contaminants present. One study specifically focused on untreated source water used by public drinking water systems. For example, samples were collected from wells and near surface water intakes that supplied the water systems. For this study, USGS collected water samples from 74 locations in 25 states and Puerto Rico in 2001. These locations provide drinking water to populations ranging from one family to over 8 million. The study reported testing for the presence of 100 contaminants, including 36 pharmaceuticals. USGS found that 53 of the 74 locations had one or more pharmaceuticals in the water, and 40 percent of the pharmaceuticals analyzed were detected at one or more of these locations. Figure 2 shows the location of sample sites and the sites at which USGS detected pharmaceuticals. Figure 3 shows the pharmaceuticals that USGS reported detecting in its study of untreated sources of public drinking water. The other three national reconnaissance studies that USGS conducted focused on (1) surface water, (2) ground water, and (3) stream sedimentation. The four USGS national reconnaissance studies tested for a similar, but not identical, suite of pharmaceuticals and other contaminants, and all of the studies reported detecting pharmaceuticals and other contaminants.
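The two summary figures reported for the 2001 source water study, the share of sampled locations with at least one pharmaceutical detected and the share of tested pharmaceuticals detected anywhere, can be derived from raw detection records as in the sketch below. The detection records shown are hypothetical placeholders rather than USGS data; only the 74-location and 36-pharmaceutical totals come from the study described above.

```python
# Hypothetical detection records, as (location, pharmaceutical) pairs, standing in
# for laboratory results; only the sample-frame totals below come from the study.
detections = [
    ("site_01", "carbamazepine"),
    ("site_01", "sulfamethoxazole"),
    ("site_07", "carbamazepine"),
    ("site_12", "ibuprofen"),
]
LOCATIONS_SAMPLED = 74        # locations in 25 states and Puerto Rico
PHARMACEUTICALS_TESTED = 36   # pharmaceuticals among the 100 contaminants analyzed

locations_with_detections = {site for site, _ in detections}
pharmaceuticals_detected = {drug for _, drug in detections}

print(f"{len(locations_with_detections)} of {LOCATIONS_SAMPLED} locations had at least "
      f"one pharmaceutical ({len(locations_with_detections) / LOCATIONS_SAMPLED:.0%})")
print(f"{len(pharmaceuticals_detected) / PHARMACEUTICALS_TESTED:.0%} of tested "
      "pharmaceuticals were detected at one or more locations")

# With the counts USGS reported (53 of 74 locations, and roughly 40 percent of the
# 36 pharmaceuticals, or about 14 compounds), these ratios come to roughly 72 percent
# and 40 percent. Concentrations reported in parts per trillion correspond to roughly
# nanograms per liter of water.
```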
In addition to its national studies, USGS has undertaken a number of local and regional studies as part of its reconnaissance effort to provide information on the sources, occurrence, and transport of contaminants of emerging concern, including pharmaceuticals. These studies have reported similar results—finding pharmaceuticals in source water. For example, in a 2009 study, USGS, in cooperation with the Oregon Department of Environmental Quality and Deschutes County Environmental Health Division, collected and analyzed water samples from ground water near La Pine, Oregon. The study reported detecting 8 of the 18 pharmaceuticals for which it tested. The study also reported testing for and finding other contaminants. In addition to USGS, other research groups have conducted studies to detect pharmaceuticals and other contaminants in source water, with results that are similar to those of USGS. Specifically: The New York City Department of Environmental Protection reported finding pharmaceuticals and personal care products in the low part-per-trillion range in a 2010 study of the Catskill, Croton, and Delaware untreated source waters that contributed to New York City's water supply. The National Water Research Institute funded a study testing for 50 contaminants such as pharmaceuticals and organic wastewater contaminants in three watersheds supplying drinking water to more than 25 million people in California. The study analyzed 126 samples taken from 32 locations at various points in the watershed, including upstream and downstream from wastewater treatment plant discharges over a 1-year period, from April 2008 through April 2009. Overall, at least 1 contaminant was found in all but one of the samples. The study further reported that concentrations of contaminants were higher downstream of the wastewater treatment plants and concluded that the plant discharges were likely the main source of these contaminants in the environment. Although USGS studies have focused on source water, other studies have detected pharmaceuticals and other emerging contaminants in treated drinking water. For example: A 2008 study funded by the American Water Works Association Research Foundation and the WateReuse Foundation tested for 51 potential contaminants including 20 pharmaceuticals and pharmaceutical metabolites in drinking water in 19 drinking water treatment plants across the United States. The study reported detecting 9 of the 20 pharmaceuticals and metabolites at all of the locations tested. These plants provide drinking water for over 28 million Americans. EPA funded a 2010 meta-analysis of 48 publications and found that 54 active pharmaceutical ingredients and 10 metabolites have been detected in treated drinking water. The analysis notes that of the 64 substances that have been detected, only 36 have corroborative data from at least a second study. In addition to source and treated drinking water, USGS and others have tested the effluent of wastewater treatment plants and animal feeding operations, two sources that are thought to be significant contributors of contaminants to streams and other sources of drinking water. Specifically: Treated wastewater. A 2005 study by USGS and EPA collected water samples upstream and downstream of wastewater treatment plants at 10 different locations totaling 40 sampling sites across the United States. The agency tested for the presence of 110 chemicals, including industrial wastewater compounds and pharmaceuticals and related chemicals.
Specifically, the study reported finding nonprescription pharmaceuticals in over 40 percent of the samples; prescription, nonantibiotic pharmaceuticals in over 30 percent of samples; and antibiotics in fewer than 10 percent of all samples. The study's results demonstrated an increase in the frequency of detection and concentration of most of the pharmaceuticals, and other chemical compounds, in the treatment plants' effluent as compared to water samples collected upstream of these plants; however, the chemical concentrations and occurrences decreased downstream from the treatment plants. Animal feeding operations. A study published in 2002 reported finding concentrations of antimicrobial agents in surface and ground water near large-scale poultry and swine farms, and concluded that animal waste likely acted as a source for antimicrobial residues in nearby water resources. Specifically, the study noted that livestock receive antimicrobials both in therapeutic and nontherapeutic doses (i.e., in their feed), and that these compounds can be excreted into the environment. Pharmaceutical manufacturing facilities. A 2010 USGS study of emerging contaminants in wastewater treatment plant effluents found that wastewater treatment plants that receive discharge from pharmaceutical manufacturing facilities had 10 to 1,000 times higher concentrations of pharmaceuticals (including opioids, muscle relaxants, and a barbiturate) than typically found in wastewater effluents. Maximum concentrations of some pharmaceuticals were in the part per million range. Research has not determined the human health effects of exposure to pharmaceuticals in drinking water. However, some research has demonstrated the potential impact on human health from exposure to some pharmaceuticals found in drinking water, such as EDCs and antibiotics. Uncertainty persists regarding whether pharmaceuticals in drinking water pose a risk to human health, and research has pointed to different conclusions. For example, in its April 2008 testimony before the Senate Committee on Environment and Public Works, the Pharmaceutical Research and Manufacturers of America, a trade association for the leading research-based pharmaceutical and biotechnology companies, cited a peer-reviewed study for which it provided financial support that concluded there was no demonstrable health risk from exposure to 26 pharmaceuticals detected by USGS in one of its national reconnaissance studies. The study reached its conclusions by comparing an estimate of human exposure from drinking water and/or ingesting fish for each pharmaceutical to the acceptable daily intake (ADI) for that pharmaceutical. ADI is an estimate of the daily amount of pharmaceuticals that can be ingested by a healthy adult of normal weight and that should not result in an adverse health effect. In this instance, the ADI was derived from data developed by pharmaceutical manufacturers when testing the effectiveness and safety of a therapeutic dose of the pharmaceutical. Other research has emphasized the absence of data and lack of knowledge regarding the health effects of pharmaceuticals in the environment. For example, research funded by EPA notes that risk assessments based on benchmarks such as ADIs generally conclude that there is negligible risk from exposure to pharmaceuticals through drinking water but that benchmark levels such as ADI are orders of magnitude higher than the exposure levels and may not take into account less obvious, nontherapeutic effects.
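The comparison underlying the industry-supported study, an estimated daily intake from drinking water set against an acceptable daily intake, can be sketched as follows. The concentration, water consumption rate, and ADI used here are hypothetical assumptions rather than figures from that study; the point is only to show why detections at parts-per-trillion levels typically fall several orders of magnitude below benchmarks derived from therapeutic doses.

```python
# Hypothetical margin-of-exposure calculation; the concentration, consumption
# rate, and ADI below are illustrative assumptions, not values from the cited study.
NG_PER_MG = 1_000_000

def daily_intake_mg(concentration_ng_per_liter, liters_per_day=2.0):
    """Estimated intake from drinking water, in milligrams per day."""
    return concentration_ng_per_liter * liters_per_day / NG_PER_MG

CONCENTRATION_NG_PER_LITER = 10.0   # 10 ng/L, roughly 10 parts per trillion (assumed)
ADI_MG_PER_DAY = 3.0                # acceptable daily intake from therapeutic data (assumed)

intake = daily_intake_mg(CONCENTRATION_NG_PER_LITER)
margin = ADI_MG_PER_DAY / intake

print(f"Estimated intake from drinking water: {intake:.6f} mg/day")
print(f"Margin between the assumed ADI and the estimated intake: about {margin:,.0f}-fold")

# A margin in the hundreds of thousands is why such comparisons conclude
# "negligible risk"; it does not address the long-term, low-dose, and mixture
# questions raised in the EPA-funded research.
```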
The EPA-funded research further notes that despite the lack of empirical data linking pharmaceuticals in drinking water to adverse human health effects, the issue remains one of interest because of the unanswered questions concerning low-dose exposure to contaminants of emerging concern, including but not limited to pharmaceuticals. Some of the most significant unanswered questions identified in the research are: What is the potential for biological effects of long-term, low-dose exposure to pharmaceuticals, including for sensitive subpopulations such as children and in utero exposure? What are the effects of mixtures of pharmaceuticals, both additive and interactive? How do pharmaceuticals interact with the many other contaminants—both man-made and naturally occurring—that may be present in drinking water? Are there transgenerational effects (i.e., present in successive generations)? The human health effects of pharmaceuticals in drinking water have not been conclusively shown, but research showing an impact on aquatic life raises concerns about two classes of pharmaceuticals—EDCs and antibiotics. Some of the concern about EDCs in drinking water stems from studies that have documented the abnormalities associated with aquatic life exposed to EDCs in rivers and lakes. Specifically, scientists have expressed concern because of both the significance of the abnormalities and the effects of contaminants on animals, which can be indicative of similar effects on humans. For example: A 2007 study reported that 75 percent of male smallmouth bass in certain areas of the South Branch of the Potomac River basin had ovarian tissue in their gonads. The study concluded that a combination of EDCs was likely to have caused the feminization of the male fish. Although the authors note that the actual EDCs responsible for the abnormalities could not be determined, they suggest that a combination of contaminants could be the cause and noted that the additive effects of many EDCs have been demonstrated even when each compound present is below the threshold of detectable effects. The authors further noted that reproductive abnormalities in fish are frequently associated with human wastewater effluent, which contains synthetic estrogens found in birth control and hormone replacement medications. In another 2007 study by EPA and the Canadian government, researchers reported conducting a 7-year whole-lake experiment to test the effects on fathead minnows of chronic exposure to a synthetic estrogen used in some birth control pills. The researchers reported a collapse in the population of fathead minnows in the experimental lake and concluded that the study's results demonstrate that the continued introduction of estrogens and estrogen mimics to the aquatic environment through municipal wastewaters could decrease the reproductive success and sustainability of fish populations. According to a 2004 research study, fish exposed to effluent from a cattle feedlot in Nebraska experienced reproductive abnormalities, including reduced testes size in male fish and a lower level of estrogen in female fish. The study reported the use of androgens in growth implants in the feedlot as one possible cause of the abnormalities. Not all EDCs found in drinking water, however, are pharmaceuticals. Other contaminants, such as industrial chemicals and products, as well as naturally occurring hormones found in plants and excreted by different species, can also act as EDCs.
Because other chemicals have also been shown to have potential endocrine-disrupting effects, the extent to which pharmaceutical EDCs contribute to detected abnormalities is unclear. For example, bisphenol A (BPA), a nonpharmaceutical EDC, is used to make polycarbonate plastics that are used in products such as compact disks, baby bottles, plastic dinnerware, eyeglass lenses, and toys. In its paper reporting 2003-2004 National Health and Nutrition Examination Survey findings, the Centers for Disease Control and Prevention found BPA in more than 90 percent of the urine samples representative of the U.S. population 6 years of age and older. Another commonly occurring nonpharmaceutical EDC is atrazine, the most commonly used herbicide in the United States. In a 2003 study, scientists established a probable chain of causation between exposure to small concentrations of atrazine and the formation of female reproductive organs in frog testes. A second class of pharmaceuticals that has raised concern about the potential for health effects is antibiotics. In addition, some scientists are concerned about antimicrobial resistance resulting from interactions among chemicals, genes, microbes, animals, and humans in the environment. For example, some studies have demonstrated that bacteria exposed to pharmaceutical antibiotics and other antimicrobial agents in the environment have increased resistance to pharmaceutical antibiotics. However, the studies do not identify the extent to which pharmaceuticals or other antimicrobial agents contribute to these resistant bacteria. For example, triclosan and triclocarban, which are antimicrobials found in antiseptics, can contribute to antimicrobial resistance. We recently issued a report that, among other issues, discusses scientific evidence supporting the association between antibiotic occurrence in the environment and an increase in resistance among bacteria. In addition to EDCs and antibiotics, other classes of pharmaceuticals have been found in drinking water and garnered scientific attention. Examples include chemotherapy drugs and selective serotonin reuptake inhibitors, which are a class of pharmaceuticals used to treat depression. Some states and local governments, as well as DEA, have taken actions to reduce the extent to which pharmaceuticals occur in drinking water— primarily through take-back programs to properly dispose of pharmaceuticals. These efforts are often tied to efforts to reduce drug abuse or accidental poisoning by removing expired medicines from the home. Through outreach and education on proper drug disposal, EPA has also taken steps to reduce the introduction of hazardous pharmaceutical waste into water supplies. Other countries—including Sweden and Australia—have undertaken additional efforts to reduce the occurrence of pharmaceuticals in drinking water. Federal agencies do not have comprehensive data on the number of take-back programs across the United States, but EPA and the Product Stewardship Institute, Inc. collectively identified 25 states that have had one or more take-back programs. In addition, DEA has held two nationwide take-back programs—in September 2010 and April 2011— and a third is planned for October 29, 2011. Take-back programs are organized by a wide variety of stakeholders, including environmental groups, those with interests in preventing prescription drug abuse, and government entities (app. II provides federal guidelines on the proper disposal of pharmaceuticals). 
According to experts and program organizers we interviewed, the goals for implementing these programs include preventing drug abuse and accidental poisoning, as well as preventing unused pharmaceuticals from entering the environment. Pharmaceuticals collected through take-back programs are incinerated. Through a survey of the literature and interviews with experts, we determined that take-back programs generally fall into one of three broad categories: (1) ongoing, (2) one-time, and (3) mail-back. To illustrate the three categories, we selected five take-back programs to review more closely. Figure 4 describes these five programs. As the figure shows, the following two programs are ongoing: Utah's Proper Medication Disposal Program. Consumers can leave unused pharmaceuticals in drop boxes at participating law enforcement agencies. The program collected over 5,600 pounds of pharmaceuticals, including packaging, from June 2009 to June 2010. It received $70,000 in grants from EPA and the Utah Department of Environmental Quality. The program costs, not including in-kind donations, were $40,000 from May 2007 to June 2010. According to program representatives, the program will seek additional grants to continue its efforts once it has spent the money from its current grants. Washington State's PH:ARM Pilot (Pharmaceuticals from Households: A Return Mechanism). PH:ARM began as a pilot project in 2006 with over 37 participating pharmacies in six counties. Consumers drop off their unused pharmaceuticals in secure drop boxes at pharmacies. From October 2006 to October 2008, the program collected over 15,000 pounds of pharmaceuticals, including packaging, at a cost of approximately $170,000. According to program representatives, grant funding for the initial pilot project has ended, but the pharmacies have chosen to continue to collect unused pharmaceuticals on their own. Legislation proposed in the state legislature would have required pharmaceutical manufacturers to pay for take-back programs in the state; however, the legislation failed a state senate vote in 2011. We also identified one-time take-back events. These events are often organized by local communities and operate for a day, several days, or several weeks. For example: Bay Area Pollution Prevention Group. This group, a consortium of 43 wastewater agencies in the San Francisco Bay Area, piloted a week-long take-back program called "Safe Medicine Disposal Days" in May 2006. Consumers were invited to drop off pharmaceutical waste at 39 locations, including pharmacies, law enforcement offices, household hazardous waste facilities, and senior and civic centers. Over the course of the event, more than 1,500 residents disposed of over 3,600 pounds of pharmaceuticals. The event cost around $180,000, including administrative costs, and was funded by local agencies, cities, counties, and wastewater treatment plants. Amarillo and Canyon, Texas, "Medication Cleanout™" (MCO) program. Three 1-day events were conducted between September 2009 and July 2010. These events were organized and funded by the Texas Panhandle Poison Center of Texas Tech University Health Sciences Center School of Pharmacy, the Amarillo Independent School District's Safe Schools Healthy Students program, and the Amarillo Police Department. Medication Cleanout provided consumers with drive-through drop-off points in order to return their unused pharmaceuticals without leaving their cars.
The cost for the September 2009 event—the only date for which cost data are available—was approximately $44,000, and organizers reported that approximately 1,900 pounds of pharmaceuticals, including some packaging, were returned for all three events. Program organizers indicated that similar 1-day, drive-through events would be planned for the future. Mail-back programs allow consumers to use the Postal Service to dispose of unused pharmaceuticals. For example, in 2008, Maine implemented a 2-year mail-back pilot program—called "Safe Medicine Disposal for ME." The program distributed postage-paid return envelopes to pharmacies and health and social service agencies across the state to be given to consumers. The envelopes contained instructions for how to properly return the pharmaceuticals, including how to remove personally identifying information from prescription bottles before mailing the unused pharmaceuticals. The pharmaceuticals were sent to the Maine Drug Enforcement Agency for proper disposal. Between May 2008 and October 2009, the program collected more than 2,600 pounds of pharmaceuticals, including packaging. Organizers reported that some of the prescriptions returned were over 20 years old. The program was initially funded with a $150,000 EPA grant and has since received $150,000 from the Fund for Healthy Maine that will allow the program to operate into 2011. Program organizers stated that their main goals for implementing the program were to prevent poisonings and drug abuse, but that 77 percent of respondents to a survey included with the envelopes distributed by the program reported that they participated because they were concerned about the environment. According to DEA, its two nationwide take-back events—in September 2010 and April 2011—collected more than 300 tons of pharmaceuticals at thousands of sites across the country. Although the U.S. take-back programs differ in how they are implemented, organizers of the events have faced similar challenges. For example, according to experts and organizers of the take-back programs we spoke with, these programs have been hampered by legal restrictions and limited funding, although the legal restrictions are being addressed. These experts and organizers told us that collecting controlled substances was resource intensive because, until recently, according to DEA, the Controlled Substances Act made it unlawful for the recipient of a controlled substance to give that substance to anyone other than law enforcement, even for the purposes of disposal. Thus, consumers were prohibited from returning unused controlled substances to their pharmacy or doctor. Any take-back program that intended to collect controlled substances had to arrange for law enforcement to receive the unused controlled substances and maintain custody of them until they were destroyed. However, in October 2010 the Secure and Responsible Drug Disposal Act was enacted, amending the Controlled Substances Act. The act gives DEA the authority to issue regulations allowing communities and others to establish secure disposal programs for unused controlled substances. It also authorizes DEA to permit long-term care facilities to dispose of controlled substances on behalf of consumers who no longer need them. According to the Deputy Assistant Director of DEA's Office of Diversion Control, DEA strongly supported this legislation and anticipates issuing a notice of proposed rulemaking in the fall of 2011.
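The cost and collection figures cited for the programs above support only a rough cost-per-pound comparison, sketched below. The dollar and pound amounts are those reported in this section, but the comparison is illustrative rather than an evaluation: it ignores in-kind donations and packaging weight, and for two programs the cost and collection periods do not fully align, as noted in the comments.

```python
# Rough cost-per-pound figures from the amounts cited above. Illustrative only:
# in-kind donations and packaging weight are ignored, and two programs' cost and
# collection periods do not fully align (see the caveats below).
programs = [
    # (program, reported cost in dollars, reported pounds collected, caveat)
    ("Washington PH:ARM pilot, Oct 2006-Oct 2008", 170_000, 15_000, ""),
    ("Bay Area Safe Medicine Disposal Days, May 2006", 180_000, 3_600, ""),
    ("Utah Proper Medication Disposal", 40_000, 5_600,
     "costs cover May 2007-Jun 2010; collections cover Jun 2009-Jun 2010"),
    ("Amarillo/Canyon Medication Cleanout", 44_000, 1_900,
     "cost is for the Sep 2009 event only; pounds cover all three events"),
]

for name, cost, pounds, caveat in programs:
    line = f"{name}: roughly ${cost / pounds:.0f} per pound collected"
    if caveat:
        line += f" ({caveat})"
    print(line)
```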
According to experts and program organizers, take-back programs are also hampered by limited funding. Programs use a combination of in-kind contributions, volunteer time, grants, and local funding sources to pay for their programs. For example, between 2004 and 2008, EPA awarded 25 grants—totaling $926,972—to support take-back programs; these grants ranged from approximately $10,000 to $150,000. In addition, at least one state has previously proposed legislation that would require pharmaceutical manufacturers to fund take-back programs. As of March 2011, no such state legislation had been enacted. In 2004, the European Union (EU) issued a directive to its member states to, among other things, ensure that appropriate collection systems are in place for medicinal products that are unused or have expired, in light of the potential risks presented by these pharmaceuticals for the environment. Three years later, in 2007, the European Federation of Pharmaceutical Industry Associations surveyed 27 EU member states on their implementation of programs to collect unused pharmaceuticals. Of the 22 national pharmaceutical associations responding to the survey, 19 reported they had a pharmaceutical waste collection program, and most of these 19 associations reported that the programs operate nationwide. In 6 of the 19 programs, the pharmaceutical industry funds all costs associated with collecting and destroying unused pharmaceuticals. Sweden is an example of an EU country that has taken additional steps to reduce the occurrence of pharmaceuticals in drinking water. Sweden's efforts are supported by its government; pharmacies (most of which are publicly owned) are now obligated to take back all unused or expired pharmaceuticals and safely incinerate them. In 2009, 1,128 tons of pharmaceuticals, including packaging, were returned and destroyed. Sweden has also taken the following actions: Classifying pharmaceuticals according to how toxic they would be if they were released into the environment. According to a Swedish official, in 2004, officials from pharmaceutical producers and Sweden's health care system created an environmental classification system for pharmaceuticals to provide doctors and patients with information about the environmental effects of pharmaceuticals. Sweden developed this system by using risk and hazard data submitted by pharmaceutical manufacturers on their products. These data were then evaluated by an independent consulting firm, which provided an approval or disapproval for the proposed risk and hazard levels. The pharmaceuticals' risk and hazard determinations used the following criteria: biodegradability, potential to accumulate in the body, and toxicity to aquatic organisms. Individual jurisdictions throughout Sweden then used these results to compile lists of pharmaceuticals recommended for specific ailments, and doctors may consider these lists when prescribing pharmaceuticals. In addition, at least one pharmaceutical company has indicated that it is pursuing initiatives to produce less toxic and more environmentally friendly pharmaceuticals. Encouraging initial prescriptions in smaller amounts. According to data from Sweden, in 2005 and 2006, nearly 40 percent of the pharmaceuticals collected were unopened, and the remaining packages were still nearly two-thirds full, suggesting that patients may be buying more pharmaceuticals than they need.
As a result, the public providers of healthcare encourage doctors to prescribe smaller initial prescriptions so that patients and their physician can determine if the pharmaceutical will work for the patient. This practice may reduce the amount of pharmaceuticals that are disposed of when patients switch to different pharmaceuticals. According to one knowledgeable Swedish official, Sweden adopted these policies—even though there is no scientific evidence that the occurrence of pharmaceuticals in the environment is affecting human health—as a result of its adherence to the "precautionary principle." This principle states that action should be taken without waiting for the certainty of causation when an appropriate level of scientific evidence suggests an association between hazardous environmental exposures and ill health. According to the principle, action should be taken preventively because definitive knowledge about causation might take decades of further research. Outside of the EU, Australia has a national take-back program—"Return Unwanted Medicines" (RUM). RUM is a national, government-financed program that allows consumers to return unwanted or expired pharmaceuticals to participating pharmacies. Educational materials from the RUM program instruct consumers that they should not dispose of pharmaceuticals in the trash, in the toilet, or in the sink. According to RUM data from July 2009 through June 2010, the RUM project collected 1,075,957 pounds of pharmaceutical waste, including packaging, that might otherwise have been disposed of through wastewater or in the trash, risking contamination of the environment. A program representative stated that RUM has been an integral component of Australia's efforts to advise consumers on all aspects of pharmaceutical consumption and disposal. EPA faces challenges in obtaining sufficient occurrence and health effects data to support analyses and decisions about which pharmaceuticals to include on the Contaminant Candidate List as well as to make regulatory determination decisions. EPA is collaborating with other agencies on research to help obtain these data for use in developing future candidate lists, but these efforts are largely informal and EPA has not established a formal mechanism to sustain these collaborative efforts. We previously reported key practices for enhancing and sustaining collaboration among federal agencies that may be an option to help institutionalize an approach for conducting research that leverages resources among the agencies. We recommended that the Director of the Office of Management and Budget continue to encourage interagency collaboration by, among other things, promoting the collaboration practices identified in GAO's report; the Office of Management and Budget agreed with the recommendation. EPA faces significant data gaps concerning both the occurrence and health effects of pharmaceuticals. Sufficient occurrence and health effects data are critical for EPA to assess pharmaceuticals for possible regulatory determinations under the criteria established by SDWA. The difficulties EPA experienced in evaluating pharmaceuticals to include on its most recent Contaminant Candidate List, in 2009, illustrate the challenges EPA faces in obtaining these data.
To evaluate pharmaceuticals for inclusion on its 2009 Contaminant Candidate List, EPA identified two general types of occurrence data: first, data on the actual detection of pharmaceuticals in source and treated drinking water, and second, data on environmental releases and production volumes of pharmaceuticals developed by industry and government. Source and treated drinking water: EPA occurrence data on pharmaceuticals detected in untreated source water came from USGS’s national reconnaissance study on surface water and related efforts. These efforts provided data on 123 contaminants, including pharmaceuticals. The data contain measurements of contaminants in water but the data were from sample sites often chosen because they were predicted to be the most likely place that pharmaceuticals and other emerging contaminants would enter the environment (e.g., downstream from wastewater treatment plants). The sample sites are not statistically representative of average conditions across the nation. However, the sites were geographically distributed and included a mix of characteristics that were intended to provide a basic understanding of whether pharmaceuticals and other contaminants are in the nation’s waterways. According to EPA, the most relevant occurrence data are for treated drinking water, but these data are often not available. EPA told us it evaluated the available studies from the scientific literature that included occurrence data for pharmaceuticals from treated drinking water, but there were only a limited number of studies available and the majority of these studies only sampled a limited number of drinking water systems. Thus, to identify pharmaceuticals for inclusion on the most recent candidate list, EPA instead relied on data on untreated source water. Most Americans consume treated drinking water. Environmental release and production volumes: EPA also obtained occurrence data on pharmaceuticals from the Toxics Release Inventory and the High Production Volume Chemical List. The Toxics Release Inventory contains industry- and government-reported information on chemical releases into the environment—air, land, and water; the High Production Volume Chemical List contains production volume information for chemicals manufactured or imported into the United States in quantities greater than certain threshold amounts. However, EPA considered these data sources to provide less meaningful information on a chemical’s potential to occur in drinking water than sources that actually detect the presence of chemicals in the environment, such as the USGS data that it did use. For the 12 pharmaceuticals that it included on its 2009 Contaminant Candidate List, EPA reported it does not have comprehensive occurrence data for treated drinking water for any of them and does not have an analytic method suitable for conducting national drinking water studies for 7 of them. For the remaining 5 pharmaceuticals, EPA reports that it has or is developing a suitable analytic method. According to the Federal Register notice for the draft 2009 Contaminant Candidate List, the primary source of health effects information on pharmaceuticals in drinking water was the FDA database on maximum recommended daily doses. This FDA database includes the recommended doses for the “average adult patient” for over 1,200 pharmaceuticals and is based on human clinical trials of daily exposure, usually for 3 to 12 months. 
The maximum recommended daily dose is an estimated upper dose beyond which a pharmaceutical is not more effective and/or adverse effects begin to outweigh beneficial effects. However, according to EPA-sponsored research, extrapolating health effects data from data on the therapeutic doses of individual pharmaceuticals does not address, among other issues, the following two areas of concern about pharmaceuticals in drinking water: the health effects of (1) long-term, low-dose exposure to pharmaceuticals and (2) exposure to mixtures of pharmaceuticals. Effects of long-term, low-dose exposure to pharmaceuticals. According to the EPA-sponsored research, the health effects of long-term, low-dose exposure to a pharmaceutical may not be predictable by extrapolating from an observed effect of shorter-term exposure to a much higher concentration of that pharmaceutical. The research indicates that further complications arise when trying to predict the effects of exposure on sensitive sub-populations. For example, a child in the age group between birth and 1 month might be particularly sensitive to a contaminant during this life stage, during which the child experiences rapid growth, weight gain, and immature immune system function, among other characteristics, which can influence a child's susceptibility to a particular chemical. Effects of exposure to mixtures of pharmaceuticals. Also according to the EPA-sponsored research, the simultaneous exposure to multiple pharmaceuticals could result in an additive or interactive effect. In particular, studies on occurrence data have found more than one contaminant in a single water sample. For example, the USGS national reconnaissance study on surface water that EPA used to identify contaminants for the most recent candidate list found that there was a median of 7, and as many as 38, of the tested contaminants in a given sample. For the 12 pharmaceuticals that it included on the most recent candidate list, EPA reported that it has substantial data needs on health effects for 8 of them. For the remaining 4 pharmaceuticals, EPA reports that information exists or there is an ongoing assessment. Furthermore, as we recently reported, EPA has not identified the drinking water contaminants of greatest public health concern. In many cases, gathering sufficient data to make a regulatory determination has taken EPA more than 10 years, and obtaining data on other contaminants on the current list may well take decades. We made recommendations regarding the need for EPA to develop criteria to identify contaminants that pose the greatest health concern and a process to obtain data to support regulatory determinations; EPA did not agree to adopt these recommendations and generally took the position that no further steps are needed. EPA is collaborating with other federal agencies to collect occurrence and health effects data on pharmaceuticals and other contaminants that could support decisions about which contaminants to include on future candidate lists as well as regulatory determinations. As the following examples demonstrate, collaboration is helping EPA leverage the resources and expertise of other agencies to obtain results that may have been more difficult for it to achieve on its own. EPA and USGS are jointly developing occurrence data for over 230 contaminants, more than half of which are pharmaceuticals, in a study designed to provide EPA with data for future candidate lists.
The agencies’ joint study will sample treated drinking water and source water in about 25 drinking water treatment plants across the nation. These plants were selected because they draw water from streams, lakes, reservoirs, or ground-water aquifers affected by a variety of waste sources (e.g., municipal waste, septic systems, livestock production). EPA is providing expertise to analyze micro-organisms, and has experience with drinking water treatment facilities and their design. USGS is providing its expertise in the logistics of operating a nationwide water sampling project. Both agencies have expertise in detecting low concentrations of pharmaceuticals and other contaminants of emerging concern. The study is expected to conclude in September 2012. EPA is working with FDA to develop a methodology to more efficiently assess the health effects of pharmaceuticals in drinking water by addressing groups of related pharmaceuticals, such as selective serotonin reuptake inhibitors, instead of individual pharmaceuticals. FDA is providing health effects data, and EPA plans to use the methodology to support decisions about which pharmaceuticals to include on future candidate lists. This effort is part of a larger EPA initiative to better implement SDWA by focusing on assessing risk from exposure to groups of contaminants instead of individual contaminants. According to EPA officials, there is no formal mechanism, such as a long- term strategy or formal agreement, to manage and sustain these collaborative efforts. Agency officials and former members of the PiE workgroup told us that interagency efforts such as those described above are the result of informal collaborative relationships among agency personnel, particularly those fostered by the PiE workgroup. As one official from EPA’s Office of Water noted, the current interagency collaboration is “ad hoc.” In 2008 and 2010, we reported that by using informal coordination mechanisms, agencies may rely on relationships with individual officials to ensure effective collaboration, but these informal relationships could end if the responsible staff are not available to continue the efforts. We recommended that those agencies develop clear guidance for interagency planning efforts in the 2008 report, and that roles and responsibilities be identified to support collaboration in the 2010 report; the agencies generally agreed with these recommendations. The purpose of the PiE workgroup was to identify and prioritize research needed to better understand the risk from pharmaceuticals in the environment and to recommend areas for federal collaboration to address those priorities. Its draft report was neither approved by NSTC nor publicly released. According to OSTP officials, the draft report was not approved or released because the workgroup did not address OSTP’s concerns, including that the report did not specifically outline how agencies would coordinate research and other long-term activities identified in the draft report once the workgroup expired. For example, OSTP officials stated that the draft report did not clarify collaborating agencies’ roles and responsibilities by identifying which agencies are best positioned to address specific issues identified in the report and which existing or new programs would be most appropriate for addressing these issues. OSTP officials told us that providing this additional information was consistent with the purpose of the workgroup. 
The workgroup co-chairs told us that OSTP did not present the workgroup with its written concerns until June 2010, about a year after the draft report was approved by the Subcommittee on Toxics and Risk, and after the workgroup had expired. According to the co-chairs, addressing OSTP's concerns would have required the workgroup to update the scientific data included in the draft report and to provide additional information regarding agencies' roles and responsibilities, which was beyond the purpose of the workgroup. Thus, the draft report was never finalized, although, according to the co-chairs, the interagency activities begun by the workgroup continued. In an October 2005 report, we identified key practices for enhancing and sustaining collaboration among federal agencies. Three of these practices may help clarify how EPA and other agencies could coordinate their research efforts. Establish roles and responsibilities: We reported that collaborating agencies should work together to define and agree on their respective roles and responsibilities, including how the collaborative effort will be led. In doing so, agencies can clarify who will do what, organize their joint and individual efforts, and facilitate decision making. Leverage resources: We reported that collaborating agencies should identify the human, information technology, physical, and financial resources needed to initiate or sustain their collaborative effort. Collaborating agencies bring different levels of resources and capacities to the effort. By assessing their relative strengths and limitations, collaborating agencies can look for opportunities to address resource needs by leveraging each other's resources, thus obtaining additional benefits that would not be available if they were working separately. Establish mechanisms for monitoring, evaluating, and periodically reporting results of the collaborative research efforts: We reported that federal agencies engaged in collaborative efforts need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. Reporting on these activities can help key decision makers within the agencies to obtain feedback for improving both policy and operational effectiveness. There are basic questions about the potential health risks from exposure to pharmaceuticals in the nation's drinking water. Other contaminants also have been detected in drinking water, including personal care products and chemicals used in industry and agriculture, some of which may act as EDCs. Some of these other contaminants may work in tandem with pharmaceuticals to affect human health through additive or interactive effects. Also of concern to government scientists are the health effects of long-term, low-dose exposure to pharmaceuticals and exposure to mixtures of pharmaceuticals. Since the 1996 amendments to SDWA, EPA has been required to publish a list of currently unregulated contaminants, including pharmaceuticals, that may require regulation in drinking water, and to make determinations on whether or not to regulate at least 5 of the contaminants on the list every 5 years. In 2009, EPA issued its third Contaminant Candidate List, which consists of 116 contaminants, 12 of which are pharmaceuticals.
However, EPA continues to experience difficulty obtaining sufficient occurrence and health effects data for making determinations on (1) which contaminants present the greatest public health concern to include on the list and (2) whether or not to regulate any of the contaminants on the candidate list. It will continue to be difficult for EPA to prioritize contaminants on the candidate list without the necessary information on health effects and occurrence to determine the contaminants that present the greatest public health concern. In many cases, gathering sufficient data to address contaminants awaiting determinations has taken EPA more than 10 years, and obtaining data on other contaminants on the current list may well take decades. To collect occurrence and health effects data on pharmaceuticals and other contaminants that could support decisions about which contaminants to include on future candidate lists, EPA is collaborating informally with USGS and FDA, but does not have a formal mechanism for sustaining such collaboration in the future. Furthermore, the PiE workgroup, which pulled together the scientific expertise from eight federal agencies, has expired and its draft report was neither finalized nor released. However, neither EPA’s informal collaboration efforts nor the strategy proposed by the PiE workgroup details how agencies could coordinate their future interagency collaboration efforts. We have previously reported on key practices for enhancing and sustaining interagency collaboration efforts, such as (1) establishing roles and responsibilities, including how the collaborative effort will be led; (2) identifying the expertise and other resources that each agency can bring to bear on the issue, and (3) establishing a process for monitoring, evaluating, and reporting to the public the results of the collaborative research efforts. To collect the pharmaceutical occurrence and health effects data necessary to better implement SDWA, and to address the broader issue of pharmaceuticals and their relationship to other contaminants in the nation’s waterways, we are making the following recommendation to the Administrator of EPA: Establish a workgroup or other formal mechanism that includes the relevant federal agencies to collaborate and coordinate research on pharmaceuticals and, as appropriate, other contaminants in drinking water that present the greatest public health concern. In establishing this mechanism, EPA should: (1) define roles and responsibilities, including how the collaborative effort will be led; (2) identify the expertise and other resources that each agency can bring to bear on the issue; and (3) develop a process for monitoring, evaluating, and reporting to the public the results of the collaborative research efforts. We provided a draft of this report to EPA, the Department of the Interior (DOI), the Department of Health and Human Services (HHS), OSTP, and the Department of Justice (DOJ) for review and comment. In written comments, EPA agreed with our findings and recommendation and noted that the extent of interagency collaboration may be dependent upon available resources. EPA also provided clarifying language regarding the responsibilities, accomplishments, and activities of the PiE workgroup which, according to EPA, reflects clarification provided by the PiE workgroup co-chairs. We modified our draft accordingly. EPA’s comments are reprinted in appendix III. EPA also provided technical clarifications and comments, which we incorporated as appropriate. 
DOI also provided written comments on a draft of this report and stated that it generally agrees with the findings and recommendation in the report. Additionally, DOI provided clarifying language regarding the PiE workgroup. DOI’s comments are reprinted in appendix IV. Additionally, USGS, an agency within DOI, provided technical clarifications and comments, which we incorporated as appropriate. DOJ, HHS, and OSTP did not provide written comments but provided technical clarifications and comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution for 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of EPA, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov . If you or your staff members have any questions on this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The objectives of this study were to (1) provide information on the extent to which pharmaceuticals occur in drinking water and the effects, if any, that their occurrence has on human health; (2) describe the approaches taken in the United States and in other countries to reduce the extent to which pharmaceuticals occur in drinking water; and (3) identify challenges, if any, that the Environmental Protection Agency (EPA) faces in determining whether any pharmaceuticals should be regulated under the Safe Drinking Water Act (SDWA), actions EPA is taking to address these challenges, and options for addressing such challenges in the future. To identify the extent to which pharmaceuticals occur in drinking water, we reviewed federal and peer-reviewed reports, including (1) studies by the U.S. Geological Survey (USGS), (2) articles in peer-reviewed journals by federal scientists and others, and (3) the Pharmaceuticals in the Environment (PiE) workgroup’s draft report. We also selected a nonprobability sample of scientific studies to review in our report. The data from these studies are not generalizable beyond the scope of these studies. We selected these studies on the basis of certain criteria, including the source of the study (e.g., a peer-reviewed journal); the geographic scope of the study; and whether the study focused on source water, treated drinking water, or wastewater. We also discussed the subject with scientists at USGS and other federal agencies as well as with representatives from academia, trade associations, the environmental community, and the pharmaceutical industry. To identify the effects, if any, that the occurrence of pharmaceuticals in drinking water has on human health, we also reviewed federal and peer- reviewed reports, including articles in peer-reviewed journals by federal scientists and others; and the PiE workgroup’s draft report. We discussed the subject with federal scientists and representatives from academia, the environmental community, and the pharmaceutical industry. We also attended an October 2009 academic conference on hormones and related compounds in the environment that was hosted by Tulane University. 
To describe the approaches taken in the United States to reduce the extent to which pharmaceuticals occur in drinking water, we reviewed literature and spoke with officials from federal agencies, including the Drug Enforcement Administration (DEA), EPA, and the Food and Drug Administration (FDA), as well as experts from academia, industry, and nonprofit organizations that have ongoing work addressing pharmaceuticals in the environment; from these efforts, we identified consumer take-back programs as the primary approach to reducing occurrence. We also determined that take-back programs could be grouped into three broad categories based on common characteristics—mail-back, one-time, and ongoing. We selected a nonprobability sample of five programs to represent the three categories. The information from these programs is not generalizable to all take-back programs. We selected the programs because they provided geographic diversity and exemplified certain characteristics. For example, we selected one program, in part, because it was pharmacy-based. We did not attempt to evaluate the programs. We collected information on each program through a survey, follow-up interviews, and, where appropriate, additional documentation. To describe approaches taken by other countries to reduce the extent to which pharmaceuticals occur in drinking water, we chose to describe efforts in Sweden and Australia. We selected Sweden because it is undertaking a variety of stewardship activities. We selected Australia because it has a nationwide take-back program. We obtained information on each country's efforts through interviews with knowledgeable officials and, where appropriate, additional documentation. To identify challenges, if any, that EPA faces in determining whether any pharmaceuticals should be regulated under SDWA, actions EPA is taking to address these challenges, and options for addressing such challenges in the future, we reviewed agency documents and interviewed relevant agency officials. Specifically, to identify challenges, we reviewed EPA's documentation on the process it used to develop the 2009 Contaminant Candidate List under the authority of SDWA. We also reviewed some of the sources of data that EPA relied upon to identify pharmaceuticals for inclusion on the candidate list. To identify actions that EPA is undertaking to address the challenges we identified, we interviewed agency officials from EPA, FDA, and USGS and, where appropriate, obtained and reviewed additional documentation. To identify options to address these challenges in the future, we obtained and reviewed a 2009 draft report produced by the PiE workgroup. We also interviewed several of the workgroup members, including the three co-chairs. We also reviewed our own work on practices that can help enhance and sustain collaboration among federal agencies. We conducted this performance audit from January 2010 through August 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. According to FDA and the White House Office of National Drug Control Policy, consumers are encouraged to properly dispose of unused pharmaceuticals to avoid harm to others.
In general, consumers should not flush prescription pharmaceuticals down the toilet or sink drain unless the label or accompanying patient information specifically instructs consumers to do so. However, in some instances, it may be necessary to dispose of unused pharmaceuticals by flushing. For a list of pharmaceuticals that are recommended to be flushed, consumers should visit FDA's Web site. Several disposal options are available to consumers for prescription pharmaceuticals that are not specifically labeled to be flushed. For example, other than the pharmaceutical take-back programs presented in this report, programs such as household hazardous waste collection events, which collect pharmaceuticals at a central location, can provide consumers with proper disposal of unused pharmaceuticals. Organizations such as the Product Stewardship Institute have information on such events across the nation. In addition, FDA and the White House Office of National Drug Control Policy recommend that consumers consider the following steps to dispose of unused pharmaceuticals: 1. Take prescription pharmaceuticals out of their original containers. 2. Mix pharmaceuticals (do NOT crush tablets or capsules) with an undesirable substance, such as cat litter or used coffee grounds. 3. Place the mixture into a disposable container with a lid, such as an empty margarine tub, or into a sealable bag. 4. Conceal or remove any personal information, including Rx number, on the empty containers by covering it with black permanent marker or duct tape, or by scratching it off. 5. Place the sealed container with the mixture, and the empty pharmaceutical containers, in the trash. In addition to the contact above, Diane B. Raynes, Assistant Director; Elizabeth R. Beardsley; Mark A. Braza; Tanya L. Doriss; Charles T. Egan; Brynne Keith-Jennings; Amanda M. Leissoo; Carol Herrnstadt Shulman; John B. Stephenson; and Kiki Theodoropoulos made key contributions to this report. Also contributing to this report were Sandra J.G. Kerr, Katherine M. Raheb, and Nicholas L. Weeks.
ATSA was enacted on November 19, 2001, in response to the September 11, 2001, terrorist attacks. ATSA established the TSA and charged it with responsibility for strengthening security in all modes of transportation, including aviation. One of the most significant changes mandated by ATSA was the shift from the use of private-sector screeners to perform airport screening operations to the use of federal screeners. Prior to ATSA, passenger and checked baggage screening had been performed by private screening companies under contract to airlines. ATSA required TSA to create a federal workforce to assume the job of conducting passenger and checked baggage screening at commercial airports. The federal workforce was to be in place by November 2002. At the same time, ATSA mandated that TSA establish a 2-year pilot program using qualified private screening companies to screen passengers and checked baggage, with TSA oversight. Pursuant to section 108 of ATSA, TSA selected five airports, one from each airport security category, to participate in the pilot program. TSA also competitively selected four contractors (one contractor serves two airports) to conduct screening at the pilot airports. Table 1 lists the airports and private screening contractors that participated in the pilot program. Section 108 further permitted the more than 400 commercial airports using federal passenger and checked baggage screeners to apply to TSA to use private rather than federal screeners at the conclusion of the pilot. Beginning on November 19, 2004, all commercial airports with federal security screening became eligible to apply to opt-out of using federal screeners through the newly established SPP. An airport operator may submit to TSA an application to have the screening of passengers and checked baggage at an airport be carried out by the screening personnel of a qualified private screening company, under a contract entered into between the private screening contractor and TSA. In addition to assessing airport applications for using private screeners, as part of the SPP, TSA plans to select qualified private screening companies that apply and meet ATSA and TSA requirements to conduct screening, including airports that seek to apply to serve as the private screening contractor. The five airports selected to participate in the pilot program have applied and been accepted to the SPP. TSA awarded, on a non-competitive basis, new extension contracts (that replaced the original pilot program contracts) to the incumbent private screening contractors at the five pilot program airports effective November 19, 2004. The contracts enable the four private screening contractors to continue performing screening operations through May 18, 2006. As in the original pilot program contracts, the new contracts require private screening contractors to adhere to several ATSA provisions, including that: the level of screening services and protection provided at the airport under the contract will be equal to or greater than the level that would be provided at the airport by federal government personnel; the private screening company be owned and controlled by a citizen of the United States; the private screening company, at a minimum, meet employment standards, compensation and benefits rates, and performance requirements that apply to federal screeners; and all private screener candidates meet the same minimum qualifications as federal screeners, including U.S. 
citizenship (or being a national of the United States), high school diploma or equivalent, English proficiency, and pass a criminal background check. TSA will make the final decision to approve any application submitted for participation in the SPP and reserves the right to consider airport specific threat intelligence and an airport’s record of compliance with security regulations and security requirements to determine the timing of any transition to private screening. TSA may also impose a delay on when an airport can transition to private screening based on such factors as peak travel season and the total cost of providing screening services at an airport. During the period November 2004 through January 2006, 7 out of the more than 400 commercial airports had applied to participate in the SPP. In addition to the five airports that participated in the pilot program, as of January 30, 2006, two additional airports that did not participate in the pilot program had applied to use private screeners—Elko Regional Airport in Nevada and Sioux Falls Regional Airport in South Dakota. However, after discussions with TSA officials, Elko Regional Airport submitted a letter to TSA on September 30, 2005, seeking to withdraw its application on the grounds that the City of Elko could not qualify as a private screening company, thereby mooting its intention that the airport would serve as the contractor. On October 17, 2005, TSA replied back to Elko, acknowledging Elko’s withdrawal of its application to participate in the SPP. Table 2 provides information on the two airports that applied to the SPP, as of January 2006. We did not attempt to identify the reasons that only 7 of more than 400 commercial airports that were eligible to participate in the SPP had submitted an application. However, in our November 2004 report on the SPP, we reported that of the 26 airport operators we interviewed, 20 said their airport would not apply to participate in the SPP in the first year of the program, 5 were uncertain whether to apply for the 2004 cycle, and 1 said his airport planned to apply, but only for its international passenger terminal. Among the 20, 16 said they were satisfied with federal screeners or did not see any benefit to applying to participate in the SPP and 13 cited concerns about airport liability in the event of a terrorist attack. In May 2005, TSA approved the SPP application for the Sioux Falls airport, and in December 2005, TSA awarded a contract for passenger and checked baggage screening services at Sioux Falls to a private screening contractor. In February 2006, this award enabled Sioux Falls airport to transition from TSA federal screeners to private screeners employed by the contractor. According to the contractor, it will use 30 FTEs, 7 less than TSA’s screener allocation for Sioux Falls airport, without compromising security or customer service. The contractor expects to achieve operational efficiencies and cost savings for its screening operations at this airport due to the reduction in FTEs. In addition, during February 2006, TSA awarded a contract to a private screening contractor at one of the five pilot program airports. TSA is in the process of awarding contracts to the remaining four airports that applied to use private screeners. As of February 28, 2006, TSA received proposals from private screening companies for the Greater Rochester International, Tupelo, Kansas City, and San Francisco International airports. 
TSA also released the request for proposals for San Francisco International airport. TSA also approved 34 private screening companies for listing on a qualified vendors list, which identifies that these companies are eligible to perform passenger and checked baggage screening services in the SPP. DHS’s fiscal year 2006 appropriations provides nearly $2.54 billion to fund the screener workforce—about $2.4 billion for federal passenger and checked baggage screener full-time equivalents and an additional $139.6 million to pay for screening contractors at the five pilot program airports. In accordance with its appropriations, TSA plans to fund the SPP from the same budget line item as federal screening operations to provide flexibility on the number of airports that can participate in the program. In this manner, the costs for contracts with private screening contractors are to be funded by the cost of the federal operations that are being displaced. The SAFETY Act, enacted as part of the Homeland Security Act of 2002, offers liability and other protections to sellers of qualified anti-terrorism technologies. According to DHS, services, such as screening services, are eligible to receive liability protection under the SAFETY Act if designated as qualified anti-terrorism technologies, thus limiting liability risks for the private screening contractor and its subcontractors, suppliers, vendors, and customers. SAFETY Act protection pertains to “claims arising out of, relating to, or resulting from an act of terrorism” where qualified anti- terrorism technologies have been deployed. According to DHS, the SAFETY Act reflects the intent of Congress to ensure that the threat of liability does not deter the potential manufacturers or sellers of anti- terrorism technologies from developing and commercializing technologies that could significantly reduce the risks or mitigate the effects of large- scale acts of terrorism. The SAFETY Act does not offer indemnification (compensation for losses incurred) to sellers of qualified anti-terrorism technology but rather limits, and in some instances may completely bar, claims brought against sellers of anti-terrorism technologies that have been deployed in defense against or response or recovery from a terrorist incident. If a seller of a potential anti-terrorism technology wishes to be awarded SAFETY Act protections, the seller must formally apply to the department using the forms provided by DHS, furnish the entire requisite supporting data and information, and successfully demonstrate compliance with the act’s requirements. TSA awarded one of two types of contracts for extending contractor performance at the five pilot program airports. Both these types of contracts were awarded on a non-competitive basis to the private screening contractors. TSA awarded the first type of contract—cost-plus- award-fee contracts (a type of cost-reimbursement contract)—to the four private screening contractors providing screening services at four of the five pilot program airports. These contracts, which are generally used when the costs are not known, provide for payment of allowable incurred costs, to the extent prescribed in the contract. 
A cost-plus-award-fee contract provides for a fee consisting of (1) a base amount that is a percentage of the estimated cost fixed at inception of the contract and (2) an award amount that the contractor may earn in whole or in part during the contract period and that is sufficient to provide motivation for excellence in such areas as quality, timeliness, technical ingenuity, and cost-effective management. The actual award amount is based upon TSA's evaluation of the contractor's performance against criteria spelled out in the contract. This determination and the methodology for determining the award fee are unilateral decisions made solely at the discretion of the government. TSA awarded the second type of contract—a fixed-price-award-fee contract—to one private screening contractor. This type of contract is generally used when the requirements are reasonably known and a reasonable basis for firm pricing by the contractor exists. A fixed-price-award-fee contract establishes a fixed price (including normal profit) for the effort, which will be paid for satisfactory contract performance, and an award fee. The award fee earned (if any) will be paid in addition to that fixed price, based on periodic evaluations of the contractor's performance against an award-fee plan. TSA awarded a fixed-price-award-fee contract to a private screening contractor at Tupelo airport, a security category IV airport, the smallest airport. According to the Federal Acquisition Regulation (FAR), which generally governs federal government procurement activities, the negotiation of contract type and price (or estimated cost and fee) should result in reasonable contractor risk and provide the contractor with the greatest incentive for efficient and economical performance. A firm-fixed-price contract, which best utilizes the basic profit motive of a business enterprise, shall be used when the contractor risk involved is minimal or can be predicted with an acceptable degree of certainty. The FAR provides that when a reasonable basis for firm pricing does not exist, other contract types (such as cost reimbursement) should be considered, and negotiations should be directed toward selecting a contract type that will appropriately tie profit to contractor performance. As a service continues to be contracted over time, however, and after experience provides a basis for firmer pricing, the FAR advises that cost risk should shift to the contractor and a fixed-price contract should be considered. The FAR specifically states that contracting officers should avoid protracted use of a cost-reimbursement contract after experience provides a basis for firmer pricing. Additionally, under the FAA acquisition policy followed by TSA, "[t]he use of fixed-price contracts is strongly encouraged whenever appropriate." DHS awarded some of the liability protections available under the SAFETY Act to three of the four private screening contractors that applied for such protection and stated that it will decide the status of future applications on a case-by-case basis in accordance with criteria described in the act. However, DHS cannot award the most extensive level of protection under the SAFETY Act, certification status, until it can determine whether contractors will perform as intended—a criterion that must be satisfied before awarding such coverage. DHS officials stated that DHS has not been able to award SAFETY Act certification status to contractors because TSA has not yet finalized performance standards for assessing whether contractors have performed as intended.
While all four current screening contractors we interviewed stated that SAFETY Act protection was important, they did not state that they would be unwilling to participate in the SPP without certification under the SAFETY Act. For example, one contractor said it had too much time and money invested in providing private screening services to not participate in the SPP. Congress has since granted legal protection from lawsuits to all airports where TSA conducts or oversees passenger and checked baggage screening. Specifically, the fiscal year 2006 DHS appropriations act shields airports from, among other things, virtually all liability related to negligence or wrongdoing by private screening contractors, their employees, or federal screeners. In addition, TSA made an effort to improve the screener hiring process by granting contractors and FSDs more input and flexibility in the hiring process, though some contractors, as well as FSDs at airports with federal screeners, remain concerned about the timing of the assessments and the length of time the assessment process takes. TSA has also taken steps to clarify SPP roles and responsibilities between federal and private sectors, but the four private screening contractors we interviewed still had questions about the roles and responsibilities of TSA staff at the airports they served. Officials at DHS's Science and Technology Division stated that all anti-terrorism technologies submitted to the department for protection under the SAFETY Act are evaluated—including screener services—on a case-by-case basis, in accordance with the criteria defined by the act's two-tiered protection status, as follows: Designation status. Designation status protects a seller of anti-terrorism technology in the event the technology fails to thwart an act of terrorism by limiting the type and amount of damages a plaintiff may recover such that a seller's potential liability cannot exceed the amount of insurance coverage maintained by the seller. To receive designation status, anti-terrorism technologies, including screening services, must be evaluated by DHS against the seven criteria set out in the SAFETY Act: (1) prior U.S. government use or demonstrated substantial utility and effectiveness; (2) availability of the technology for immediate deployment in public and private settings; (3) existence of extraordinarily large or unquantifiable risk of exposing the seller or other provider of such anti-terrorism technology to potential liability; (4) substantial likelihood that the technology will not be deployed unless the risk management protections of the SAFETY Act (limited liability) are conferred; (5) the magnitude of risk to the public if the technology is not deployed; (6) evaluation of all scientific studies that can be feasibly conducted to assess the capability of the technology to substantially reduce risks of harm; and (7) anti-terrorism technology that would be effective in facilitating the defense against acts of terrorism, including technologies that prevent, defeat, or respond to such acts. Certification status. Once designated, qualified anti-terrorism technologies become eligible for certification under the SAFETY Act, which gives the seller the legal status of a government contractor and renders the seller virtually immune from any claims that might arise in the event the technology fails to thwart an act of terrorism, provided the seller does not act fraudulently or with willful misconduct in submitting information to DHS.
To certify, DHS must determine if the qualified anti- terrorism technology will (1) perform as intended, (2) conform to the seller’s specifications, and (3) be safe for use as intended. Certification status cannot be awarded unless these three criteria have been met. DHS places certified anti-terrorism technologies and services on an Approved Product List for Homeland Security. DHS determined that the four private screening contractors serving the five pilot program airports were eligible for SAFETY Act protection. Once this determination was made, after November 2004, three of the four current private screening contractors applied for and were provided designation status under the SAFETY Act. Contractors that apply to the SPP in the future are to be evaluated individually, as their applications to the program are processed. TSA awarded a contract for private screening services at Sioux Falls and Jackson Hole airports. The status of SAFETY Act coverage for these airports, if the contractors apply for coverage, will be determined at a later point in time. As of January 2006, one issue pertaining to how the SAFETY Act would be applied to contractors remained unresolved and is a cause for concern for one of the four contractors we interviewed. Specifically, DHS officials stated that they have not been able to award SAFETY Act certification status to contractors because TSA has not yet finalized performance standards for assessing whether contractors have performed as intended. DHS SAFETY Act officials stated that once TSA finalizes its performance standards, the contractors that previously received designation status may submit an application for SAFETY Act certification. The application is to include evidence demonstrating that they are meeting the TSA-defined performance standards. DHS will evaluate the material submitted by the applicant against the TSA standards. According to DHS officials, assuming the applicant is able to demonstrate that it is performing as TSA intends, there should be no impediment to granting certification. When the three contractors that already have SAFETY Act designation status were asked to comment on whether they would continue to participate in the SPP without certification status, two contractors told us they would. One of these two contractors said it had too much time and money invested in providing private screening services to not participate in the SPP. The third contractor said that its company’s $50 million in general liability insurance coverage excludes acts of terrorism, thus the company believed it “would remain exposed to serious liability concerns related to terrorist threats risks.” This contractor did not, however, explicitly state that it would not participate in the SPP going forward, if certification status were not awarded. In general, contractors may offset potential liability arising from acts of terrorism by purchasing commercially available liability insurance. Two of the four private screening contractors currently under contract to TSA purchased insurance policies that protect them from acts of terrorism. Both contractors stated that their policies were inadequate to cover the liability resulting from a major terrorism attack and that SAFETY Act protection was, therefore, additionally necessary to provide protection to the contractor. 
As to the importance of SAFETY Act protection to potential future participants in the SPP, in November 2004, we reported that five of six prospective SPP private screening contractors we interviewed—those not currently serving airports—stated that the issue of whether they would receive liability protection was important and would greatly affect whether they would participate in the SPP if selected by TSA as a qualified contractor. In addition, officials with two aviation associations representing hundreds of airports, whom we interviewed, stated that their members believed that SAFETY Act protection—both designation and certification—was necessary for contractors to participate in the SPP. The status of SAFETY Act coverage for airports, and liability coverage in general for airports using private screeners, differs from coverage for contractors. While DHS has determined that contractors performing screening services are eligible to receive liability protection under the SAFETY Act, the department has not determined whether airports that do not perform screening services are eligible for liability coverage under the act. In October 2005, however, Congress enacted legislation that granted airports legal protection from lawsuits. Specifically, section 547 of the Department of Homeland Security Appropriations Act, 2006, shields airport operators from virtually all liability relating to the airport operator’s decision on whether or not to apply to opt-out of using federal screening, and any acts of negligence, gross negligence, or intentional wrongdoing by either a qualified private screening company under contract to DHS, its employees, or by a federal screener. Prior to the enactment of this act, three of the seven airport operators we interviewed expressed concerns about whether the government would extend liability protection to them. They were concerned that if a security incident arose that resulted in litigation, they may become a party to a lawsuit. Officials with two aviation associations, whom we also interviewed at that time, also expressed concerns about airport liability. After the enactment of the 2006 appropriations act, one of the three airport operators that had expressed concerns about liability told us that the protection available under section 547 of the act had addressed its concerns about its airport’s liability. A second airport operator that we contacted after enactment of section 547 had not yet reviewed the provision and stated that it could not confirm whether its airport would be protected from liability. TSA made an effort to improve the screener hiring process by granting contractors and FSDs more input and flexibility in the hiring process, including more frequent assessments of screener candidates and two options for performing these assessments. Prior to November 2004, TSA had scheduled candidate assessment forums on a regional basis 1 to 2 times a year to evaluate a pool of candidates interested in screener positions. At that time, private screening contractors, like FSDs at airports with federal screeners, had to rely on TSA to authorize the hiring of screeners and establish candidate assessment forums—a process that could take several months. Beginning in November 2004, as part of the contract extensions, TSA began requiring the pilot program private screening contractors to submit annual hiring plans to TSA for review, indicating their anticipated screener staffing needs. 
The intention was to use this information to plan for more frequent and timely screener assessments conducted regionally and locally—up to 6 times a year. Despite TSA’s planned increase in the frequency of assessments, private screening contractors, as well as FSDs at airports with federal screeners, remain concerned about their inability to conduct hiring on an as needed basis because TSA still controls the scheduling of assessment forums. One contractor, for example, stated that despite the scheduling of more frequent assessment forums, it still could not fully implement its hiring plan because the assessments did not necessarily coincide with its hiring periods. In response to contractor concerns about the candidate assessment process, in November 2004, TSA began allowing private screening contractors two options for evaluating screener candidates: Option 1: Contractors may draw screener candidates from a pool developed by a private company under contract with TSA, which is responsible for assessing potential screener candidates. This company administers a computer-based aptitude test, mental and physical tests, and conducts background checks at regional assessment centers. The contractor is under no obligation to accept these applicants. Option 2: Contractors may use TSA’s assessment company for the aptitude test alone, and develop and implement additional assessment activities on their own, provided they meet ATSA requirements and TSA guidance. According to TSA officials, all four private screening contractors have selected from these two options for hiring screeners. The contractors’ views about the hiring process were mixed. For example, one contractor we interviewed said that because TSA has allowed it to conduct its own assessments, the length of the entire assessment process has been reduced from several months to 2 weeks. This contractor, which was using option two, is now conducting key parts of the assessment process. According to this contractor, its use of option two has resulted in a more efficient, effective, and significantly less costly process. A second contractor using option two stated that it had established a new hire recruitment, assessment, and training program. According to this contractor, its use of option two has resulted in its ability to identify more qualified screener candidates, improve screener retention, and fill screener vacancies on an as needed basis. However, the other two contractors remained concerned about the length of time the assessment process lasts. One of these contractors stated that the duration of the process was still so long that potential screeners found other jobs first and dropped out of consideration. This contractor, which was using option two, proposed using FSD staff to conduct the assessments to streamline and shorten the assessment process. According to TSA officials, TSA did not accept this suggestion because TSA’s Office of Human Resources determined it would have been too costly to allow FSD staff to conduct the assessments. TSA officials stated that they offered the contractor the same assessment options that are available to all airports with federal screeners. Officials further stated that they will continue to examine all aspects of the assessment process in an effort to offer greater efficiency and flexibility in screener hiring for both federal and contract screeners. 
While TSA has defined the roles and responsibilities for FSDs, FSD staff, and private screening contractors, among others, in its August 2005 SPP transition plan, the details contained in this plan have not been communicated to or shared with private screening contractors. TSA and SPP procurement officials stated that they consider the transition plan to be an internal document that TSA does not intend to distribute outside of the agency. However, officials stated that the information on roles and responsibilities under the SPP would be available to prospective private screening contractors as part of the SPP contracting process. Additionally, TSA SPP officials stated that they presumed that FSDs had communicated this information to the current private screening contractors. Further, TSA officials stated that TSA's June 2004 guidance on the SPP provides information on roles and responsibilities of SPP stakeholders. However, our review of the guidance found that it did not clearly delineate the roles and responsibilities of TSA airport staff and the private screening contractors. For example, the guidance did not include any information on the roles and responsibilities of some TSA airport staff, such as screening managers and training coordinators, and did not clarify how their roles and responsibilities would differ from those of the private screening contractors. Additionally, the four private screening contractors we interviewed had questions about the roles and responsibilities of TSA staff at the airports they served, including screening managers, and stated that, in their view, TSA had not clearly defined the roles and responsibilities of TSA staff at airports participating in the SPP. When asked whether FSD and FSD staff roles and responsibilities were clear, one contractor stated that he did not believe that TSA had recognized that the roles of training managers and screening managers at airports using federal or private screeners are different. A second contractor stated that, in his view, TSA had not standardized the roles and responsibilities of TSA airport staff across the five airports currently using private screeners. Similarly, a third contractor stated that TSA roles and responsibilities need to be better defined, particularly the role of FSD airport staff and TSA local contract staff. Finally, the fourth contractor stated that the separation of roles and responsibilities has been a major challenge on a daily basis in part because TSA staff at screening checkpoints assert control and impose operational changes at the checkpoints—tasks that the contractor believes it is responsible for, rather than TSA staff. This contractor identified the need for TSA to clearly define the roles of the various stakeholders involved in the SPP and to establish guidelines on TSA's oversight and regulatory responsibilities at airports participating in the SPP. In September 2005, in reporting on the ability of FSDs to address airport security needs, we reported that TSA airport stakeholders (including airport operators) at some of the airports we visited said that the FSD's role was not sufficiently clear, and at least one stakeholder at every airport we visited said such information had never been communicated to them. We recommended that DHS direct TSA to communicate the authority of the FSD position, as warranted, to FSDs and all airport stakeholders. In response, TSA agreed to update the role of the FSD and communicate this information to airport stakeholders.
As of December 2005, however, TSA had not yet implemented this recommendation, but stated that the matter is being considered by a TSA steering committee. In addition, a consulting firm that evaluated the private screening pilot program in April 2004 recommended that TSA clearly delineate the roles and responsibilities among federal and private screening managers and their staff and include this information in its contracts with the private screening contractors. Based on our review of the June 2004 guidance on the SPP and the contracts awarded to the current private screening contractors in November 2004, TSA had not included this information. TSA officials stated that they plan to clearly delineate roles and responsibilities of the FSD, FSD staff, and private screening contractors in the forthcoming SPP contracts. According to our standards for internal controls, agency management should ensure there are adequate means of communicating with external stakeholders on issues that may have a significant impact on the agency’s ability to achieve its goals. By not sharing detailed information on roles, responsibilities, and authorities described in the SPP transition plan with all FSDs and private screening contractors, TSA may be missing an opportunity to support the effective performance and management of essential functions related to the screening process. Additionally, without clear and specific information on roles and responsibilities under the SPP, it may be difficult for prospective SPP contractors to develop an informed estimate of the costs of providing screener services. Through its contracts, TSA offers the private screening contractors some incentives to decrease costs. Specifically, TSA’s cost reimbursement contracts for screening services at four of the five airports currently using private screeners provide some incentives in the form of an award fee tied in part to the contractor’s ability to achieve cost efficiencies and innovations. However, despite TSA’s use of cost-savings as a basis for a portion of the award fees, opportunities for government cost-savings may be limited because under the cost-reimbursement contracts the government bears most of the cost risk—the risk of paying more than it expected. TSA plans to shift more cost risk to contractors by competitively awarding fixed-price-award fee contracts for screening services at the four smallest airports that will participate in the SPP. TSA officials said they also plan to competitively award fixed-price contracts for screening services at larger airports, but will not do so for another 1 to 2 years— when they believe that screening costs at larger airports will be better known. TSA expects that the SPP will operate at a cost that is competitive with equivalent federal operations and will achieve cost-savings, where possible. However, opportunities for cost savings are somewhat limited because of various requirements that contractors must meet in performing the contract. Specifically, under ATSA, private screening companies must provide compensation and other benefits to contract screeners at a level not less than that provided to federal screeners. Further, the contracts require that contractors ensure that security checkpoints are staffed in accordance with TSA’s standard operating procedures and other government requirements and that the screeners have the qualifications and training established by the government. 
While these government airport security standards must be met, TSA has structured its existing contracts to provide some incentives to contractors for cost savings. Specifically, over the last 3 years, TSA has awarded cost-reimbursement contracts with an award fee component for screening services at four of the five airports currently using private screeners. These contracts provide for payment of allowable incurred costs, to the extent prescribed in the contract (typically up to a specified cost ceiling). In addition, the government also agrees to award a separate amount (base fee) fixed at inception of the contract and an award amount (award fee) that the contractor may earn in whole or in part during performance that is sufficient to provide motivation for excellence in such areas as quality, technical approach, and cost-effective management. The amount of the award fee to be paid is determined by the government's judgmental evaluation of the contractor's performance in terms of the criteria stated in the contract. Because cost-savings and contract management account for 20 percent of the award fee determination for the current screening services contracts, these contracts do provide some incentive for contractor cost efficiency. Specifically, the award fee plan establishes the expectation that contractors will provide screening services with cost efficiencies and innovation, while meeting the security standards, mission objectives, and compensation levels required by ATSA and TSA, respectively. These cost and contract management factors include: Overtime/personnel costs—evaluates the contractor's ability to control overtime and personnel costs. Innovation/continuous improvement—evaluates the contractor's ability to build on previous experiences/accomplishments and utilize innovative approaches, techniques, and tools. Other direct/indirect cost—evaluates the contractor's ability to control direct labor cost and overtime costs and its ability to effectively manage its subcontract costs through use of competition to the greatest extent practicable and through documented cost analysis substantiating the reasonableness of subcontract costs. Indirect cost control—evaluates the contractor's ability to control its indirect costs. Despite TSA's use of cost-savings as a basis for a portion of the award fees, opportunities for government cost-savings may be limited in part because under the cost-reimbursement contracts the government bears most of the cost risk—the risk of paying more than it expected. Specifically, under cost-reimbursement contracts, the government must reimburse the contractor for all allowable costs as provided in the contract. TSA plans to shift most cost risk to its contractors by moving to a fixed-price-award-fee contract in the next 1 to 2 years. A fixed-price type of contract places upon the contractor the maximum risk and full responsibility for all costs and resulting profit or loss, providing maximum incentive for the contractor to control costs and perform efficiently. Further, in a competitive environment, pricing by contractors for a fixed-price contract would be subject to marketplace pressures that would provide incentives for the contractor to control costs and reduce prices in order to win the contract.
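To make the cost-risk difference between the two contract types concrete, the sketch below compares the government's payment under each arrangement when a contract overruns its estimate. It is purely illustrative: the dollar figures, fee rates, award-fee share, and cost ceiling are hypothetical assumptions, not terms of TSA's actual screening contracts.

```python
# Illustrative comparison only: all dollar figures, fee rates, and award shares
# are hypothetical assumptions, not actual TSA contract terms.

def cost_plus_award_fee_payment(actual_cost, estimated_cost, base_fee_rate,
                                max_award_fee, award_share, cost_ceiling):
    """Government payment under a cost-reimbursement (cost-plus-award-fee) contract:
    allowable incurred costs (up to the ceiling), plus a base fee fixed at contract
    inception, plus the portion of the award fee earned under the government's evaluation."""
    reimbursed_cost = min(actual_cost, cost_ceiling)
    base_fee = base_fee_rate * estimated_cost
    return reimbursed_cost + base_fee + award_share * max_award_fee

def fixed_price_award_fee_payment(fixed_price, max_award_fee, award_share):
    """Government payment under a fixed-price-award-fee contract: the fixed price plus
    any earned award fee, regardless of the contractor's actual costs."""
    return fixed_price + award_share * max_award_fee

# A hypothetical screening contract estimated at $10 million that overruns by 10 percent.
estimated_cost, actual_cost = 10_000_000, 11_000_000

cost_plus = cost_plus_award_fee_payment(actual_cost, estimated_cost,
                                        base_fee_rate=0.03, max_award_fee=500_000,
                                        award_share=0.8, cost_ceiling=12_000_000)
fixed = fixed_price_award_fee_payment(fixed_price=10_300_000,
                                      max_award_fee=500_000, award_share=0.8)

print(f"Cost-plus-award-fee: government pays ${cost_plus:,.0f} (the overrun falls on the government)")
print(f"Fixed-price-award-fee: government pays ${fixed:,.0f} (the overrun reduces the contractor's profit)")
```

This is the basic reason that, consistent with the FAR principles cited above, cost risk is expected to shift to the contractor once experience provides a basis for firmer pricing.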
The award fee component of a fixed-price contract is used to motivate the contractor by relating the amount of profit or fee payable under the contract to the contractor's performance in the areas of operations, management, contract compliance, and human resources. Because the contract is fixed price, the award fee portion does not assess cost management. TSA has awarded a fixed-price-award-fee contract to the contractor providing screening services at the smallest of the five airports using private screeners, while the contracts for screening services for the other four airports remain as cost-reimbursement contracts. TSA officials stated that the fixed-price contract was awarded for the one airport (on a noncompetitive basis) because costs there were considered predictable and therefore a reasonable basis for firm pricing by the contractor existed. Table 3 provides information on TSA's contracts with current private screening contractors. TSA officials stated that as of February 2006, TSA had awarded, was planning to award, or was in the process of awarding, additional fixed-price contracts on a competitive basis for screening services at three other small airports (categories II, III, and IV) under the SPP. TSA officials acknowledged that cost-reimbursement contracts place most of the cost risk on the government, rather than the contractor, but said the agency would not award fixed-price contracts for screening services at the two larger airports using private screening contractors for another 1 to 2 years. TSA officials stated that they would not award fixed-price contracts to these contractors because they did not know the costs of screening at the larger airports, where they believe costs are variable, and therefore they believe that TSA would be at greater risk of awarding a contract for a higher cost than might actually be necessary. TSA officials acknowledged that TSA already had, through an independent cost-data study, identified and collected some cost and performance data on passenger and checked baggage screening operations at 15 airports with private and federal screeners, including four category X and four category I airports and all five pilot program airports. This study, which was completed in October 2004, looked in particular at cost drivers—factors that contribute to overall expenses. Moreover, an April 2004 study conducted for TSA by a consulting firm estimated how much TSA spent for screening operations at each of the five pilot program airports—including contract payments as well as costs borne by TSA—and compared the results with estimates of how much TSA would have spent had it actually conducted the screening operations at those airports. TSA officials stated that the cost information identified in these two studies provided useful data to help determine the costs of screening at airports currently using private screeners, but said additional information is needed to assist in transitioning to fixed-price contracts for screening services at larger airports. Specifically, TSA officials stated that additional cost information based on the actual costs of participating in the SPP is needed for the larger airports because the SPP contracts differ in two key ways from the pilot and extension contracts that TSA previously awarded. First, the SPP contracts will include specific performance measures and targets that the contractors must meet.
Second, the contracts will allow for contractors to recommend and, if approved, implement innovations, and to select among options for assessing screener candidates and training screeners. The officials stated that it would therefore be difficult for prospective SPP contractors for the larger airports to accurately estimate the costs of providing screening services for a fixed-price contract for larger airports. As a result, TSA officials stated that they needed up to 2 additional years to determine estimated costs in order to potentially transition to fixed-price contracts, and therefore would continue using cost-reimbursement contracts with the largest airports (categories X and I) for that period. By using competitive bidding procedures to award fixed-price contracts to qualified firms, as TSA contemplates, TSA will also help to bring marketplace pressures to bear on competitors’ proposed costs and fees or prices and could enable TSA to maximize contractors’ incentives to control costs and ensure that the contractor, rather than the government, will bear more of the cost risk associated with performance of private screening operations. TSA has developed performance goals and draft measures and targets to assess the performance of private screening contractors under the SPP, but DHS has not yet approved them or established a time frame for doing so. Until DHS approves these measures, TSA cannot finalize and implement them to assess performance. Performance goals are measurable objectives against which actual achievement can be compared. Performance measures are the yardsticks to assess an agency’s success in meeting performance goals, while a performance target is a desired level of performance expressed as a tangible, measurable objective, against which actual achievement will be compared. Together, these performance metrics are used to assess an agency’s progress toward achieving the results expected. We reported in April 2004 that without data to assess the performance of private screening operations, TSA and airport operators have limited information from which to plan for the possible transition of airports from a federal screening system to a private system. In our November 2004 report, we stated that TSA had begun drafting performance measures for this purpose. In the current contracts that TSA awarded to the four private screening contractors, TSA established an award fee process to motivate contractor performance. These contracts were modified in February and March 2005 to implement the award fee process. TSA’s draft quality assurance surveillance and award fee plan for the SPP, dated October 2005, identifies the performance measures TSA plans to use to assess the performance of private screening contractors against TSA’s major goals for the program. According to TSA, the five goals for the SPP in the areas of security, customer service, costs, workforce management, and innovation, are: Ensure security. Provide world class customer service. Implement cost efficiencies. Respect the screening workforce. Create a partnership that leverages strengths of the private and public sector. TSA’s draft quality assurance surveillance and award fee plan for the SPP includes planned performance measures in 14 areas that are to be applied to all private screening companies that participate in the SPP. These performance measures, in addition to an innovation measure, are to be used to determine the award fee provided to contractors that participate in the SPP. 
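To illustrate how scored performance measures could feed an award-fee determination of the kind described above, the following sketch rolls hypothetical evaluation scores up into an earned award fee. The measure areas are taken from the report's description of the SPP goals, but the weights, scores, and maximum fee are hypothetical assumptions, not TSA's draft quality assurance surveillance and award fee plan.

```python
# Illustrative sketch only: weights, scores, and the maximum award fee are hypothetical
# assumptions, not TSA's actual award-fee plan for the SPP.

# Hypothetical weights over the measure areas discussed in the report
# (security, customer service, cost efficiency, workforce, innovation).
weights = {
    "security": 0.40,
    "customer_service": 0.20,
    "cost_efficiency": 0.20,
    "workforce": 0.10,
    "innovation": 0.10,
}

# Hypothetical evaluation scores for one contractor, each on a 0.0-1.0 scale.
scores = {
    "security": 0.95,
    "customer_service": 0.85,
    "cost_efficiency": 0.70,
    "workforce": 0.90,
    "innovation": 0.60,
}

# Weighted composite determines what share of the maximum award fee is earned.
composite = sum(weights[measure] * scores[measure] for measure in weights)

max_award_fee = 400_000  # hypothetical maximum award fee in dollars
earned_award_fee = composite * max_award_fee

print(f"Composite performance score: {composite:.2f}")
print(f"Earned award fee: ${earned_award_fee:,.0f} of ${max_award_fee:,.0f}")
```

In the actual contracts, the award-fee determination is a judgmental evaluation made by the government against the criteria stated in the contract; the sketch only shows how scored measures could roll up into an earned fee.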
Table 4 describes the performance measures. TSA established draft performance targets for 10 of the 14 measures, which all SPP contractors will be required to meet. TSA officials said individual contractors that are accepted as SPP participants in the future will be required to meet TSA's performance targets for the remaining four measures—passenger screening threat image projection (TIP) detection rate; passenger screening TIP false alarm rate; screener recertification pass rates; and customer satisfaction. The targets for these four measures, which TSA has not yet set, are to be specific to each airport participating in the SPP. TSA stated that it has established baseline data for these four performance measures describing how federal screeners or private screening contractors have actually performed at individual airports over time, as well as an overall average of performance. Using the baseline data as a starting point, performance targets would then be set for each airport. For example, if a baseline shows that, historically, all airports met the performance measure for screener recertification pass rates 70 percent of the time, TSA would set the target for that measure at or above 70 percent. TSA officials stated that they are currently working to identify incentives to encourage better results at airports that have historically not met TSA's performance standards for passenger and checked baggage screening. TSA is considering providing financial incentives for a limited time in an effort to quickly move its airports to meet TSA's baseline level of performance.

TSA officials stated that DHS must approve the draft performance goals, measures, and targets before they can be finalized, but as of January 2006, DHS had not yet done so and had not set a deadline for doing so. We asked TSA and DHS officials which office within DHS was responsible for approving these performance metrics, but the officials were not able to provide us with the information. Until DHS approves the draft performance measures, TSA will not be able to finalize and implement them for the SPP. According to our standards for internal controls, agencies must have systems in place for measuring, reporting, and monitoring program performance. In addition, as we have reported in our prior work on the importance of using the Government Performance and Results Act (GPRA) to assist with oversight and decision making, credible performance information is essential for the Congress and the executive branch to accurately assess agencies' progress toward achieving their goals. Further, the draft measures and targets TSA developed for the SPP will also be used by DHS to determine whether to award private screening contractors certification status under the SAFETY Act. Until the SPP measures and targets are finalized, DHS officials stated that they cannot determine whether contractors will perform as intended—a criterion that must be satisfied before awarding certification status.

Since initiating the SPP in November 2004, DHS and TSA have taken steps to develop a legal, contractual, and programmatic framework that enables the private sector to provide passenger and checked baggage screening services, with federal oversight in place to help ensure that security and screener performance are consistent and comparable at airports, whether federal or private screeners are used.
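The baseline-driven approach to airport-specific targets described above can be illustrated with a minimal sketch in Python. The airport names, historical values, and the exact rule are hypothetical; the report specifies only that targets are to be set at or above each airport's historical baseline.

    from statistics import mean

    # Hypothetical historical results for one of the four airport-specific measures
    # (e.g., screener recertification pass rate), expressed as the share of past
    # periods in which the measure was met at each airport.
    history = {
        "Airport A": [0.68, 0.70, 0.72],
        "Airport B": [0.75, 0.78, 0.80],
    }

    def set_airport_targets(history):
        """Set each airport's target at its historical baseline (average), which the
        report describes as the floor ("at or above" the baseline)."""
        return {airport: round(mean(results), 2) for airport, results in history.items()}

    overall_baseline = mean(r for results in history.values() for r in results)
    print(set_airport_targets(history))              # {'Airport A': 0.7, 'Airport B': 0.78}
    print(f"Overall average baseline: {overall_baseline:.2f}")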
As of January 2006, only 7 of over 400 airports that were eligible to apply to participate in the SPP had submitted an application. While we did not attempt to identify the reasons for the small number of applicants to the SPP, a contributing factor may be airports’ concerns about liability. In November 2004, we reported that half of the airport operators we interviewed (13 of 26) were concerned about airport liability in the event that a private screener failed to detect a threat object that led to a terrorist incident. Aviation associations that represent hundreds of airports have also identified liability as a major concern among airports. Although Congress’ recent effort to shield airports from liability in the fiscal year 2006 DHS appropriations act may address this concern, there have been no additional applicants since the act was passed. Furthermore, by extending a level of federal liability protection through the SAFETY Act to current private screening contractors, DHS has laid the groundwork for future contractors to potentially receive comparable protection. Ongoing concerns among prospective participants in the SPP regarding the availability of the most extensive level of protection under the SAFETY Act—certification—may be alleviated once TSA finalizes a standard of performance for private screening contractors that DHS can utilize to determine if contractors have demonstrated that they will perform as intended. As TSA moves forward with implementing the SPP, several opportunities exist for strengthening the management and oversight of the program. First, SPP applicants need clear information on what their roles and responsibilities are to be at airports where a privatized screener workforce operates with federal oversight. The absence of such guidance may affect the ability of responsible officials to effectively and efficiently manage screening checkpoints and the screener program in general. Additionally, without clear and specific information on roles and responsibilities of private screening contractors under the SPP, it may be difficult for prospective SPP contractors to develop an informed estimate of their personnel needs and associated costs under the SPP—information that is needed for the competitive bidding process. Second, in addition to concerns about liability protection, airports in the past expressed concerns about the degree of management control they would have over various aspects of screening services. Since then, TSA has provided additional operational flexibilities to private screening contractors, such as granting contractors and FSDs more input and flexibility in the screener hiring process. Contractors have reported efficiencies they have achieved as a result of these flexibilities, including using fewer screeners than authorized by TSA. Although steps have been taken to address concerns regarding airports’ liability and the need for contractors to have additional management control over various aspects of screening services, only two additional airports, in addition to the five pilot program airports, applied to participate in the SPP. We believe that identifying the underlying reasons for the small number of applicants to the SPP may be helpful to TSA and others in assessing what, if any, changes may be needed to the program. 
Third, while TSA is not required to adhere to the Federal Acquisition Regulation with respect to contracting practices, it has acknowledged the advantages of fixed-price contracts in situations where costs are reasonably understood. To this end, TSA has begun the contract award process for the four smaller airports using a fixed-price type of contract. TSA has decided to continue to use cost-plus-award-fee contracts rather than fixed-price contracts with private screening contractors providing services at the larger airports for at least an additional 1 to 2 years so that it can continue to collect information on the costs of screening operations at these airports. Using fixed-price contracts, as TSA plans to do, would result in the contractors assuming substantial cost responsibility from the government related to screener operations and help ensure that private screening contractors deliver the most cost-effective services, while ensuring that TSA and ATSA requirements related to maintaining airport security are met. The use of competitively awarded fixed-price contracts should provide a built-in incentive for contractors to identify cost-saving opportunities and innovations, which in turn may help reduce costs of screening contracts at the larger airports using private screeners. TSA could make these cost-savings opportunities available to airports with federal screeners, as appropriate, thereby transferring the efficiencies identified by the private sector to the federal government. Finally, until DHS approves the performance goals, measures, and targets for the SPP, it will not have a mechanism in place beyond the ongoing contracts for assessing the performance of private screening contractors. Without these performance goals, measures, and targets it may be difficult for TSA to identify areas of screener operations that contractors may be able to improve. To strengthen its administration of the SPP and to help address stakeholder concerns, we recommend that the Secretary of DHS direct the Assistant Secretary, TSA, to take the following actions: Formally document and communicate with all FSDs, current private screening contractors, and entities that apply to the SPP, the roles and responsibilities of all stakeholders that participate in the SPP, pertaining to the management and deployment of screening services. To help ensure the completion of a performance management framework for the SPP so that TSA can assess SPP contractors and to promote accountability of SPP contractors for achieving desired program outcomes, we recommend that the Secretary of the Department of Homeland Security take the following action: Establish a time frame for completing its review of the performance goals, measures, and targets for the SPP so that TSA may apply them at the earliest possible opportunity. We provided a draft of this report to DHS and TSA for review and comment. On March 10, 2006, we received written comments on the draft report, which are reproduced in full in appendix II. DHS generally concurred with the findings and recommendations in the report, and stated that efforts to implement our recommendations will help them develop a more effective, efficient, and economical administration of TSA’s SPP. 
With regard to our recommendation that TSA formally document and communicate with all FSDs, current private screening contractors, and entities that apply to the SPP, the roles and responsibilities of all stakeholders that participate in the SPP pertaining to the management and deployment of screening services, DHS identified steps that TSA is taking to this end. Specifically, DHS stated that TSA updated its SPP transition plan, which, among other things, further clarifies the relationships among the FSD, FSD staff, private screening contractors, and other stakeholders as they relate to SPP program management. DHS also stated that TSA has assembled a transition team to work closely with SPP stakeholders to foster awareness and ensure communication regarding the roles and responsibilities of all involved parties working under the SPP. TSA's successful implementation of these ongoing efforts should address the concerns we raised regarding documenting and communicating roles and responsibilities under the SPP.

In addition, regarding our recommendation that DHS establish a time frame for completing its review of the performance goals, measures, and targets for the SPP so that TSA may apply them at the earliest possible opportunity, DHS stated that TSA had established performance metrics and had provided them to DHS for its review. However, DHS did not specify a time frame for completing its review. We continue to believe that it is important for DHS to establish a time frame for completing its review of the performance goals, measures, and targets for the SPP. Without these performance metrics, TSA will not have a mechanism in place beyond the ongoing contracts for assessing the performance of private screening contractors. Further, until performance metrics are finalized, it remains unlikely that DHS will award such contractors certification under the SAFETY Act. DHS also provided updated information on the status of the SPP, which we incorporated where appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from the date of this report. At that time, we will send copies of this report to the Secretary of the Department of Homeland Security, the Administrator of the Transportation Security Administration, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3404 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix IV.
To assess the Transportation Security Administration's (TSA) efforts to implement the Screening Partnership Program (SPP), we analyzed (1) the status of federal efforts to determine whether and to what extent liability protection will be provided to private screening contractors and airports, and actions taken on other stakeholder concerns related to participation in the SPP; (2) how TSA has determined it will achieve cost-savings goals for screener operations through the SPP, specifically through choice of contract used and contract terms; and (3) TSA's progress in developing and implementing performance goals, measures, and targets to assess the performance of the private screening contractors who will be participating in the SPP.

To assess the status of federal efforts to determine whether and to what extent liability protection should be provided to private screening contractors and airports, and actions taken on other stakeholder concerns related to participation in the SPP, we reviewed Department of Homeland Security (DHS) and TSA documentation on the Support of Anti-terrorism by Fostering Effective Technologies Act (SAFETY Act) and the SPP. Specifically, we reviewed DHS guidance posted on its Web site related to the SAFETY Act and other DHS documentation on this act. We also reviewed TSA's written responses to stakeholders' frequently asked questions about the SPP, including information on liability protection; TSA's transition plan for the SPP, which documents internal guidance on transitioning an airport from a federal screener workforce to a private screener workforce; TSA's communications plan for the SPP; airport applications to the SPP; and other guidance-related materials TSA developed for airports and private screening contractors. Additionally, we reviewed relevant legislation, such as the Aviation and Transportation Security Act and the SAFETY Act, our prior reports that addressed issues related to the SPP and the use of private sector screeners, and testimony at congressional hearings on the SPP. Further, we interviewed DHS officials regarding the SAFETY Act and TSA headquarters officials responsible for implementing the SPP to determine efforts underway to address stakeholder concerns regarding the SPP. We also conducted semi-structured telephone interviews with the four private screening contractors currently providing passenger and checked baggage screening services, the airport directors at the seven airports that applied to participate in the SPP (the five pilot program airports as well as Elko Regional Airport in Elko, Nevada, and Sioux Falls Regional Airport in Sioux Falls, South Dakota), and the federal security directors (FSDs) at each of these airports to obtain their views on TSA's efforts to implement the SPP and to address stakeholders' concerns. Finally, we interviewed officials from two aviation associations—the American Association of Airport Executives and the Airports Council International—and a major liability insurance provider to obtain information on the type of insurance available to private screening contractors.

To assess the status of TSA's efforts to achieve cost-savings in screener operations through the SPP, specifically with respect to the choice of contract used and contract terms, we reviewed TSA's contracts for screening services for the four contractors currently providing passenger and checked baggage screening services.
We did not review the contracts that TSA awarded in early 2006 to two contractors to provide private screening services in the SPP. Additionally, we reviewed TSA's acquisition policies and procedures, the Federal Acquisition Regulation, and the Federal Aviation Administration's acquisition management system to identify standards and guidance for contracting practices of TSA and the federal government. Further, we reviewed TSA's transition plan and other SPP guidance to identify TSA's current and planned approaches for identifying screening program costs. We also reviewed TSA's activity-based costing study that assessed the cost of passenger and checked baggage screening operations at 15 airports, including the 5 that participated in the 2-year pilot program using private screeners. We determined that the results of the activity-based costing study were sufficiently reliable for the purpose of our review. Finally, to gather perspectives on opportunities for cost-savings under the SPP, we interviewed TSA SPP and contracting officials, the four contractors currently providing screening services, the directors of the seven airports that applied to the SPP and the FSDs at these airports, and representatives of the American Association of Airport Executives and the Airports Council International. We did not review TSA's actual determination of the amount of contractor award fee. Nor did we review the conduct of TSA's performance evaluation boards or fee-determining official in evaluating contractor performance against award fee criteria (including cost-savings) and determining the amount of the contractors' award fee. However, we did verify that TSA had evaluated contractor performance (including cost-savings) in making award fee determinations.

To assess TSA's progress in developing and implementing performance goals, measures, and targets to assess the performance of the private screening contractors who will be participating in the SPP, we reviewed the terms of TSA's award fee process specified in the current contracts and TSA's draft quality assurance surveillance and award fee plan. We also reviewed TSA's June 2004 guidance on the SPP, other guidance-related material TSA developed for private screening contractors and airports regarding the SPP, TSA's contracts for the private screening contractors currently providing screening services, TSA testimony at congressional hearings, and our prior reports that addressed issues related to the SPP and the use of private-sector screeners. A listing of our prior reports is contained in appendix IV. Additionally, we interviewed TSA headquarters officials responsible for the SPP. We performed our work from March 2005 through March 2006 in accordance with generally accepted government auditing standards.

In addition to the contact named above, David Alexander, C. Jenna Battcher, Chuck Bausell, Amy Bernstein, David Hooper, Lara Laufer, Thomas Lombardi, Hugh C. Pacquette, Lisa Shibata, Maria Strudwick, and Adam Vodraska made key contributions to this report.

Aviation Security: Significant Management Challenges May Adversely Affect Implementation of the Transportation Security Administration's Secure Flight Program. GAO-06-374T. Washington, D.C.: February 9, 2006. Aviation Security: Federal Air Marshal Service Could Benefit from Improved Planning and Controls. GAO-06-203. Washington, D.C.: November 28, 2005. Aviation Security: Federal Action Needed to Strengthen Domestic Air Cargo Security. GAO-06-76. Washington, D.C.: October 17, 2005.
Transportation Security Administration: More Clarity on the Authority of Federal Security Directors Is Needed. GAO-05-935. Washington, D.C.: September 23, 2005. Aviation Security: Flight and Cabin Crew Member Security Training Strengthened, but Better Planning and Internal Controls Needed. GAO-05-781. Washington, D.C.: September 6, 2005. Aviation Security: Transportation Security Administration Did Not Fully Disclose Uses of Personal Information During Secure Flight Program Testing in Initial Privacy Notes, but Has Recently Taken Steps to More Fully Inform the Public. GAO-05-864R. Washington, D.C.: July 22, 2005. Aviation Security: Better Planning Needed to Optimize Deployment of Checked Baggage Screening Systems. GAO-05-896T. Washington, D.C.: July 13, 2005. Aviation Security: TSA Screener Training and Performance Measurement Strengthened, but More Work Remains. GAO-05-457. Washington, D.C.: May 2, 2005. Aviation Security: Secure Flight Development and Testing Under Way, but Risks Should Be Managed as System Is Further Developed. GAO-05-356. Washington, D.C.: March 28, 2005. Aviation Security: Systematic Planning Needed to Optimize the Deployment of Checked Baggage Screening Systems. GAO-05-365. Washington, D.C.: March 15, 2005. Aviation Security: Measures for Testing the Impact of Using Commercial Data for the Secure Flight Program. GAO-05-324. Washington, D.C.: February 23, 2005. Transportation Security: Systematic Planning Needed to Optimize Resources. GAO-05-357T. Washington, D.C.: February 15, 2005. Aviation Security: Preliminary Observations on TSA’s Progress to Allow Airports to Use Private Passenger and Baggage Screening Services. GAO-05-126. Washington, D.C.: November 19, 2004. General Aviation Security: Increased Federal Oversight Is Needed, but Continued Partnership with the Private Sector Is Critical to Long-Term Success. GAO-05-144. Washington, D.C.: November 10, 2004. Aviation Security: Further Steps Needed to Strengthen the Security of Commercial Airport Perimeters and Access Controls. GAO-04-728. Washington, D.C.: June 4, 2004. Transportation Security Administration: High-Level Attention Needed to Strengthen Acquisition Function. GAO-04-544. Washington, D.C.: May 28, 2004. Aviation Security: Challenges in Using Biometric Technologies. GAO-04-785T. Washington, D.C.: May 19, 2004. Nonproliferation: Further Improvements Needed in U.S. Efforts to Counter Threats from Man-Portable Air Defense Systems. GAO-04-519. Washington, D.C.: May 13, 2004. Aviation Security: Private Screening Contractors Have Little Flexibility to Implement Innovative Approaches. GAO-04-505T. Washington, D.C.: April 22, 2004. Aviation Security: Improvement Still Needed in Federal Aviation Security Efforts. GAO-04-592T. Washington, D.C.: March 30, 2004. Aviation Security: Challenges Delay Implementation of Computer- Assisted Passenger Prescreening System. GAO-04-504T. Washington, D.C.: March 17, 2004. Aviation Security: Factors Could Limit the Effectiveness of the Transportation Security Administration’s Efforts to Secure Aerial Advertising Operations. GAO-04-499R. Washington, D.C.: March 5, 2004. Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: February 13, 2004. Aviation Security: Challenges Exist in Stabilizing and Enhancing Passenger and Baggage Screening Operations. GAO-04-440T. Washington, D.C.: February 12, 2004. 
The Department of Homeland Security Needs to Fully Adopt a Knowledge-based Approach to Its Counter-MANPADS Development Program. GAO-04-341R. Washington, D.C.: January 30, 2004. Aviation Security: Efforts to Measure Effectiveness and Strengthen Security Programs. GAO-04-285T. Washington, D.C.: November 20, 2003. Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed. GAO-04-242. Washington, D.C.: November 19, 2003. Aviation Security: Efforts to Measure Effectiveness and Address Challenges. GAO-04-232T. Washington, D.C.: November 5, 2003. Airport Passenger Screening: Preliminary Observations on Progress Made and Challenges Remaining. GAO-03-1173. Washington, D.C.: September 24, 2003. Aviation Security: Progress Since September 11, 2001, and the Challenges Ahead. GAO-03-1150T. Washington, D.C.: September 9, 2003. Transportation Security: Federal Action Needed to Enhance Security Efforts. GAO-03-1154T. Washington, D.C.: September 9, 2003. Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003. Federal Aviation Administration: Reauthorization Provides Opportunities to Address Key Agency Challenges. GAO-03-653T. Washington, D.C.: April 10, 2003. Transportation Security: Post-September 11th Initiatives and Long- Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003. Airport Finance: Past Funding Levels May Not Be Sufficient to Cover Airports Planned Capital Development. GAO-03-497T. Washington, D.C.: February 25, 2003. Transportation Security Administration: Action and Plan to Build a Results-Oriented Culture. GAO-03-190. Washington, D.C.: January 17, 2003. Aviation Safety: Undeclared Air Shipments of Dangerous Goods and DOT’s Enforcement Approach. GAO-03-22. Washington, D.C.: January 10, 2003. Aviation Security: Vulnerabilities and Potential Improvements for the Air Cargo System. GAO-03-344. Washington, D.C.: December 20, 2002. Aviation Security: Vulnerability of Commercial Aviation to Attacks by Terrorists Using Dangerous Goods. GAO-03-30C. Washington, D.C.: December 3, 2002. Aviation Security: Registered Traveler Program Policy and Implementation Issues. GAO-03-253. Washington, D.C.: November 22, 2002. Airport Finance: Using Airport Grant Funds for Security Projects Has Affected Some Development Projects. GAO-03-27. Washington, D.C.: October 15, 2002. Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: October 2, 2002. Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-02-971T. Washington, D.C.: July 25, 2002. Aviation Security: Information Concerning the Arming of Commercial Pilots. GAO-02-822R. Washington, D.C.: June 28, 2002. Aviation Security: Vulnerabilities in, and Alternatives for, Preboard Screening Security Operations. GAO-01-1171T. Washington, D.C.: September 25, 2001. Aviation Security: Weaknesses in Airport Security and Options for Assigning Screening Responsibilities. GAO-01-1165T. Washington, D.C.: September 21, 2001. Homeland Security: A Framework for Addressing the Nation’s Efforts. GAO-01-1158T. Washington, D.C.: September 21, 2001. Aviation Security: Terrorist Acts Demonstrate Urgent Need to Improve Security at the Nation’s Airports. GAO-01-1162T. Washington, D.C.: September 20, 2001. Aviation Security: Terrorist Acts Illustrate Severe Weaknesses in Aviation Security. GAO-01-1166T. 
Washington, D.C.: September 20, 2001.

In November 2004, as required by law, the Transportation Security Administration (TSA) began allowing all commercial airports to apply to use private screeners in lieu of federal screeners as part of its Screening Partnership Program (SPP). GAO's prior work found that airports and potential private screening contractors had concerns about the SPP, including whether they would be liable in the event of a terrorist attack and how roles and responsibilities would be divided among TSA airport staff and private screening contractors. This report addresses TSA's efforts to (1) provide liability protection to private screening contractors and airports and address other SPP stakeholder concerns; (2) achieve cost-savings through the SPP; and (3) establish performance goals and measures for the SPP.

DHS and Congress have begun to address whether liability protection may be offered to current and prospective private screening contractors and airports using private screeners. DHS has already provided some liability protection to three of the four current private screening contractors. However, DHS officials stated that they cannot provide additional coverage, which would render contractors virtually immune from all pertinent claims, because TSA has not finalized performance standards that would allow DHS to determine if contractors will perform as intended--a criterion that must be satisfied before providing such additional protection. Recently enacted legislation shields airports from virtually all liability resulting from the negligence or wrongdoing committed by a private screening company or its employees. TSA has also taken action to improve the screener hiring process by granting contractors and TSA airport officials more input and flexibility in the hiring process. Additionally, TSA has defined the roles and responsibilities for SPP stakeholders--TSA airport staff and private screening contractors, among others--in its August 2005 SPP transition plan. However, the details in this plan have not been shared with private screening contractors, and all four contractors we interviewed were unclear about TSA staff roles and responsibilities at the airports they served.

TSA has stated that the SPP will operate at a cost that is competitive with equivalent federal operations and will achieve cost-savings where possible. Over the last 3 years, TSA has awarded cost-reimbursement contracts with an award fee component for screening services at four of the five airports currently using private screeners. The award fee is based, in part, on contractor cost-savings. However, opportunities for TSA cost-savings may be limited because under the cost-reimbursement contracts TSA bears most of the cost risk--the risk of paying more than it expected. TSA plans to shift more cost risk to contractors by competitively awarding fixed-price-award-fee contracts for screening services at the four smallest airports that will participate in the SPP. TSA also plans to competitively award fixed-price contracts for screening services at larger airports, but stated that it cannot do so for up to 2 years--when officials believe that screening costs at larger airports will be better known.

TSA has developed performance goals and has begun drafting related measures and targets to assess the performance of private screening contractors under the SPP in the areas of security, customer service, costs, workforce management, and innovation.
For example, one of the measures would require contractors to ensure that new hires receive required training. TSA's related target for this measure is that 100 percent of new hires will complete required training. These same measures and targets will also be used by DHS to assess whether to award full liability coverage under the SAFETY Act. TSA officials stated that DHS must approve the draft performance measures and targets before they can be finalized. As of January 2006, DHS had not yet completed its review.
Business incentives are inducements that state and local governments can offer to attract or retain businesses and jobs. Incentives offered by state and local governments may be in the form of a direct payment to a business to locate or remain in a certain area. Incentives may also be less direct; for example, they can take the form of exemptions from state and local taxes; loans on favorable terms, such as industrial revenue bonds or direct loans from state and local agencies; or state-subsidized job training. Over the past two decades, the variety of incentives offered by states and local governments has grown. One estimate by the state of Ohio shows that state and local governments annually spend billions of dollars to motivate businesses to relocate within their jurisdiction or to keep businesses from moving out.

The use of incentives and their effectiveness and impact in luring businesses and jobs from one location to another have become the subject of much debate in recent years. Proponents of incentives maintain that they are a cost-effective way to promote economic development and that incentives have become a necessity because of the economic competitiveness that exists between regions and states. Proponents believe that business incentives have a positive effect on business-location decisions. On the other hand, opponents contend that relocating businesses from one area or state to another is a zero-sum game that, in the aggregate, creates little, if any, economic benefit. Opponents also contend that the dollars spent to provide incentives would be better used if applied to other services believed to be more important in economic development, such as improvements to the infrastructure and investments in human resources and education.

Concern also exists that federal programs are being used and, in some cases, misused to provide incentives for luring businesses into relocating from one area to another. Federal programs provide state and local governments with loans and grants that can be used for transportation projects, waste treatment facilities, worker training, and other types of services that can be used as incentives. Newspaper articles have chronicled stories of how one community allegedly used federal funds to lure jobs from another community or how a community allegedly used federal funds to provide low-interest loans for retaining a business within its jurisdiction only to see the business move out of the community at a later date.

The eight federal programs provide loans and grants that states, communities, and others can use to fund a variety of activities for which the economic development of an area or of individuals is either the intended benefit or a possible byproduct. An overview of each program that describes the program's objectives and eligible activities, the types of funding provided, the entities eligible to receive program funding, the funding provided in fiscal year 1996, and a description of projects funded follows. Appendix I describes the programs in greater detail.

Objectives, eligible activities, and types of funding provided: EDA's Public Works and Development Facilities Program provides grants for helping finance projects in distressed communities to attract new industry, encourage business expansion, and generate long-term, private-sector jobs.
Grants can be used for a variety of projects, including water and sewer systems serving primarily industrial and commercial users; access roads and other industrial park infrastructure improvements; port facilities; railroad sidings; tourism facilities; and vocational schools used primarily to train unemployed and underemployed adults. Funds can be used to acquire and develop land for these facilities and to construct, rehabilitate, alter, or expand them. Projects must be located within an EDA-designated redevelopment area. Entities eligible to receive funding: States, cities, counties and other political subdivisions, Indian tribes, commonwealths and territories, and private and public nonprofit organizations representing redevelopment areas are eligible to receive grants. Funding provided in fiscal year 1996: Public Works and Development Facilities grants normally cover up to 50 percent of a project’s cost; the remainder of a project’s funding is provided by the grantee. In fiscal year 1996, EDA awarded 158 Public Works and Development Facilities grants totaling $164.9 million. Description of projects funded: The largest share of Public Works and Development Facilities grant dollars awarded in fiscal year 1996—about $77 million, or 47 percent—went for projects involving water and sewer facilities. Other types of projects funded in fiscal year 1996 included industrial parks, industrial buildings, streets and roads, harbor development, and airports. Objectives, eligible activities, and types of funding provided: HUD’s CDBG Program provides communities with grants for activities that will benefit low- and moderate-income people, prevent or eliminate slums or blight, or meet urgent community development needs. CDBG funds can be used for a variety of activities, including to acquire, construct, or reconstruct public facilities, such as hospitals, nursing homes, and water and sewer facilities; provide new or expand existing crime prevention, child care, and other public services; rehabilitate housing; carry out special economic development projects, including the construction or reconstruction of commercial or industrial facilities; and provide community organizations with assistance for economic development and neighborhood revitalization projects. Entities eligible to receive funding: The Entitlement Communities Program, which provides grants to large cities—those that are a central city of a metropolitan area or any other city within a metropolitan area that has a population of 50,000 or more—and to urban counties—counties within a metropolitan area with populations of 200,000 or more (excluding the population of metropolitan cities included therein)—and the State and Small Cities Programs, which provide states with grants for distribution to the smaller, nonentitled communities, are the major components of the CDBG Program. Grants are based on formulas that consider population, the extent of poverty, the extent of overcrowding, and the age of housing of the entitled community or state. Because low- and moderate-income persons are the principal beneficiaries of CDBG funds, at least 70 percent of CDBG expenditures must be for activities primarily benefiting such persons. Funding provided in fiscal year 1996: Of the $4.6 billion in funds appropriated for the CDBG Program for fiscal year 1996, the Entitlement Communities and the State and Small Cities Programs together received about $4.4 billion. 
Description of projects funded: HUD’s data for 1993—the most recent year for which complete data are available—show that housing rehabilitation was the most prominent activity funded by entitled communities, accounting for about 31 percent of the funds spent during the year. Water and sewer activities were the most prominent activity funded by nonentitled communities, accounting for about 29 percent of the funds spent during the year. CDBG funds were used for, among other things, 3,000 projects to improve water, sewer, flood control, and drainage systems; 3,700 projects to repair or maintain roads, bridges, and sidewalks; and over 8,200 projects to construct or rehabilitate public facilities, such as facilities for abused and neglected children and child care and senior-citizen centers. HUD estimates that about 115,000 jobs were created in 1993 through the CDBG Program. Objectives, eligible activities, and types of funding provided: The EZ/EC program is a 10-year program administered by HUD and Agriculture that targets federal grants and provides tax and regulatory relief for helping distressed urban and rural communities overcome their economic and social problems. Funding for the EZ/EC Program is provided primarily by HHS’ SSBG Program. EZs and ECs can use EZ/EC SSBG grants to fund a range of economic and social development activities that are identified in their strategic plans. The strategic plan, developed in conjunction with residents and other stakeholders in the community, outlines the community’s vision for revitalizing its distressed areas and the activities and projects planned to accomplish this task. In addition to the EZ/EC SSBG funds, EZs can receive special tax incentives and other assistance, while ECs qualify only for the special tax incentives. Entities eligible to receive funding: In December 1994, HUD and Agriculture designated 104 communities as either EZs or ECs. Funding provided before and during fiscal year 1996: In 1994 and 1995, HHS allocated $100 million in EZ/EC SSBG grants for each of the 6 urban EZs, $40 million for each of the 3 rural EZs, and just under $3 million for each of the 95 urban and rural ECs for use over the 10-year life of the program. EZs and ECs draw down EZ/EC SSBG funds through the state or their cognizant state agency as needed for specific projects. As of June 30, 1997, EZs and ECs had requested $119.9 million of the $1 billion in total SSBG funds allocated by HHS. Description of projects funded: Projects funded by urban EZs and ECs include (1) a partnership in the Chicago EZ with a local college to prepare students for the General Educational Development tests, (2) a school-based program to reduce alcohol- and drug-related violence in the Detroit EZ, and (3) buying sites for a supermarket and retail stores in the Philadelphia EZ to create jobs for residents. Projects that rural EZs and ECs plan to fund include (1) establishing family service centers in the Central Savannah River Area EC in Georgia to provide recreation and leadership classes for youth and adult literacy classes, (2) refurbishing retail business facades in the City of Watsonville EC in California to improve the downtown area, and (3) building and equipping four rural fire stations in the Kentucky Highlands EZ. 
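As a rough check of the allocation arithmetic above, the short sketch below in Python totals the EZ/EC SSBG amounts; the per-EC figure is stated only as "just under $3 million," so $2.95 million is assumed here for illustration.

    # Figures in millions of dollars, from the allocations described above.
    urban_ez_total = 6 * 100.0    # 6 urban EZs at $100 million each
    rural_ez_total = 3 * 40.0     # 3 rural EZs at $40 million each
    ec_total       = 95 * 2.95    # 95 ECs at just under $3 million each (assumed $2.95 million)

    total_allocated = urban_ez_total + rural_ez_total + ec_total
    requested = 119.9             # drawn down as of June 30, 1997

    print(f"Total allocated: about ${total_allocated:,.0f} million")                # about $1 billion
    print(f"Share requested by June 30, 1997: {requested / total_allocated:.0%}")   # about 12%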
Objectives, eligible activities, and types of funding provided: Labor's Employment and Training Assistance for Dislocated Workers Program, authorized under title III of JTPA, as amended, provides states and substate organizations with grants to help dislocated workers qualify for and find new jobs. Dislocated workers are those who have lost jobs because of mass layoffs or plant closings and include the long-term unemployed and unemployed self-employed workers and those individuals who have been laid off or notified of a layoff and who are unlikely to return to their previous occupation or industry. State and substate grantees can tailor the services for dislocated workers to meet participants' needs. Services that can be provided for dislocated workers include (1) retraining services, including classroom and on-the-job training, basic and remedial education, and instruction in English; (2) basic readjustment services, such as job counseling, job placement assistance, labor market information, and supportive services, including child care and commuting assistance; and (3) needs-related payments to eligible dislocated workers who have exhausted their unemployment compensation and who require such assistance to participate in a job training program. Entities eligible to receive funding: Eighty percent of the title III funds provided for the Employment and Training Assistance for Dislocated Workers Program are allotted to states on the basis of a formula that considers the states' unemployment levels. The remaining 20 percent of funds are retained by Labor and used to provide assistance to territories, to fund multistate projects, to provide assistance to workers dislocated by natural disasters, and to supplement state grants when they are not sufficient to provide services for workers dislocated by mass layoffs, including those resulting from federal actions, such as reductions in defense spending or compliance with Clean Air Act requirements. Funding provided during fiscal year 1996: Of the approximately $1.09 billion in total Employment and Training Assistance for Dislocated Workers Program grants for program year 1996, about $880 million was allotted to the states and about $214 million was retained in reserve by Labor. Description of projects funded: Data provided by Labor show that the 268,000 individuals who left the program during program year 1995 had received assistance through the Employment and Training Assistance for Dislocated Workers Program, including about 125,000 who received basic readjustment services only, about 119,000 who received occupational training, and about 78,000 who received supportive services. Objectives, eligible activities, and types of funding provided: HHS' CSBG Program provides states with grants to alleviate the causes of poverty by helping low-income individuals and families obtain adequate jobs, education, and housing. CSBG funds can be used to provide (1) a range of services and activities having a major impact on the causes of poverty; (2) activities designed to assist low-income participants; (3) supplies, services, and nutritious food on an emergency basis to counteract the conditions of malnutrition among the poor; and (4) coordination and linkage between governmental and other social services programs to ensure the effective delivery of such services to low-income individuals.
Entities eligible to receive funding: CSBG funds are allocated to the states, and the states must pass through at least 90 percent of the funds they receive to locally based community action agencies that provide CSBG services. The states may use the remaining funds for antipoverty projects in the state and to administer the program. Funding provided during fiscal year 1996: HHS provided states with $389.6 million in CSBG funding for fiscal year 1996. Description of projects funded: HHS' data for fiscal year 1994—the latest year for which complete data are available—show that CSBG funding totaled $357.4 million and that the largest share of funds—$98.4 million—was spent on activities to target and coordinate the array of local services and programs available to combat poverty. About $73.1 million was spent for emergency services, such as shelter and food assistance; $43.8 million for nutrition programs; and $35.4 million and $25.7 million, respectively, for education and employment activities. Objectives, eligible activities, and types of funding provided: EPA's Clean Water State Revolving Fund Program provides the states, including Puerto Rico, with annual funds to help capitalize revolving funds established by the states to finance wastewater treatment facilities and other water quality projects needed to improve water quality and protect public health. Grants are allotted to the states generally according to percentages specified in the Clean Water Act. States must match grants at a rate of at least $1 for every $5 received. States can use their revolving funds to provide loans and other assistance (but not grants) for (1) constructing publicly owned wastewater treatment facilities; (2) implementing programs to control nonpoint sources of water pollution, such as agricultural runoff; and (3) developing and implementing plans to conserve and manage estuaries. Entities eligible to receive funding: State, municipal, tribal, intermunicipal, and interstate agencies are eligible for loans and other assistance from state revolving funds. Individuals can also receive assistance for activities to control nonpoint sources of water pollution and to conserve and manage estuaries. Wastewater treatment projects financed by the revolving funds must be on the state-prepared project priority list. The list identifies and ranks treatment facilities that the state expects to fund. Activities that a state intends to fund to control nonpoint sources of water pollution and to protect estuaries must be included in the state's annual plan identifying the state's intended use of the fund. Funding provided during fiscal year 1996: During fiscal year 1996, EPA awarded about $1.7 billion in capitalization grants to the states. EPA anticipates that grants to capitalize state revolving funds will continue until 2004. According to the Director, EPA State Revolving Fund Branch, wastewater treatment facilities account for about 95 percent of the dollars in assistance provided by state revolving funds. Description of projects funded: Data from EPA's State Revolving Fund Management Information System show that as of June 30, 1996, facilities for the secondary treatment of wastewater represented about 50 percent of the total projects funded by state revolving funds; other types of projects included combined sewer overflow projects, facilities to handle and treat sludge at water treatment plants, and projects to protect or restore streams, wetlands, and estuaries.
Objectives, eligible activities, and types of funding provided: The Water and Waste Disposal Program provides loans and grants for rural communities with populations of 10,000 or less to develop water and waste disposal systems that will improve the quality of life and promote economic development in rural areas. Assistance can be in the form of direct loans and/or grants from Agriculture or loans from commercial sources that are guaranteed against loss by Agriculture. Grants are provided for reducing water and waste disposal costs to a reasonable level for projects serving financially needy communities. Water and Waste Disposal loans and grants may be used to construct, repair, improve, expand, or modify rural water, sanitary sewage, solid waste disposal, and storm wastewater disposal systems. Facilities that may be funded include reservoirs, pipelines, wells, pumping stations, and sewer and storm sewer systems. Funds can be used to acquire land and water rights and to pay legal, engineering, and other fees associated with developing facilities. Entities eligible to receive funding: Assistance is available to municipalities, counties, Indian tribes, special purpose districts, and nonprofit corporations. Applicants must be unable to obtain other financing at reasonable rates and terms. Funding provided during fiscal year 1996: In fiscal year 1996, Agriculture provided about $963 million in Water and Waste Disposal direct loans and grants. In addition to the direct loans and grants, Agriculture also guaranteed about $59 million in loans. Description of projects funded: Of the $963 million obligated in fiscal year 1996, 617 direct loans worth about $389 million and 435 grants worth about $198 million were provided for rural water projects. Agriculture also provided 278 loans worth about $214 million and 233 grants worth about $162 million for rural waste disposal projects. Objectives, eligible activities, and types of funding provided: Transportation’s Surface Transportation Program (STP) provides states with grants for a variety of highway, mass transit, pedestrian, bikeway, and intermodal transportation projects. STP funds are apportioned on the basis of historical federal funding that indirectly includes factors such as postal route mileage, land area, and the urban and rural population of each state. Each state must reserve 10 percent of its STP allotments for safety construction activities, such as rail-highway grade crossings, and 10 percent for transportation enhancements, such as the control and removal of outdoor advertising. Of the remaining funds, the state must distribute 62.5 percent between urbanized areas that have populations exceeding 200,000 and the remaining areas of the state in proportion to their relative share of the state’s population. States can use the remaining 37.5 percent in any area of the state. Entities eligible to receive funding: Localities, especially larger communities, are given an unprecedented level of control to select the surface transportation solutions that best fit their needs and preferences. 
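To make the STP set-asides described above concrete, the sketch below in Python applies them to a hypothetical $100 million state apportionment; the dollar figure is illustrative only, and the percentages are those described in the report.

    def stp_breakdown(apportionment):
        """Divide a state's STP apportionment per the set-asides described above."""
        safety       = 0.10 * apportionment    # safety construction activities
        enhancements = 0.10 * apportionment    # transportation enhancements
        remaining    = apportionment - safety - enhancements
        suballocated = 0.625 * remaining       # split between large urbanized areas and the
                                               # rest of the state by relative population share
        flexible     = 0.375 * remaining       # usable in any area of the state
        return {"safety": safety, "enhancements": enhancements,
                "population-based": suballocated, "any area": flexible}

    print(stp_breakdown(100.0))
    # {'safety': 10.0, 'enhancements': 10.0, 'population-based': 50.0, 'any area': 30.0}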
Projects that can be funded with STP grants include (1) highway and bridge construction, reconstruction, and rehabilitation projects; (2) transit projects, including publicly owned intracity or intercity bus terminals and facilities; (3) car pool projects, corridor parking facilities, bicycle transportation, and pedestrian walkways; (4) highway and transit safety improvements, projects to mitigate hazards caused by wildlife, and rail-highway grade crossings; and (5) capital and operating costs for traffic monitoring, management, and control facilities and programs. Funding provided during fiscal year 1996: Transportation apportioned about $3.4 billion in STP funds to the states for fiscal year 1996. Description of projects funded: In fiscal year 1996, Transportation obligated STP funds for a variety of activities, including $416.6 million for safety construction activities, $426.9 million for transportation enhancement projects, $1 billion for projects in urbanized areas with populations exceeding 200,000, and $2.8 billion for state-discretionary projects in any area of the states. Three of the eight programs—the Public Works and Development Facilities Program, the Employment and Training Assistance for Dislocated Workers Program, and the EZ/EC Program—have restrictions against using program funds to relocate businesses if the relocations result in the loss of jobs in other areas. In May 1997, the House of Representatives passed legislation that would, among other things, prohibit using HUD’s CDBG funds to relocate businesses if the relocation results in plant closings or job losses in other areas where the business is operating. As of August 1, 1997, this legislation was pending in the Senate. A second bill that would also prohibit using CDBG funds to relocate jobs was pending in the Senate at that time. The remaining four programs—HHS’ Community Services Block Grant Program, EPA’s Clean Water State Revolving Fund Program, Agriculture’s Water and Waste Disposal Program, and Transportation’s Surface Transportation Program—do not address the issue of using program funds to relocate jobs. EDA’s current regulations generally prohibit using Public Works and Development Facilities grants to assist employers who transfer one or more jobs from one commuting area to another. EDA’s nonrelocation requirement is applicable only to firms relocating to EDA-funded project areas until the time that EDA approves a grant; EDA’s nonrelocation requirement does not apply after the grant is approved. This change was made in October 1995, when EDA revised its regulations. EDA’s 1995 revision to the regulations also allowed for exclusions from the nonrelocation requirement for businesses that (1) relocate to the area prior to the applicant’s request for EDA’s assistance; (2) have moved or will move primarily for reasons that have no connection to EDA’s assistance; (3) will expand employment in the area where the project is located substantially beyond employment in the area where the business was originally located; (4) are relocating from technologically obsolete facilities; (5) are expanding into the new area by adding a branch, affiliate, or subsidiary while maintaining employment levels in the old areas; or (6) are determined by EDA to be exempt from the requirement. 
Before the October 1995 changes, EDA's prohibition against relocating jobs applied for a period of 48 months from the date when EDA awarded the grant, and the nonrelocation requirement could be waived only with the written consent of EDA's Assistant Secretary. EDA's Acting Chief Counsel told us that several factors contributed to EDA's 1995 changes to the nonrelocation requirement. This official told us that EDA spent a great deal of time and effort monitoring projects funded by EDA but found very few cases where businesses relocated jobs after EDA had approved a grant. He said that as a result, EDA saw no need to continue monitoring projects after funding was approved, because such monitoring consumed valuable EDA resources that could be used more effectively elsewhere. In addition, EDA's Acting Chief Counsel told us that the exclusion provision enables EDA to better use Public Works and Development Facilities grants to achieve their purpose of creating jobs in distressed areas. He said that firms may decide to relocate simply because the area where they are located cannot accommodate planned expansion and growth. He said that in the past, such firms, which were going to relocate anyway, could not relocate to an EDA project site without penalty to the project grantee because of the prohibitions in effect under the old regulations. However, this official said that with the exclusion provision that EDA adopted in October 1995, firms may decide to locate in distressed areas, where jobs are needed, rather than move to an area that may not need the jobs as much.

Section 141 of JTPA, as amended, prohibits using any JTPA funds to encourage or induce a business to relocate if the relocation results in the loss of employment for any employee at the business' original location. This section also provides that if a business relocates and the relocation results in the loss of any employee's job at the business' original location, JTPA funds cannot be used by the relocating business for customized or skill training, on-the-job training, or company-specific assessments of job applicants or employees for the first 120 days after the business commences operation at its new location. A relocating business is one that moves any operation from a facility in one labor market to a new or expanding facility in another market. The Congress adopted these two prohibitions when it amended JTPA in 1992. Prior to the 1992 amendments, JTPA provided that funds could not be used to help relocate establishments from one area to another unless the Secretary determined that such relocations would not increase unemployment in the area where the company was originally located. The conference report on the 1992 amendments to JTPA does not explain Congress' rationale for these amendments. However, according to Labor's Employment and Training Administration, the 120-day provision was added because the language in the legislation before 1992 was broad and ambiguous and because it was difficult to determine whether there was an impact on local unemployment.
The Omnibus Budget Reconciliation Act of 1993 provides that the strategic plans prepared by EZs and ECs may not include any assistance to relocate businesses into an EZ or EC if (1) the establishment of a new branch, affiliate, or subsidiary will increase unemployment in the area of the business’ original location or (2) there is reason to believe that the new branch, affiliate, or subsidiary is being established with the intention of closing down the operations of the existing business entity at its original location or in any other area where the business is operating. The strategic plan, which is developed with input from community stakeholders such as residents, businesses, financial institutions, and local governments, outlines how an EZ or EC plans to achieve its goal of revitalizing an area. HUD and Agriculture have incorporated these relocation restrictions into their implementing regulations. Introduced in January 1997, the Housing Opportunity and Responsibility Act of 1997 (H.R. 2) proposes to reform the nation’s public housing programs. A section in this legislation would prohibit using CDBG funds for any activity that is intended or likely to facilitate the relocation or expansion of any industrial or commercial plant, facility, or operation from one area to another if the relocation or expansion will result in the loss of employment in the area from which the relocation or expansion occurs. On May 14, 1997, the House of Representatives passed H.R. 2. As of August 1, 1997, the legislation was pending in the Senate. In February 1997, the Prohibition of Incentives for Relocation Act (S. 300) was introduced in the Senate to specifically prohibit the use of CDBG funds for relocating jobs. The legislation proposes that no CDBG funds can be used for any activity that is intended or likely to facilitate the closing or substantial reduction of operations of a plant at one location and the relocation or expansion of the plant at another location. This Congress—the 105th—was the third consecutive Congress in which this legislation had been introduced. The legislation was first introduced in 1994 after a major corporation announced its plans to relocate 2,000 jobs from a city in Wisconsin to other locations, including two areas that had used community development funds to expand their operations. As of August 1, 1997, this legislation was pending before the Senate Committee on Banking, Housing and Urban Affairs. To ensure compliance with its nonrelocation requirement, EDA relies on assurances from applicants and certifications from businesses that the businesses’ relocation to project areas funded by Public Works and Development Facilities grants will not result in the transfer of jobs from other areas to the project area. Similarly, to document compliance with JTPA’s nonrelocation requirement, Labor’s regulations require that prior to training workers for jobs in businesses, the substate grantees and businesses complete preaward reviews to verify that the businesses are not relocating jobs from one labor market to another. HUD and Agriculture rely on their determinations that EZs’ and ECs’ strategic plans do not help relocate businesses to ensure compliance with the Departments’ nonrelocation requirement. EDA’s current regulations, which were adopted in October 1995, provide that applicants for Public Works and Development Facilities grants must notify EDA of any business that will benefit from the project funded with the EDA grant. 
The regulations also require that each business identified by the applicant submit a nonrelocation certification to EDA as part of the application package certifying that (1) the business does not intend to transfer one or more jobs (not persons) from other commuting areas to the one where the project is located and (2) the business has not located and will not locate to the project area before EDA's approval of the grant in order to avoid the restrictions of the nonrelocation certification. If a business has already relocated jobs from another commuting area to the commuting area where the project will be located or has plans to do so, it must provide EDA with a full explanation so that EDA can determine if the business qualifies for an exclusion from the nonrelocation requirement. Under EDA's regulations, EDA will determine compliance with the nonrelocation requirement prior to its grant award on the basis of information provided by the applicant during the project selection process. Labor relies primarily on the states to ensure compliance with JTPA's nonrelocation requirement. Labor's regulations require that as a prerequisite to providing a new or expanding business with JTPA's assistance for worker training, a standardized preaward review, developed by the state, must be completed to verify that the business is not relocating jobs from one labor market area to another. The review is to be completed and documented jointly by the substate grantee or other organization providing the job training assistance contracts and the business that is providing the on-the-job training or for which other customized training is being provided. To assist the states in carrying out their preaward reviews, Labor's regulations identify the minimum information that such reviews should cover, including the name under which the facility does business, the name and address of the facility in the other area that is being closed or from which business is being transferred, the nature of the products or the business being transferred, the date that the new or expanded facility will commence operation, and a statement from the employer about job losses at the old location. Labor's regulations require that the Secretary of Labor investigate any alleged violations of the relocation prohibition but do not require Labor's periodic monitoring of state activities. In addition, the regulations do not require that the states submit the preaward reviews to Labor. HUD and Agriculture have incorporated into their program regulations the provision in the Omnibus Budget Reconciliation Act against using EZs'/ECs' assistance to relocate businesses. As under the act, HUD's and Agriculture's regulations prohibit the EZs' or ECs' strategic plans from containing any language stating that assistance will be provided for relocating businesses from non-EZ or non-EC areas. In a November 1996 memorandum from HUD to the Atlanta EZ, HUD advised the EZ that after conferring with HHS, it was determined that the prohibition in the law does not prohibit an EZ, during the implementation of its plan, from using EZ/EC SSBG funds to finance activities that may assist a business relocating to the EZ. The memorandum went on to state that the section of the act dealing with the nonrelocation prohibition relates to a business relocation tactic included in a strategic plan submitted during the application phase of the program and that the section says nothing about actions that occur during implementation.
The memorandum also stated that the language in the act relating to business retention occurs in the section of the act dealing with the designation of an area as an EZ or EC and not in the portion of the act authorizing the use of EZ/EC SSBG funds. In our initial meeting with HUD officials to discuss this memorandum, the Deputy Director, Office of Economic Development, told us that the memorandum was prepared by his office following consultation with and guidance from HHS. In a subsequent meeting, HUD’s Deputy Assistant Secretary responsible for the program told us that the memorandum was being withdrawn. This official also stated that it has always been HUD’s position that EZ/EC SSBG funds should not be used to relocate jobs. This official told us that HUD plans to issue guidance in the very near future that will (1) clarify HUD’s position that EZ/EC SSBG funds should not be used to relocate jobs and (2) outline HUD’s intent to withhold funds if EZs and ECs do not comply with the policy. In commenting on this report, HHS disagreed with HUD’s portrayal of the role that HHS played in the development of the November 1996 memorandum to the Atlanta EZ. HHS stated that it did not provide HUD with guidance stating that the statutory language would have an effect only during the application process. Rather, HHS stated that HUD personnel conferred with HHS staff and asked them to agree with HUD’s interpretation of the statute. HHS stated that because HUD is the lead agency for the urban EZ/EC program, HHS staff deferred to HUD on this policy decision. Tax concessions, financial assistance, and other benefits may be used by states and communities to attract and keep businesses. The extent to which these incentives are paid by the federal government or by state and local governments is difficult to ascertain. Local economic development organizations may receive money from state programs that commingle state and federal dollars. Even if the ultimate source of the funding for business incentives is from state or local governments, federal expenditures may influence the level of incentives offered by state and local governments. If federal funds are used for an activity that the state or community would have undertaken anyway, money is freed up for states or communities to use for such activities as the provision of business relocation incentives. The use of state and local incentives expanded during the late 1970s and throughout the 1980s. However, the growth has slowed in the 1990s. The National Association of State Development Agencies stated that in 1994, over 500 different incentive programs were in use by states. A 1997 report by the Council of State Governments shows that over 40 states now offer tax and financial programs to create, retain, or lure jobs. But 32 states plan to hold the line or cut spending over the next 5 years. These states cited many reasons for not expanding their incentives, including a feeling that (1) current levels were sufficient, hence, little marginal impact could be expected from expanding the programs and (2) there was little payoff from bidding wars between states to lure businesses. Also, some states report a change in emphasis from attracting new firms to retaining existing ones. Many studies have examined the relationship between state and/or local business incentives and changes in economic activity. Most early studies found that incentives had little or no effect on an area’s economic development. 
For instance, studies done in 1979 and 1983 found that state business incentives had little influence on the stimulation of new business, measured either by the number of firms or by their size. Wage rates, energy costs, and the availability of skilled labor were all found to be more important influences on the creation and expansion of business firms. Later work has refined this conclusion. Incentives may produce an impact in some industries, such as manufacturing. A 1991 study found that manufacturing and capital-intensive industries are more affected by tax considerations than are other businesses. Also, state and local business policy may have stronger influences on where firms locate within a region. The selection of a region for an investment may be driven by economic criteria, such as the availability of labor and transportation and proximity to markets, while the selection of a particular site within a region may be influenced by the availability of incentives. A 1983 study, for instance, found that property taxes were an important disincentive to firms relocating within the Detroit metropolitan area. The magnitude of these impacts and the set of industries for which they might occur remain the subject of debate in academic publications. Surveys of businesses, such as the Fortune Market Research Survey, also find that relocation incentives are not the most important factors in plant location decisions. The Fortune Market Research Survey ranks financing inducements 15th in importance, far behind such factors as worker productivity, efficient transportation, and the state or local government's attitude toward businesses. Both academic researchers and government officials who administer economic development programs cite several factors that may limit the effectiveness of business incentives. The value of the incentives offered is often small in relation to the differences between locations in the costs incurred by firms, such as labor and transportation costs. Incentives may represent a zero-sum game, in which the value of one state's incentives is offset by the incentives offered by competing states. Finally, incentives offered to new businesses to locate in an area may give them a competitive edge over existing businesses, thus causing a shift in economic activity from established firms to new firms but causing little change in a region's overall economic activity. We provided Agriculture, Commerce, HHS, HUD, Labor, Transportation, and EPA with a draft of this report for review and comment. Agriculture, Transportation, and EPA informed us that they had no comments. Commerce, HHS, HUD, and Labor provided us with written comments in which they generally agreed with the report's observations. Commerce, however, took issue with a statement in our draft report that EDA's programs are among the federal programs that are frequently cited as being used in incentive packages. Commerce pointed out that EDA is unaware of any citations or complaints about its Public Works Program or any of its other programs being used for relocation purposes. The statement in our draft report was based on discussions with associations representing states and communities and was not meant to imply that particular programs had been used to relocate jobs. As we note in our report, incentive packages are used to attract new or expanding jobs to an area, which is often the purpose of EDA's assistance.
However, because of the concern and confusion that this statement has caused, we have deleted it from our final report. Commerce also noted that it has requested proposals for the development of a tool to evaluate state incentive programs and recommendations on the appropriate federal role in locational incentives. It expects the final report on this project in June 1998. HHS and HUD provided comments relevant to the section of the report that discusses EZs/ECs and the use of SSBG funds for the relocation of jobs. HHS took issue with HUD’s portrayal of the role that HHS played in the development of the November 1996 memorandum that HUD sent to the Atlanta EZ. In that memorandum, HUD advised Atlanta that the law does not prohibit an EZ, during the implementation stages of its plan, from using SSBG funds to relocate businesses to the EZ. HHS stated that the memorandum implies that HHS provided HUD with guidance about interpreting the statutory language to have an effect only during the application process. HHS stated that HUD personnel conferred with HHS staff and asked them to agree with HUD’s interpretation of the statute. According to HHS, because HUD is the lead agency for the urban EZ/EC program, HHS staff deferred to HUD on this policy decision. Because the exact role that each agency played in developing this policy is unclear, we have included language reflecting HHS’ position in the sections of the report that discuss the November 1996 memorandum. In its comments, HUD reiterated that it will soon issue guidance clarifying its policy on the use of SSBG funds for the relocation of jobs and included the draft guidance as an enclosure. Labor pointed out that while our report deals with title III of JTPA and the services that are available to eligible dislocated workers, section 141 of JTPA deals with all training programs under the act, including those involving disadvantaged youths and adults, migrant and seasonal farm workers, Native Americans, and older Americans. We agree and make this point in the report. The sections of our report that discuss the relocation prohibition under section 141 of JTPA state specifically that this prohibition applies to all funds provided under JTPA. Labor also commented that in addition to the relocation prohibition, there is a prohibition against using JTPA funds for economic development or employment-generating activities and that it is important to put this into the context of the training services that are available to help dislocated workers return to the workforce. We agree with Labor and have added language to the report to reflect these prohibitions. Commerce, HUD, and Labor also included attachments/enclosures with clarifying language and technical corrections for their respective programs, which we incorporated into the report where appropriate. The written comments from Commerce, HHS, HUD, and Labor and our responses appear in appendixes II through V, respectively. To determine the economic development activities that eight major federal programs fund for the benefit of states and communities, we reviewed the 1996 Catalog of Federal Domestic Assistance, program legislation and regulations, budget information, and annual and other program reports prepared by the agencies administering the programs and by others. 
We developed descriptive information for each program, including the program's purpose and objectives, the type(s) of financial assistance provided, the entities that are eligible to receive program assistance, the types of projects and activities that can be funded, and the amount of assistance provided in fiscal year 1996. To determine (1) which programs have legislative or regulatory restrictions on using program funds to relocate existing businesses and jobs and (2) for those programs with restrictions, the procedures that federal agencies have established to ensure that states and communities comply with such restrictions, we reviewed program legislation and regulations, congressional reports accompanying program legislation, agencies' operating procedures, and other documents relating to the relocation prohibitions. We discussed with agency officials the restrictions and procedures, how they have changed, and the reasons for any changes. We did not assess whether states and communities are complying with these restrictions or the adequacy of agencies' procedures in ensuring that program assistance is not used to relocate existing businesses and jobs. To obtain information on the nonfederal economic incentives available to states and communities to attract businesses and jobs, we (1) analyzed reports written by industry and government associations and (2) interviewed industry experts. We summarized the types of state and local incentives available and the role that incentives may play in a business' decision to relocate. We also discussed with industry experts the issue of federal funds freeing up state funds and the impact that state-provided incentives may have on a state's ability to provide other state services. We conducted our work from January through July 1997 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this report. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, HHS, HUD, Labor, and Transportation and the Administrator, EPA; the Director, Office of Management and Budget; and other interested parties. Copies will be made available to others upon request. If you have any questions, please call me at (202) 512-7632. Major contributors to this report are listed in appendix VI. This appendix provides information regarding the purpose and objectives, type of assistance provided, eligible activities, flow of funds from federal agencies to recipients, recipients' role in project selection, amount of assistance provided during fiscal year 1996, and types of projects funded for each of the following programs: The Department of Commerce's Economic Development Administration's (EDA) Public Works and Development Facilities Program. The Department of Housing and Urban Development's (HUD) Community Development Block Grant (CDBG) Program. The Empowerment Zone and Enterprise Community (EZ/EC) Program, administered by the Department of Agriculture and HUD and funded primarily by HHS. The Department of Labor's Employment and Training Assistance for Dislocated Workers Program. The Department of Health and Human Services' (HHS) Community Services Block Grant (CSBG) Program. The Environmental Protection Agency's (EPA) Clean Water State Revolving Fund Program. Agriculture's Water and Waste Disposal Program.
The Department of Transportation’s Surface Transportation Program. The Public Works and Development Facilities Program, authorized by title I of the Public Works and Economic Development Act of 1965, as amended, is a key federal program for stimulating economic development in distressed communities. Administered by the Department of Commerce’s EDA, the program provides distressed communities with grants to help attract new industry, encourage business expansion, and generate long-term, private-sector jobs. Public Works and Development Facilities grants can be used to finance a variety of projects including water and sewer systems serving primarily industrial and commercial users, access roads and other infrastructure improvements for industrial parks, port facilities, railroad sidings and spurs, tourism facilities, and vocational schools. Grant funds can be used to acquire and develop land for these facilities as well as to construct, rehabilitate, alter, or expand them. Projects funded with Public Works and Development Facilities grants must fulfill a pressing need of the area and must (1) improve the opportunities to successfully establish or expand commercial plants or facilities, (2) assist in creating additional long-term employment opportunities, and (3) benefit the long-term unemployed and underemployed and members of low-income families. Also, projects must be located within an EDA-designated redevelopment area or economic development center and must be consistent with the Overall Economic Development Program (OEDP) approved by EDA for the area and have an adequate local share of matching funds. States, cities, counties, and other political subdivisions, Indian tribes, commonwealths and territories of the United States, and private and public nonprofit organizations representing redevelopment areas are eligible for Public Works and Development Facilities grants. Corporations and associations organized for profit are not eligible for grant assistance. The first step in obtaining an EDA Public Works and Development Facilities grant usually is for the applicant and community leaders to meet with an Economic Development Representative or other appropriate EDA official to explore the applicability of the proposed project for EDA funding. If the proposed project appears to be feasible, the applicant will prepare a brief project proposal, which is submitted to an EDA regional office for review. If the regional office finds that the proposed project qualifies, it will notify the applicant to submit a formal grant application to EDA. The application, among other things, describes the project in detail, discusses how the project will affect the economic development of the community, and provides information on the project’s costs, its projected payroll, and the amount of private capital to be invested. Submitting a formal application to EDA does not guarantee that the project will be funded. Public Works and Development Facilities grants normally cover up to 50 percent of the estimated cost of a project, and the remainder is provided by local sources. Projects located in highly depressed areas may receive a supplementary grant from EDA that brings the federal share of the project up to 80 percent, while Indian tribes are eligible for up to 100-percent funding from EDA. Funds appropriated for Public Works and Development Facilities grants in fiscal year 1995 totaled $195 million and totaled $165.2 million in fiscal 1996 and fiscal 1997. 
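As a rough arithmetic illustration of the federal cost-share limits described above (a sketch only, not EDA's actual grant computation):

def maximum_eda_share(estimated_project_cost: float,
                      highly_depressed_area: bool = False,
                      indian_tribe: bool = False) -> float:
    """Grants normally cover up to 50 percent of a project's estimated cost; a
    supplementary grant can raise the federal share to 80 percent in highly
    depressed areas, and Indian tribes are eligible for up to 100 percent."""
    if indian_tribe:
        federal_share = 1.00
    elif highly_depressed_area:
        federal_share = 0.80
    else:
        federal_share = 0.50
    return federal_share * estimated_project_cost

# Hypothetical $2 million project: at most $1 million ordinarily, or $1.6 million
# in a highly depressed area, with the remainder provided by local sources.
print(maximum_eda_share(2_000_000))                              # 1000000.0
print(maximum_eda_share(2_000_000, highly_depressed_area=True))  # 1600000.0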
EDA approved 182 Public Works and Development Facilities grants totaling $194.5 million in fiscal year 1995 and 158 grants totaling $164.9 million in fiscal year 1996. Table I.1 shows the types of projects funded with grants made during fiscal year 1996. EDA, as of mid-April 1997, had awarded 54 Public Works grants totaling $48.6 million for fiscal year 1997, which represented about 29 percent of the $165.2 million appropriated for Public Works and Development Facilities grants for fiscal 1997. Section 202 of the Public Works and Economic Development Act of 1965, as amended, prohibits using EDA's financial assistance to assist businesses in relocating from one area to another. Although this section of the act pertains specifically to EDA's business development assistance program, EDA has applied the nonrelocation requirement to all of its financial assistance programs, including Public Works and Development Facilities grants. EDA's current regulations, adopted in October 1995, prohibit using Public Works and Development Facilities grants (and other EDA financial assistance) to assist employers who transfer one or more jobs from one commuting area to another. EDA's regulations provide that the nonrelocation requirement shall not apply to businesses that relocated to the area prior to the applicant's request for EDA's assistance; have moved or will move into an area primarily for reasons that have no connection to EDA's assistance; will expand employment in the area where the project is located substantially beyond employment in the area where the business had been originally located; are relocating from technologically obsolete facilities to be competitive; are expanding into the new area by adding a branch, affiliate, or subsidiary while maintaining employment levels in the old area or areas; or are determined by EDA to be exempt. Until EDA adopted its current regulations in October 1995, EDA's prohibition against using Public Works and Development Facilities grants or other financial assistance to relocate jobs applied for a period of 48 months from the date that EDA approved the grant or other assistance. In addition, the nonrelocation requirement could be waived only with the written consent of EDA's Assistant Secretary, and EDA was required to terminate the financial assistance of any recipient found violating the nonrelocation requirement. Furthermore, any recipient violating the requirement had to repay any financial assistance received from the date of the violation. According to EDA's Director, Public Works Division, EDA adopted the 48-month period in the mid-1980s; before then, EDA's nonrelocation requirement applied for a period of 2 years prior to EDA's approval of financial assistance for a project, and no time limit existed afterward. The Federal Register notice announcing EDA's revised regulations did not explain EDA's rationale for the changes to its nonrelocation requirement. However, EDA's Acting Chief Counsel told us that several factors contributed to the changes. He said that EDA spent a great deal of time and effort monitoring projects funded with Public Works and Development Facilities grants and other EDA assistance when the nonrelocation requirement applied for a period of 48 months after EDA approved financial assistance for a project. Yet, according to this official, EDA found very few cases where a business violated the nonrelocation requirement and relocated jobs to the area where the project was located after EDA approved financial assistance.
He said that, as a result, EDA saw no need to keep in place a requirement obligating it to monitor projects after financial assistance had been approved, since such monitoring consumed dwindling agency resources that EDA could use more effectively elsewhere. The Acting Chief Counsel also told us that the exclusions from the nonrelocation requirement allowed by the 1995 revision to EDA's regulations will help Public Works and Development Facilities grants to achieve their goal of creating jobs in distressed areas. He said that, often, firms that wish to expand their operations must relocate to another area simply because the area where they are located cannot accommodate expansion and growth. He said that such firms could not relocate to an EDA-funded facility without a penalty to the project grantee under the old regulations because of the relocation prohibition. However, he said that the exclusion provision of EDA's new regulations will enable firms to locate in distressed areas where jobs are needed rather than have the firms locate in an area that does not need the jobs as much. EDA's current regulations require that applicants for Public Works and Development Facilities grants notify EDA of any employer that will benefit from an EDA grant. The regulations require that each business identified by the applicant submit a nonrelocation certification to EDA as part of the application package certifying that (1) the business does not intend to transfer one or more jobs (not persons) from other commuting areas to the one where the project is located and (2) the business has not located and will not locate to the project area prior to EDA's approval of the grant in order to avoid the restrictions of the nonrelocation certification. If a business already has relocated jobs from another commuting area or has plans to do so, it must provide EDA with a full explanation so that EDA can determine if the business qualifies for an exclusion from the nonrelocation requirement. EDA's regulations provide that EDA will determine compliance with the nonrelocation requirement prior to its award of the grant on the basis of information provided by the applicant during the project selection process. EDA's current regulations do not require applicants and businesses to comply with EDA's nonrelocation requirement after EDA approves a grant. The primary objective of the CDBG Program is to develop viable urban communities by providing decent housing and a suitable living environment and by expanding economic opportunities, principally for persons of low and moderate income. Administered by HUD's Office of Community Planning and Development, the CDBG Program provides communities with federal grants to assist them in funding local community development needs. The Entitlement Communities Program, which directly provides large cities and urban counties with grants, and the State and Small Cities Programs, which provide states with grants for distribution to smaller, nonentitled communities, are the major components of the CDBG Program. The CDBG Program also provides grants for certain populations, such as Historically Black Colleges and Universities, and for assisting community development efforts in five designated insular areas—American Samoa, Guam, the Northern Mariana Islands, Palau, and the Virgin Islands.
Activities funded by the CDBG Program must meet at least one of three national objectives: benefit low- and moderate-income persons; prevent or eliminate slums or blights; or meet urgent community development needs. The Entitlement Communities Program is the largest component of the CDBG Program, receiving about 70 percent of the funds appropriated each year for the CDBG Program. Entitled communities are metropolitan cities and urban counties that are (1) local municipal governments with 50,000 or more residents, (2) other jurisdictions designated as central cities of Metropolitan Statistical Areas (MSAs), or (3) counties with populations of over 200,000 in MSAs, excluding the populations of entitled cities within the county boundaries. HUD allocates entitlement grants to the communities on the basis of two statutory formulas that consider population, poverty, the extent of overcrowding, and the age of housing. According to the Director, HUD’s Office of Block Grant Assistance, 834 cities and 141 counties qualified as entitled communities in fiscal year 1997. Entitled communities develop their own programs and funding priorities and can use CDBG grants for various activities, including to acquire, construct, reconstruct, or rehabilitate public facilities, including hospitals, nursing homes, halfway houses, battered spouse shelters, and water and sewer facilities; preserve historical sites and remove architectural barriers in public facilities that restrict the movement of the handicapped and the elderly; establish new or expand existing public services, including those involving employment, crime prevention, child care, health, drug abuse, education, and welfare; rehabilitate housing and other publicly owned residential buildings; provide direct assistance to expand home ownership opportunities for low- and moderate-income homebuyers, including subsidizing interest rates and acquiring guarantees for mortgage financing from private lenders; carry out special economic development projects, including acquiring, constructing, and reconstructing commercial or industrial buildings and facilities, and providing loans, grants, loan guarantees, or other types of assistance to private for-profit businesses for activities to carry out an economic development project; provide relocation assistance for individuals, families, and businesses displaced by CDBG activities; and provide loans, grants, or other assistance to community-based development organizations to carry out community economic development, neighborhood revitalization, and energy conservation projects. In addition, a grantee may use up to 20 percent of a CDBG grant (plus program income) for planning and general administrative activities. Entitled communities may contract with local agencies or nonprofit organizations to carry out all or part of their programs. Because low- and moderate-income persons are the principal beneficiaries of CDBG funds, at least 70 percent of CDBG expenditures over a 1-, 2-, or 3-year period must be for activities that principally benefit such persons. CDBG funds may not be used for certain activities, including constructing or rehabilitating facilities used for religious activities, financing facilities or equipment used for political purposes, or constructing or rehabilitating facilities used for the general conduct of government business. 
In addition, purchasing fixtures, equipment, furnishings, or other personal property; providing subsistence payments to individuals for items such as food, housing, clothing, and utilities; or constructing new, permanent residential housing are also generally ineligible activities. Although CDBG grants are entitlements, entitled communities must submit a Consolidated Plan, including an annual action plan, to HUD in order to receive their grants. The Consolidated Plan, developed with the input of citizens and community groups, serves as an application for CDBG and other HUD formula grants and lays out the local priorities of the community, as well as the 3- to 5-year strategy, that the community will follow in implementing the various HUD programs. The annual action plan provides the basis for assessing the performance and results of the CDBG and other HUD-funded programs. The State and Small Cities Programs receive approximately 30 percent of CDBG appropriations to support community development efforts in communities that do not qualify for assistance under the Entitlement Program. Communities eligible for CDBG funds under the State and Small Cities Programs are (1) municipalities with fewer than 50,000 residents, except designated central cities of MSAs, and (2) counties that are not considered urban counties (generally those with populations of 200,000 or less, excluding any entitled cities contained within the counties). HUD allocates grants to the states using a similar statutory formula as used to allocate entitlement grants. The states, like entitled communities, must submit to HUD Consolidated Plans that describe the states’ community development objectives and method of distributing funding among eligible communities. Also, states must annually submit to HUD Performance and Evaluation Reports that include information on communities receiving CDBG funds, the amount of their grants, the types and purpose of activities being funded, and the national objectives being met by each activity. The states develop funding priorities and criteria for selecting projects and awarding CDBG grants exclusively to units of general local government that carry out community development activities. The local governments are responsible for considering local needs, preparing and submitting grant applications to the state, carrying out funded activities, and complying with federal and state requirements. The states are responsible for ensuring that recipient communities comply with applicable state and federal laws and requirements. As with the Entitlement Program, at least 70 percent of the CDBG grant funds spent by communities under the State and Small Cities Programs must be for activities that primarily benefit low- and moderate-income people. Nonentitled communities can fund the same types of activities as the entitled communities. According to HUD’s Office of Block Grant Assistance, approximately 3,000 small cities and counties in nonentitlement areas receive grants each year. According to the Director, HUD Office of Block Grant Assistance, CDBG Program appropriations totaled $4.6 billion in fiscal years 1995, 1996, and 1997. Of this amount, the Entitlement Communities Program and the State and Small Cities Programs received $4.49 billion in fiscal year 1995, $4.37 billion in fiscal 1996, and $4.31 billion in fiscal 1997. According to HUD’s Office of Block Grant Assistance, housing rehabilitation is the single most important activity funded by the Entitlement Communities Program. 
Data provided by HUD's Office of Block Grant Assistance show that for 1993—the most recent year for which complete community development data are available—housing rehabilitation accounted for about 31 percent of the funds spent by entitlement communities. Overall, housing rehabilitation and other housing-related activities such as enforcement of housing codes and new housing construction accounted for about 36 percent of the funds spent by entitlement communities. For the State Program, the HUD data show that water and sewer activities accounted for almost 29 percent of the total funds spent for 1993, while economic development activities accounted for about 20 percent and housing rehabilitation accounted for about 16 percent of the expenditures. HUD's fiscal year 1996 annual report on the CDBG Program shows that in 1993, the CDBG Program provided funding for thousands of public improvement and service projects in entitled and nonentitled communities, including 3,000 projects that improved water, sewer, flood control, and drainage systems; 3,700 street improvement projects that helped communities repair and maintain roads, bridges, and sidewalks; and over 8,200 projects to construct and rehabilitate public facilities, including child care centers, facilities for abused and neglected children, youth and senior centers, and other community buildings. According to HUD's Office of Block Grant Assistance, about 115,000 jobs were created in 1993 through the CDBG Program. While grantees are not prohibited from using CDBG Program funds to relocate jobs from one area to another, several efforts are under way in the Congress to impose a nonrelocation requirement on the use of CDBG funds. Introduced in January 1997, the Housing Opportunity and Responsibility Act of 1997 (H.R. 2) proposes to reform the nation's public housing programs. A section in this legislation proposes to amend the Housing and Community Development Act of 1974 to prohibit the use of CDBG funds for any activity that is intended or likely to facilitate the relocation or expansion of any industrial or commercial plant, facility, or operation from one area to another if the relocation or expansion will result in the loss of employment in the area from which the relocation or expansion occurs. The House of Representatives passed H.R. 2 on May 14, 1997, and forwarded it to the Senate. As of August 1, 1997, H.R. 2 was pending before the Senate Committee on Banking, Housing and Urban Affairs. In February 1997, the Prohibition of Incentives for Relocation Act (S. 300) was introduced specifically to prohibit the use of CDBG funds to relocate businesses. The legislation proposes that no CDBG funds can be used for any activity that is intended or likely to facilitate the closing or the substantial reduction of operations of an industrial or commercial plant at one location and the relocation or the expansion of the plant in another area. The 105th Congress was the third consecutive Congress in which this legislation had been introduced. The first time that this legislation was introduced was in 1994 after a major corporation announced that it planned to relocate 2,000 jobs from a Midwest state to other locations, including two areas that had used community development funds to expand their operations. In commenting on the 1996 legislation, Senator Kohl from Wisconsin noted that the need to prohibit using federal funds to relocate jobs was no less significant than it had been in 1994.
He referred to statements made by a Michigan state official that Michigan would aggressively pursue Wisconsin companies to relocate to Michigan. As of August 1, 1997, this legislation was also pending before the Senate Committee on Banking, Housing and Urban Affairs. The EZ/EC Program targets federal grants to distressed urban and rural communities for social services and community redevelopment and provides tax and regulatory relief for attracting or retaining businesses in these communities. The Omnibus Budget Reconciliation Act of 1993, which established the EZ/EC Program, authorized the designation of 104 communities as either EZs or ECs. Federal funding for the EZs and ECs is provided primarily through the Social Services Block Grant (SSBG) Program, which is administered by HHS. In December 1994, the Secretaries of HUD and Agriculture designated 104 EZs and ECs—6 urban EZs, 3 rural EZs, 65 urban ECs, and 30 rural ECs. The Omnibus Budget Reconciliation Act of 1993 established the EZ/EC Program’s eligibility criteria, designation procedures, and benefits. The act specified that an area could not be selected as an EZ or EC unless it (1) met specific criteria for characteristics, such as geographic size and poverty rate, and (2) prepared a strategic plan for implementing the program. The strategic plan, developed in conjunction with residents, financial institutions, and other stakeholders in the community, outlines an EZ’s or EC’s vision for revitalizing its distressed areas and the activities and projects planned to accomplish this task. The act also authorized the Secretaries of HUD and Agriculture to designate the EZs and ECs in urban and rural areas, respectively; set the length of the designation at 10 years; required that nominations be made jointly by the local and state governments; and authorized the Secretaries to prescribe any regulations needed to carry out the program. The 1993 act also amended title XX of the Social Security Act to authorize the special use of SSBG funds for the EZ/EC Program. Historically, SSBG funds could be used only for social service activities, such as programs to assist and feed children. The use of SSBG funds was expanded to (1) cover a range of economic and social development activities, including such things as constructing child care facilities, initiating job training programs, beginning 911 emergency response services, improving public facilities, and providing drug and alcohol prevention and treatment programs, or (2) be used in accordance with the strategic plan developed by the EZ or EC. As with other SSBG funds, HHS grants the funds for the EZ/EC Program to the states, which serve as fiscal intermediaries for the grants. HHS’ regulations covering block grants provide states with maximum fiscal and administrative discretion. HHS encourages the states to carry out their EZ/EC funding responsibilities with as few restrictions as possible under the law. After the state grants the funds to the EZ or EC, it can draw down the funds through the designated state agency for specific projects over the 10-year life of the program. In 1994 and 1995, HHS allocated $1 billion in SSBG funds to the 104 EZs and ECs for use over the 10-year life of the program. Each urban EZ was allocated $100 million and each rural EZ was allocated $40 million in EZ/EC SSBG funds. In addition, a new category of tax-exempt financing—using state and local bonds—was created to assist new businesses. 
Furthermore, businesses located in the EZ would be eligible for (1) tax credits on wages paid to employees who live in the EZ and (2) increased deductions for depreciation. Each urban and rural EC was allocated just under $3 million of the EZ/EC SSBG funds and qualified only for the tax-exempt bonds. As of June 30, 1997, of the $1 billion in total SSBG funds allocated by HHS to the EZs and ECs, $119.9 million had been drawn down to fund specific projects. According to HUD, 5 of the 6 urban EZs and 62 of the 65 urban ECs have made progress in implementing the EZ/EC Program. The communities that had reported little progress in implementing the program were warned that they were at risk of decertification, which would terminate their EZ/EC status. Some of the projects that have been funded include (1) the creation of a partnership with a local college in the Chicago EZ to prepare students for the General Educational Development tests, (2) the starting of a school-based program designed to reduce alcohol- and drug-related violence in the Detroit EZ, and (3) the buying of sites for a supermarket and retail stores in hopes of creating jobs for residents in the Philadelphia EZ. According to Agriculture, rural communities' implementation of the EZ/EC Program has varied. All 33 rural EZs and ECs had established the basic organizational structures and procedures necessary to implement their strategic plans. In terms of implementing the specific projects contained in these plans, some communities had made considerable progress, and some had made very little. Some of the rural projects that the rural EZs/ECs plan to fund include (1) establishing family service centers in the Central Savannah River Area EC in Georgia to provide youth with recreation and leadership classes and adult literacy classes, (2) improving the downtown area in the City of Watsonville EC in California by refurbishing retail businesses' facades, and (3) building and equipping four rural fire stations within the Kentucky Highlands EZ. As stated above, HHS regulations provide states with maximum fiscal and administrative discretion. While fiscal responsibility for the program lies with HHS, HUD and Agriculture are assigned programmatic responsibilities for the communities within their jurisdiction. As of June 1997, both HUD and Agriculture had completed their initial reviews of their respective EZs/ECs to evaluate each area's progress toward achieving the goals that it set out in its strategic plan. The Omnibus Budget Reconciliation Act of 1993 states that the EZ/EC area's strategic plan may not include any assistance to relocate businesses into an EZ/EC area unless (1) the establishment of a new branch, affiliate, or subsidiary will not result in a decrease in employment in the area of original location or in any other area where the existing business entity conducts business operations and (2) there is no reason to believe that the new branch, affiliate, or subsidiary is being established with the intention of closing down the operations of the existing business entity in the area of its original location or in any other area where the existing business entity conducts business operations. HUD and Agriculture have both incorporated this restriction against assisting the relocation of businesses into their implementing regulations.
Like the act, HUD’s and Agriculture’s regulations prohibit the EZ’s or EC’s strategic plan from containing any provisions for providing assistance to relocate businesses if jobs are lost or expected to be lost because of the relocation. In a November 1996 memorandum from HUD to the Atlanta EZ, HUD advised the EZ that after conferring with HHS, it was determined that the prohibition in the law does not prohibit an EZ, in the implementation stages of its plan, from using SSBG funds to finance activities that may assist a business relocating to the EZ. The memorandum went on to say that the section of the act dealing with the nonrelocation prohibition relates to a business relocation tactic included in a strategic plan submitted during the application phase of the program and that the section says nothing about actions that occur during implementation. The memorandum also stated that the language in the act relating to business retention occurs in the section dealing with designation—not in the portion authorizing the use of SSBG funds. In our initial meeting with HUD officials to discuss this memorandum, the Deputy Director, Office of Economic Development, told us that the memorandum was prepared by his office following consultation with and guidance from HHS. In a subsequent meeting, HUD’s Deputy Assistant Secretary responsible for the EZ/EC Program told us that the memorandum was being withdrawn. This official also stated that it has always been HUD’s position that EZ/EC SSBG funds should not be used to relocate jobs. This official told us that HUD plans to issue guidance in the very near future that will (1) clarify HUD’s position that EZ/EC SSBG funds should not be used to relocate jobs and (2) outline HUD’s intent to withhold funds if EZs and ECs do not comply with the policy. In commenting on this report, HHS disagreed with HUD’s version of the role that HHS played in the development of the November 1996 memorandum to the Atlanta EZ. HHS stated that it did not provide HUD with guidance about HUD’s policy of interpreting the statutory language to have an effect only during the application process. Rather, HHS stated that HUD personnel conferred with HHS staff and asked them to agree with HUD’s interpretation of the statute. HHS stated that because HUD is the lead agency for the urban EZ/EC program, HHS staff deferred to HUD on this policy decision. The Job Training Partnership Act (JTPA), as amended, provides employment and training services for economically disadvantaged adults and youths, dislocated workers, and others who face significant employment barriers in an attempt to move such individuals to self-sustaining employment. Title III of the act, administered by the Department of Labor’s Office of Worker Retraining and Adjustment Programs, provides states with grants to support state and local training and employment assistance and other services to eligible dislocated workers. Dislocated workers are (1) those who have lost their job because of mass layoffs or plant closings, long-term unemployed persons, and self-employed workers who have lost their job because of general economic conditions or natural disasters as well as (2) those individuals who have been laid off or notified of a layoff and who are unlikely to return to their previous occupation or industry. Title III also provides funds for federal activities and aid to specific groups of workers dislocated because of mandates in the Clean Air Act and reductions in defense spending. 
Under title III-A of the act, funds are allotted to the states in the form of grants that are to be used to directly help eligible dislocated workers return to work. Labor allots 80 percent of the title III funds provided for Employment and Training Assistance for Dislocated Workers to the states using the following distribution formula: one-third of the amount is based on the relative number of unemployed individuals who reside in each state as compared with the total number of unemployed individuals in all states; one-third is based on the relative excess number (over 4.5 percent) of unemployed individuals who reside in each state as compared with the total excess number of unemployed in all the states; and one-third is based on the relative number of individuals unemployed for 15 or more weeks and who reside in each state as compared with the total number of such individuals in all the states. In order to receive program funds, states must submit a detailed biennial plan to the Department of Labor describing the programs and activities that will be assisted. The plan must also ensure compliance with a variety of constraints and requirements specified in the law. The states allocate, by formula, JTPA title III funds to designated substate areas, which are determined by the state’s governor. Each substate area must have a designated substate grantee to administer the program and receive funds. Examples of eligible substate grantees include private industry councils, private nonprofit organizations, local government offices, or community colleges. Each substate area is also required to submit a plan similar to the state plan for review at the state level. States may reserve up to 40 percent of the JTPA title III funds they receive for Employment and Training Assistance for Dislocated Workers for state activities, such as state administration and technical assistance of the program, statewide projects, rapid response activities, coordination with the unemployment compensation system, and basic readjustment and retraining services. At least 50 percent of the funds received by the states must be awarded immediately to substate areas. The formula used to allocate the funds is determined by the governor of each state and should be based on data on (1) insured unemployment, (2) unemployment concentrations, (3) plant closings and mass layoffs, (4) declining industries, (5) farmer-rancher economic hardships, and (6) long-term unemployment. States may reserve an additional 10 percent of the funds but must distribute these funds on the basis of the needs of substate grantees within 9 months. States and substate grantees can tailor the services they provide for dislocated workers to meet participants’ needs. The substate grantees are to use title III grants to directly aid dislocated workers by providing basic readjustment services, retraining services, support services, needs-related payments, relocation assistance, and rapid-response assistance. Basic readjustment services include the development of individual readjustment plans for program participants, job or career counseling, job placement assistance, and labor market information. Retraining services include classroom, occupational skills, and on-the-job training; out of area job search; and basic skills and remedial education. Supportive services include child care, commuting assistance, and financial and personal counseling. 
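A simplified sketch of the title III-A allotment formula described above appears below. The state figures are hypothetical, and the computation ignores any additional statutory adjustments; it is intended only to illustrate how the three equal parts of the formula are combined.

def allot_title_iii_a_funds(total_title_iii_funds: float, states: dict) -> dict:
    """Allot 80 percent of title III funds among states in three equal parts:
    (1) each state's share of all unemployed individuals, (2) its share of
    'excess' unemployment above a 4.5 percent rate, and (3) its share of
    individuals unemployed for 15 or more weeks."""
    formula_funds = 0.80 * total_title_iii_funds
    one_third = formula_funds / 3

    total_unemployed = sum(s["unemployed"] for s in states.values())
    excess_unemployed = {name: max(0.0, s["unemployed"] - 0.045 * s["labor_force"])
                         for name, s in states.items()}
    total_excess = sum(excess_unemployed.values())
    total_long_term = sum(s["long_term_unemployed"] for s in states.values())

    allotments = {}
    for name, s in states.items():
        allotments[name] = (
            one_third * s["unemployed"] / total_unemployed
            + one_third * (excess_unemployed[name] / total_excess if total_excess else 0.0)
            + one_third * s["long_term_unemployed"] / total_long_term
        )
    return allotments

# Hypothetical two-state example; the figures are illustrative only.
states = {
    "State A": {"unemployed": 200_000, "labor_force": 3_000_000, "long_term_unemployed": 50_000},
    "State B": {"unemployed": 100_000, "labor_force": 2_500_000, "long_term_unemployed": 40_000},
}
print(allot_title_iii_a_funds(1_000_000_000, states))

In this illustration, State B has no "excess" unemployment because its unemployment rate (4 percent) falls below the 4.5 percent threshold, so it receives nothing from the second third of the formula.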
Needs-related payments are funds provided for an eligible dislocated worker who is unemployed and does not qualify or has ceased to qualify for unemployment compensation and who requires such assistance to participate in a job training program. Relocation assistance is the cost of relocating an eligible dislocated worker and family to another location when there are no job opportunities in the worker's occupation in the area of residence but where the participant has accepted a job with a reasonable expectation that it will be permanent. Rapid-response assistance includes establishing on-site contact with an employer and employee representatives within 48 hours after becoming aware of a permanent closure or substantial layoff, providing financial and technical advice, and disseminating information throughout the state on the availability of services and activities carried out by the dislocated worker unit or office. Under title III-B of the act, the remaining 20 percent of Employment and Training Assistance for Dislocated Workers funds provided under title III are retained by Labor in a National Reserve Account. These funds are used to provide assistance for territories; fund demonstration projects, multistate projects, and industrywide projects; and provide assistance for workers dislocated from their jobs as a result of natural disasters. In addition, National Reserve Account funds may be granted to states or other eligible entities to supplement formula grants provided for states when state grants are not sufficient to provide services for dislocated workers affected by mass layoffs, including those resulting from federal actions, such as defense downsizing or compliance with the Clean Air Act. Funding for title III programs totaled approximately $1.2 billion for program year 1995 ($983 million for state grants and $246 million for the National Reserve Account). Funding decreased in program year 1996 to approximately $1.09 billion ($880 million for state grants and $214 million for the National Reserve Account). In program year 1997, funding increased by about 18 percent to approximately $1.29 billion ($1.03 billion for state grants and $252 million for the National Reserve Account). Funds allotted to the states in program year 1997 ranged from a low of about $815,000 for South Dakota to a high of almost $227 million for California. In 1992, section 141 of JTPA was amended to prohibit the use of any funds provided under JTPA to encourage or induce the relocation of a business if the relocation results in the loss of employment for any employee at the company's original location. Prior to the 1992 amendments, the act stated that no funds may be used to assist in relocating establishments from one area to another unless the Secretary determines that such relocation will not increase unemployment in the area of the company's original location or in any other area. The amendments also prohibit providing any JTPA assistance to a relocating business for customized or skill training, for on-the-job training, or for a company-specific assessment of job applicants or employees, if the relocation results in a loss of employment at the original site, until 120 days after operations begin at the new location. JTPA was further amended in 1992 to require the Secretary to investigate any alleged violations of the relocation prohibitions.
If the Secretary determines that a violation has occurred, the state or substate area must repay twice the amount expended in violation of the prohibition. In addition to prohibiting the use of JTPA funds to encourage or induce businesses to relocate, section 141 of JTPA also prohibits using JTPA funds for economic development and employment-generating activities. The conference report accompanying the 1992 amendments did not explain why the act was amended to allow a relocating business to receive JTPA assistance 120 days after operations commence at the new location. With regard to the relocation prohibition, the conference report states only that (1) the House bill amends the current law to prohibit the use of funds for the relocation of any business establishment and (2) the conference agreement requires the Secretary of Labor to investigate allegations that JTPA funds have been improperly used and to determine whether a violation occurred. According to Labor's Employment and Training Administration, the 120-day provision was added because the language in the legislation before 1992 was broad and ambiguous, making it difficult to determine whether there was an impact on local unemployment. The states are primarily responsible for overseeing the use of title III funds, and Labor's regulations require the states to assure that they will comply with all statutory and regulatory requirements of the act. Furthermore, the regulations require that as a prerequisite to providing JTPA assistance to a new or expanding business for worker training, the substate area and the establishment must jointly document that employment is not being relocated from another area. The substate area and the establishment do this by completing a standardized preaward review, which is developed by the state. To assist the states in carrying out their preaward reviews, Labor's regulations identify the minimum information that such reviews should cover, including the name under which the facility does business, the name and address of the facility in the other area that is being closed or from which business is being transferred, the nature of the products or the business being transferred, the date that the new or expanded facility will commence operation, and a statement from the employer about job losses at the old location. The states are not required to submit the preaward reviews to Labor, and a Labor official noted that such preaward reviews may be no more than the establishment certifying to the state that no jobs have been relocated. Furthermore, according to Labor's Employment and Training Administration, even if a business relocates and displaces workers at the original location, assistance to train workers with JTPA funds can be provided 120 days after a business begins operations at the new location. The CSBG Program, established by the Omnibus Budget Reconciliation Act of 1981, replaced the following three programs administered by the Community Services Administration under the Economic Opportunity Act of 1964: Community Action/Local Initiatives, Senior Opportunities and Services, and Community Food and Nutrition. The CSBG Program, administered by HHS' Office of Community Services (OCS), is intended to alleviate the causes of poverty by helping needy individuals obtain adequate jobs, education, and housing. Under this program, states receive grants from HHS and are required to pass through most of the funds to designated local entities, commonly known as community action agencies.
The community action agencies provide services for low-income individuals and families. According to OCS officials, there are about 980 community action agencies nationwide. In order to receive CSBG funds, states are required to submit an annual plan and application to HHS. The plan must describe the manner in which the state will ensure compliance with the CSBG Act and the proposed use and distribution of the block grant funds. The plan should also include the state’s goals and objectives, information on the specific types of activities it will support, and the criteria and method used for the distribution of funds. In addition to this plan, the state’s application must include a prior-year report that describes how the state met its goals and objectives and information on the types of projects supported with the prior-year CSBG funds. Furthermore, states must certify that CSBG funds will be used to (1) provide a range of services and activities having a measurable and potentially major impact on the causes of poverty; (2) provide activities designed to assist low-income participants; (3) provide supplies, services, and nutritious food on an emergency basis to counteract the conditions of malnutrition among the poor; and (4) coordinate and establish linkages between governmental and other social services programs to ensure the effective delivery of such services to low-income individuals. States are required to pass through at least 90 percent of their block grant funds to locally based nonprofit community action agencies and may use the remaining funds to, among other things, make discretionary grants to nonprofit organizations for antipoverty projects and to cover administrative costs at the state level. Prior to providing funds for the community action agencies, states must obtain a community action plan from the agencies that includes a community needs assessment and descriptions of (1) the service delivery system, (2) how funding will be coordinated with other public and private resources, and (3) outcome measures to be used to monitor success in promoting self-sufficiency. The community action agencies and other eligible organizations may use the funds for employment, education, housing, health, and self-sufficiency activities. For example, community action agencies may provide job counseling, child development classes, community garden projects, and alcohol and drug abuse counseling. Community action agencies can also use CSBG funds for economic development activities. But in a meeting with the Director, OCS, and other OCS officials, we were told that pressure from within the communities to provide social services would probably prevent this. The Secretary of HHS may reserve between 0.5 and 1 percent of the amount appropriated for the CSBG Program for training, technical assistance, planning, evaluation, and data collection activities. Such activities may be carried out through grants, contracts, or cooperative agreements with eligible entities or with organizations or associations whose membership is composed of eligible entities or agencies that administer programs for eligible entities. One-half of 1 percent of the amount appropriated is apportioned on the basis of need among Guam, American Samoa, the Virgin Islands, the Northern Mariana Islands, and the Trust Territory of the Pacific Islands. 
Of the remaining amount, each state (excluding the above but including the District of Columbia and the Commonwealth of Puerto Rico) is allotted an amount that bears the same ratio to that remaining amount as the amount the state received for fiscal year 1981 (under section 221 of the Economic Opportunity Act of 1964) bore to the total amount received by all states for fiscal year 1981 under section 221. Funding totaled $389.6 million in fiscal years 1995 and 1996 and increased to $478.3 million in fiscal year 1997. In fiscal year 1997, grants provided for the states ranged from a low of $1.9 million for Alaska to a high of $43.6 million for California. In fiscal year 1994, the community action agencies spent approximately $357.4 million for a broad array of services. The largest share of the spending—$98.4 million—was for "linkages" among the various programs and services provided in the community. Linkage involves the coordination among programs, facilities, and shared resources in the community. The next largest category of spending—$73.1 million for emergency services—includes assistance to meet immediate or urgent individual or family needs, such as shelter, clothing, and medical help. Approximately $43.8 million was spent on nutrition programs, while education and employment initiatives received $35.4 million and $25.7 million, respectively. In addition, about $26 million was expended on formalized self-sufficiency programs, and about $25.4 million was spent on housing-related activities. Approximately $15.5 million was committed to health-related programs, and $14.1 million was devoted to income management programs. The laws and regulations governing the CSBG Program are silent on whether program funds may be used to relocate businesses. EPA is the lead federal agency for national environmental policy and is responsible for, among other things, providing state and local agencies with technical and financial assistance for antipollution activities. The Clean Water State Revolving Fund Program, administered by EPA, is a key federal program for improving water quality and protecting public health. The Clean Water State Revolving Fund Program, established by the Congress in 1987, provides that each state, including Puerto Rico, establish revolving funds that would serve as independent and permanent sources of financing for wastewater treatment facilities and other water quality projects in the state. Under the program, EPA provides states with annual grants to help capitalize the revolving funds. The states are required to match federal capitalization grants at a rate of at least $1 for every $5 in federal funds received by the state. All 50 states and Puerto Rico have established state revolving funds.
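As a simple illustration of this matching requirement, the sketch below computes the minimum state match and the resulting initial capitalization for a given federal grant; the grant amount shown is hypothetical.

```python
# Illustrative sketch of the Clean Water SRF matching requirement:
# states must contribute at least $1 for every $5 of federal
# capitalization grant funds (i.e., a 20-percent minimum match).

def minimum_state_match(federal_grant):
    return federal_grant / 5.0

def initial_capitalization(federal_grant):
    return federal_grant + minimum_state_match(federal_grant)

federal_grant = 25_000_000  # hypothetical $25 million capitalization grant
print(f"Minimum state match: ${minimum_state_match(federal_grant):,.0f}")
print(f"Fund capitalization (federal grant plus minimum match): "
      f"${initial_capitalization(federal_grant):,.0f}")
```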
States can use their revolving funds to make loans and to provide other types of assistance, such as refinancing local debt obligations to lower the cost of borrowing for communities. State revolving funds cannot be used to provide grants. According to EPA's Director, State Revolving Fund Branch, wastewater treatment facilities account for about 95 percent of the dollars in assistance provided by state revolving funds. The Clean Water Act requires that wastewater treatment projects funded by a state revolving fund must be on the state-prepared project priority list. The priority list identifies and ranks those treatment facilities that the state expects to fund. The act also requires that activities that a state intends to fund to control nonpoint sources of water pollution and to protect estuaries must be identified in the state's annual plan identifying the state's intended use of the fund. According to the Director, EPA State Revolving Fund Branch, states must base their decisions about which projects to include on their priority lists on public health and water quality factors, and the economic development of an area should not be a factor in a state's decision whether to place a project on its priority list. This official told us that the economic development of an area can be taken into consideration when designing the treatment project but that there are controls to ensure that a project's design allows only for reasonable growth. Nevertheless, EPA recognizes that while wastewater treatment projects are not funded solely to foster economic growth, the economic development of an area often occurs as an offshoot of such projects. In its report on the progress of the Clean Water State Revolving Fund Program issued in January 1995, EPA noted how investments in environmental infrastructure in the 1970s and 1980s to clean up the waterfronts in Cleveland, Pittsburgh, Seattle, and a number of other areas across the country led to a revitalization of many of the major urban areas. EPA annually allots funds appropriated by the Congress for capitalization grants to the states, including Puerto Rico, generally according to percentages specified in the Clean Water Act. Each state has until the end of the fiscal year after the fiscal year in which the grant funds are appropriated to obligate its grant. Grants that are not obligated by the end of the second fiscal year are to be reallotted by EPA among those states that have obligated all of their grant funds within the first fiscal year. The Congress began appropriating funds for capitalization grants in fiscal year 1989. EPA's data show that as of the end of May 1997, cumulative capitalization grants awarded to the 50 states and Puerto Rico for their revolving funds totaled about $13.6 billion. About $1.7 billion of this amount represents fiscal year 1996 capitalization grants. EPA anticipates that capitalization of the state revolving funds will continue until 2004. According to EPA's Acting Director, State Revolving Fund Branch, data from EPA's State Revolving Fund Management Information System show that as of June 30, 1996, facilities for the secondary treatment of wastewater accounted for about 50 percent of the total projects funded by state revolving funds. Other projects funded by the funds included combined sewer overflow projects, facilities to handle and treat sludge at water treatment plants, and projects to protect or restore streams, wetlands, and estuaries.
While title VI of the Clean Water Act, which authorizes the Clean Water State Revolving Fund Program, and EPA’s regulation governing the program place several restrictions on the use of state revolving funds, they are silent regarding the use of program funds for relocating businesses. For example, the regulations allow state revolving funds to provide assistance only for the publicly owned portion of treatment works, and a revolving fund may not provide loans for the nonfederal share of the cost of treatment projects that the recipient is receiving from EPA under other authority. Agriculture’s Water and Waste Disposal Program provides loans and grants to rural communities for funding water and waste disposal facilities that will improve the quality of life and promote the economic development of rural areas. To be eligible for the Water and Waste Disposal Program, a rural community must have a population of 10,000 people or fewer. Financial assistance is provided in the form of direct loans and/or grants from Agriculture or loans from commercial sources, a portion of which Agriculture guarantees against loss. Applicants must be unable to obtain financing from other sources at reasonable rates and terms. Grants are primarily provided for reducing the water and waste disposal costs to a reasonable level for users of the system. Entities eligible for Water and Waste Disposal loans and grants are municipalities, counties, Indian tribes, special purpose districts, and nonprofit corporations. Water and Waste Disposal loans and grants may be used to construct, repair, improve, expand, or modify rural water, sanitary sewage, solid waste disposal, and storm wastewater disposal systems. Facilities that may be funded include reservoirs, pipelines, wells, pumping stations, sewer systems, storm sewer systems, and solid waste disposal equipment, including garbage trucks, sanitary landfills, and incinerators. Funds can also be used to acquire land and water rights; to pay legal, engineering, and other fees associated with developing facilities; and to provide training and technical assistance for, among other things, identifying and evaluating solutions to water and waste disposal problems. Agriculture allocates grant and loan funds to its state offices by a formula that measures each state’s (1) percentage of the national rural population (50 percent), (2) percentage of the national rural population with incomes below the poverty line (25 percent), and (3) percentage of national nonmetropolitan unemployment (25 percent). No one state may receive more than 5 percent of the total funds available. The state offices then make the funds available to their district offices to support rural water and sewer projects proposed by local communities. All 50 states, Puerto Rico, the U.S. Virgin Islands, and the Western Pacific territories are authorized to receive funds. Before allocating the funds to the state offices, Agriculture sets aside about 10 percent of both loan and grant funds as a reserve for emergencies, cost overruns, and other unforeseen problems. The type of assistance that Agriculture provides for the community—either a loan or a combination of a loan and grant—depends on each community’s financial situation. According to Agriculture’s program regulations, grant funds are to be provided for projects serving financially needy communities to reduce user charges to a reasonable level. Agriculture headquarters officials consider a “reasonable” user charge to be one that the community can afford. 
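A simplified sketch of Agriculture's state allocation formula described above follows. The state names and shares are hypothetical, and the sketch does not show how Agriculture redistributes amounts trimmed by the 5-percent cap or how the reserve is later released.

```python
# Illustrative sketch of Agriculture's Water and Waste Disposal allocation
# formula: 50 percent weight on each state's share of the national rural
# population, 25 percent on its share of the rural population below the
# poverty line, and 25 percent on its share of nonmetropolitan
# unemployment, with no state receiving more than 5 percent of the total.
# All state shares below are hypothetical.

states = {
    # state: (rural_pop_share, rural_poverty_share, nonmetro_unemp_share)
    "State A": (0.08, 0.10, 0.09),
    "State B": (0.03, 0.02, 0.04),
    "State C": (0.01, 0.01, 0.01),
}

def allocate(total_funds, reserve_rate=0.10, cap=0.05):
    available = total_funds * (1 - reserve_rate)  # about 10 percent held in reserve
    allocations = {}
    for name, (pop, poverty, unemp) in states.items():
        weight = 0.50 * pop + 0.25 * poverty + 0.25 * unemp
        allocations[name] = min(weight, cap) * available
    return allocations

print(allocate(1_000_000_000))  # e.g., roughly $1 billion in annual program funding
```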
Agriculture’s state and district offices determine affordability on the basis of the (1) community’s median household income or (2) user charges for similar systems in the area. Agriculture has the discretion to decide which approach will be used to determine the amount of grant funds provided. Communities may also supplement Agriculture’s water and sewer funds with their own funds and funds from other federal, state, or private sources. While the laws and regulations governing the program place several restrictions on the use of the program funds, they do not address the use of funds for relocating businesses. For example, funds may not be used for building facilities that are not modest in design or cost, new combined storm and sanitary sewer facilities, or part of the project costs normally provided by a business or industrial user. Over the last 5 years, funding for Agriculture’s Water and Waste Disposal Program has averaged about $1 billion per year. In fiscal year 1996, Agriculture provided about $963 million in Water and Waste Disposal direct loans and grants. Of that amount, 617 direct loans worth about $389 million and 435 grants worth about $198 million were provided for rural water projects. Agriculture also provided 278 direct loans worth about $214 million and 233 grants worth about $162 million for rural waste disposal projects. Agriculture also guaranteed about $59 million in loans during fiscal year 1996. The Surface Transportation Program (STP) is a grant program created by the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) to develop and improve the nation’s surface transportation facilities. Administered by the Department of Transportation’s Federal Highway Administration, STP provides grants that states and localities can use to finance a variety of transportation-related projects. Except as noted below, STP funds cannot be used on roads classified as local roads and rural minor collectors. STP funds may be used for (1) construction, reconstruction, rehabilitation, resurfacing, restoration, and operational improvements of highways and bridges (including bridges on any public road); (2) capital costs for transit projects eligible for assistance under the Federal Transit Act and publicly owned intracity or intercity bus terminals and facilities; (3) carpool projects, fringe and corridor parking facilities and programs, and bicycle transportation and pedestrian walkways on any public roads; (4) highway and transit safety improvements and programs, hazard eliminations, projects to mitigate hazards caused by wildlife, and railway-highway grade crossings on any public roads; (5) highway and transit research and development and technology transfer programs; (6) capital and operating costs for traffic monitoring, management, and control facilities and programs; (7) surface transportation-planning programs; (8) transportation enhancement activities; (9) transportation control measures; (10) the development and establishment of management systems; and (11) wetland mitigation efforts. For STP projects, the normal federal share is 80 percent. When STP funds are used for interstate projects, the federal share may be 90 percent. The federal share may be increased to 95 percent in states with large areas of public lands and up to 100 percent for certain safety, traffic control, and carpool/vanpool projects. ISTEA established a requirement for a statewide planning process that takes into consideration all modes of transportation. 
The transportation-planning process must be carried out in cooperation with metropolitan planning organizations (MPOs), Indian tribal governments, transit operators, federal lands agencies, and environmental, resource, and permit agencies. The states are required to generate a long-range transportation plan that has a 20-year horizon and is based on realistic projections of available resources. The plan must consider all modes of surface transportation, and it must take into account a very wide range of environmental impact, recreation, and other factors. In addition to the long-range plan, the states are required to develop a statewide transportation improvement program (STIP) that includes all transportation projects that will receive federal transportation funding. The STIP must be consistent with the long-range plan and expected funding. Like the states, MPOs develop a long-range plan and a transportation improvement program (TIP) for each metropolitan area. Among other things, the long-range plan must include a financial plan and identify transportation facilities that function as an integrated transportation system. The TIP is developed in cooperation with the state and transit operators and must include all transportation projects to be funded. The TIP must be updated and approved at least every 2 years by the MPO and the state’s governor and have a reasonable opportunity for public comment prior to approval. Furthermore, the TIP must include a priority list and a financial plan that demonstrates how it can be implemented. Under ISTEA and the new project selection system, the STP gives localities, especially larger communities, an unprecedented level of control to select the surface transportation solutions that best fit their needs and preferences. Transportation projects are selected by the MPO in consultation with the state in areas with populations of greater than 200,000; by the state, in cooperation with the MPO, in areas with populations of between 50,000 and 200,000; and by the state, in cooperation with affected local officials, in areas with populations of less than 50,000. Transportation apportions STP funds to the states on the basis of historical federal highway funding that indirectly includes factors such as each state’s postal route mileage, land area, and urban and rural population. Each state must reserve 10 percent of the funds apportioned to it for safety construction activities (such as hazard elimination and rail-highway grade crossings) and 10 percent for transportation enhancements (such as the preservation of abandoned transportation corridors and the control and removal of outdoor advertising). Of the remaining funds, the state must distribute 62.5 percent between urbanized areas that have populations exceeding 200,000 and the remaining areas of the states in proportion to their relative share of the state’s population. States retain discretion over 37.5 percent of the remaining funds, which can be used in any area of the state. In fiscal year 1996, STP funds were obligated in the following manner: $416.6 million for safety construction activities, $426.9 million for transportation enhancements, $1 billion for urbanized areas with populations exceeding 200,000, $569.4 million for areas with populations of less than 200,000, $544.8 million for nonurban areas, and $2.8 billion for state discretionary projects in any area of the state. 
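The sketch below illustrates the within-state suballocation rules just described, using a hypothetical state apportionment and population split; it omits any additional set-asides or adjustments that may apply in a given year.

```python
# Illustrative sketch of how a state's STP apportionment is suballocated:
# 10 percent reserved for safety construction activities, 10 percent for
# transportation enhancements, 62.5 percent of the remainder distributed
# between large urbanized areas (population over 200,000) and the rest of
# the state by population share, and 37.5 percent of the remainder left
# to state discretion. The dollar amount and population share are hypothetical.

def suballocate(apportionment, large_urban_pop_share):
    safety = 0.10 * apportionment
    enhancements = 0.10 * apportionment
    remainder = apportionment - safety - enhancements
    distributed = 0.625 * remainder
    return {
        "safety construction": safety,
        "transportation enhancements": enhancements,
        "urbanized areas over 200,000": distributed * large_urban_pop_share,
        "remaining areas of the state": distributed * (1 - large_urban_pop_share),
        "state discretionary": 0.375 * remainder,
    }

# Hypothetical: a $100 million apportionment in a state where 60 percent of
# the relevant population lives in urbanized areas of more than 200,000.
for category, amount in suballocate(100_000_000, 0.60).items():
    print(f"{category}: ${amount:,.0f}")
```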
STP funds apportioned to the states by Transportation totaled approximately $3.9 billion for fiscal year 1995, $3.4 billion for fiscal 1996, and $3.9 billion for fiscal 1997. Individual state apportionments in fiscal year 1997 varied from a low of approximately $12.6 million for Massachusetts to a high of over $317 million for California. The laws and regulations governing the STP are silent on whether program funds may be used to relocate businesses. The following are GAO’s comments on the Department of Commerce’s letter dated July 28, 1997. 1. We reviewed the suggested editing changes and incorporated them into the report where appropriate. 2. It was not our intent to infer that any particular federal program has been used to lure jobs from one location to another. We deleted a statement from our draft report regarding the citing of federal programs used in incentive packages because of the concern and confusion it has caused. 3. At the end of our report, we note that Commerce has requested proposals for a research project that will (1) develop a tool to evaluate state incentive programs and (2) make recommendations on the appropriate federal role with regard to locational incentives. We also note that Commerce expects the final report for the project to be completed by June 1998. The following are GAO’s comments on the Department of Health and Human Services’ letter dated July 29, 1997. 1. At the end of the report, we discuss HHS’ position regarding a policy interpretation that HUD gave to the Atlanta EZ in a November 1996 memorandum. Because it is unclear as to the exact role that each agency played in developing the policy, we revised the report to include HHS’ position in the report sections that discuss the memorandum; we did not delete the phrase “after conferring with HHS” because it is contained in the memorandum. The following are GAO’s comments on the Department of Housing and Urban Development’s letter dated July 25, 1997. 1. In its enclosure, HUD provided suggested clarification regarding the definition of communities that are eligible to receive CDBG entitlement funds, which we incorporated into the report where appropriate. We also deleted a statement from the draft report regarding the citing of federal programs used in incentive packages because of the concern and confusion it caused. It was not our intent to infer that any particular federal program has been used to lure jobs from one location to another. 2. At the end of our report, we discuss HUD’s intention to issue guidelines clarifying its position on the use of SSBG funds for the relocation of jobs. We also note that a draft of the guidelines was included as an attachment to HUD’s comments. The following are GAO’s comments on the Department of Labor’s letter dated July 21, 1997. 1. At the end of our report, we discuss Labor’s observation that while our report deals with title III of JTPA and the services that are available to eligible dislocated workers, section 141 of JTPA deals with all training programs under the act, including those involving disadvantaged youths and adults, migrant and seasonal farm workers, Native Americans, and older Americans. We agree and make this point in the report. The sections of our report that discuss the relocation prohibition under section 141 of JTPA state specifically that this prohibition applies to all funds provided under JTPA. 2. 
At the end of our report, we also discuss Labor's comment that, in addition to the relocation prohibition, there is a prohibition against using JTPA funds for economic development or employment-generating activities and that it is important to put this into the context of the training services that are available to help dislocated workers to return to the workforce. We agree with Labor and have added language to our final report to reflect these prohibitions. 3. In an attachment and in margin notes on a draft of our report, Labor provided additional comments. This included updated statistics on the number of individuals provided with training services under title III during program year 1995, which we included in our final report. Labor also made specific technical suggestions and observations to improve the clarity and accuracy of information in the report regarding (1) individuals who qualify as eligible dislocated workers under title III, (2) the two funding and service schemes for eligible dislocated workers under title III, (3) preaward reviews conducted by substate grantees and businesses to ensure compliance with the nonrelocation requirement, and (4) the responsibility of state and substate agencies for monitoring and ensuring compliance with the JTPA relocation prohibition. We have incorporated these changes in our final report where appropriate. Labor also suggested that the section of the report that discusses the different types of incentives and the role they may play in business relocation mentions that JTPA funds cannot be commingled with other federal or state funds and that JTPA includes a "maintenance of effort" requirement that JTPA funds be used only for activities that are in addition to those that would otherwise be available in the absence of such funds. We did not include Labor's suggested language because this section of our report does not discuss any of the eight programs but is an overview discussion of incentives and their general role in business relocation. Austin Kelly, Erin Lansburgh, Sally Moino, Stan Ritchick, Rick Smith
Pursuant to a congressional request, GAO reviewed the economic development activities of eight federal programs, focusing on: (1) the economic development activities that major federal programs fund for the benefit of states and communities; (2) restrictions for using program funds to relocate existing businesses and jobs; (3) procedures that federal agencies established to ensure compliance with the restrictions; and (4) types of incentives that states and communities have used to attract businesses and the role that incentives may play in a business's decision to relocate. GAO noted that: (1) funds for the eight programs GAO examined can be used for a variety of economic development activities; (2) three of the programs, the Economic Development Administration's Public Works and Development Facilities Program, the Department of Housing and Urban Development's (HUD) Community Development Block Grant Program, and HUD's and the Department of Agriculture's Empowerment Zone and Enterprise Community Program, fund activities that focus primarily on the economic development of distressed areas; (3) two of the programs, the Department of Labor's Employment and Training Assistance for Dislocated Workers Program and the Department of Health and Human Services' Community Services Block Grant Program, focus on improving the economic viability of individuals by funding activities that help unemployed individuals qualify for and find new jobs and help low-income individuals and families obtain adequate jobs, education, nutrition, and housing; (4) the three remaining programs, the Environmental Protection Agency's Clean Water State Revolving Fund Program, Agriculture's Water and Waste Disposal Program, and the Department of Transportation's Surface Transportation Program, fund infrastructure projects in the form of wastewater treatment projects and other water quality projects and highway, mass transit, or other transportation projects where economic development of an area may be an offshoot; (5) of the eight programs, three have restrictions against using funds to relocate jobs, four do not address the issue of using funds to relocate jobs, and under one, legislation that would impose prohibitions against relocating jobs is pending; (6) agencies responsible for programs with relocation restrictions rely on various procedures to ensure compliance with the prohibitions; (7) states may use a variety of incentives, such as tax concessions, financial assistance, and other benefits, to encourage economic development and attract businesses; (8) also, when federal funds are used for an activity that the state or community would have undertaken anyway, those federal funds free up state money for some other activity, including incentives to attract businesses; and (9) studies have shown that when making decisions to locate in a particular area, businesses consider a variety of factors, such as workers' productivity, the efficiency of transportation facilities, and the community receptivity; incentives may or may not be a major factor in a firm's decision to locate to a particular area.
Several legislative initiatives enacted during the past decade have emphasized the potential of IT to improve the federal government's performance. For example, the Paperwork Reduction Act of 1995 (PRA) requires the Director of the Office of Management and Budget (OMB) to "promote the use of information technology to improve the productivity, efficiency, and effectiveness of Federal programs, including through dissemination of public information and the reduction of information collection burdens on the public." The Information Technology Management Reform Act of 1996 (known as the Clinger-Cohen Act) also requires the OMB Director to "promote and be responsible for improving the acquisition, use, and disposal of information technology by the Federal Government to improve the productivity, efficiency, and effectiveness of Federal programs, including through dissemination of public information and the reduction of information collection burdens on the public." Additionally, the Government Paperwork Elimination Act (GPEA) requires the OMB Director to ensure that federal agencies "provide for the option of electronic maintenance, submission, or disclosure of information, when practicable as a substitute for paper" by October 2003. GPEA's full implementation will give individuals and organizations the option to submit information or transact business with agencies electronically. Executive branch initiatives have also encouraged the use of IT in the federal government. For example, in September 1993, the National Performance Review (later the National Partnership for Reinventing Government) announced a set of recommendations that were intended to improve government by reengineering through the use of information technology. Those recommendations included the development of integrated electronic access to government information and service; the creation of a national environmental data index; and the use of IT and other techniques "to increase opportunities for early, frequent, and interactive public participation during the rulemaking process and to increase program evaluation efforts." In July 1996, President Clinton issued Executive Order 13011 on "Federal Information Technology," which, among other things, established a Chief Information Officers Council (CIO Council) as the principal interagency forum to improve agency information resource management and to "share experiences, ideas, and promising practices." A December 17, 1999, presidential memorandum on electronic government noted that "as public awareness and Internet usage increase, the demand for online Government interaction and simplified, standardized ways to access Government information and services becomes increasingly important" and directed federal agencies to take steps to address that growing demand. Additionally, this directive called for the establishment of a "one stop" gateway to government information available on the Internet. The federal government has taken some steps to establish electronic gateways that provide one-stop access to information from a variety of agencies, including regulatory agencies. For example, the "FirstGov" Web site (www.firstgov.gov), which was launched on September 22, 2000, provides links to all on-line federal resources—from applying for student loans to tracking Social Security benefits. Also, the U.S.
Business Advisor site (www.business.gov) provides businesses with one-stop access to federal information on such topics as taxes, international trade, financial assistance, and laws and regulations. The laws and regulations link allows users to connect with the Federal Register, the United States Code, and compilations of laws and regulations affecting small businesses. The U.S. Business Advisor site was created by the Small Business Administration (SBA), the National Performance Review, and an interagency task force and is maintained and funded by SBA. In our report issued last year, we identified a number of examples of how federal agencies were using IT to facilitate public participation in rulemaking. Although all of the departments and agencies we contacted were developing some type of IT-based participation vehicles, officials and staff in those agencies questioned the need for standardization of those practices across agencies. They said that agencies need to be able to design their procedures to fit their particular circumstances, and that standardization would require scarce agency resources. However, agency officials and staff were supportive of efforts to better coordinate the use of those participation mechanisms to avoid each agency’s reinventing the wheel. OMB’s Office of Information and Regulatory Affairs (OIRA), which was created by the PRA of 1980, is responsible for providing guidance and oversight for both IT and regulatory issues. The OIRA Administrator sits on the CIO Council, which is chaired by OMB’s Deputy Director for Management. Executive Order 12866 identifies OIRA as “the repository of expertise concerning regulatory issues” and makes the office responsible for coordinating agencies’ regulatory missions. The executive order also established a Regulatory Working Group that is chaired by the OIRA Administrator and is comprised of representatives of the heads of each agency with significant domestic regulatory responsibilities. The order also says that the Regulatory Working Group “shall serve as a forum to assist agencies in identifying and analyzing important regulatory issues,” including “the development of innovative regulatory techniques.” OIRA has taken some steps to encourage the use of IT specifically to improve regulatory management in federal agencies. For example, in April 2000, the OIRA Administrator launched an initiative focusing on using IT to improve the quality of the information that the government collects, while minimizing the burden. The initiative began with a public forum that featured senior officials from a number of federal regulatory agencies presenting information on their agencies’ initiatives, followed by a series of roundtable discussions. Additionally, OIRA and OMB have provided guidance to agencies on a variety of information policy issues, including the implementation of GPEA, privacy, and data exchanges with the states. The guidance applies to regulatory management as well as other agency functions. 
The objectives of our review were to identify (1) examples of how federal agencies are using IT innovatively, either individually or in collaboration with other agencies or levels of government, to facilitate regulatory management; (2) examples of how state regulatory agencies are using IT innovatively to facilitate regulatory management; (3) IT applications that representatives of nongovernmental organizations believe could be more widely used by federal regulatory agencies; and (4) what officials and staff in federal and state regulatory agencies and nongovernmental organizations believe are the key factors that facilitate or hinder the adoption and diffusion of IT applications in regulatory management. We focused our efforts regarding the first objective on the Departments of Agriculture (USDA); Labor (DOL); Health and Human Services (HHS); and Transportation (DOT) and the Environmental Protection Agency (EPA). We selected these agencies because they are primarily responsible for federal health, safety, and environmental regulations that have been the target of reform initiatives. In each agency, we identified the IT and regulatory management officials and staff to interview, either through our designated liaisons or through publications that featured relevant IT applications, including the agencies’ Web sites and agency documents. We asked each of these officials and staff to identify IT-based regulatory management applications that they considered innovative. We did not attempt to define the word “innovative” but made it clear that the application should not simply be that the agency had a page on the Web. We also obtained information on the agencies’ innovative or “best practice” uses of IT in regulatory management from individuals and groups focusing on regulatory reform, including the Regulatory Working Group, the Council for Excellence in Government, and academics. In each agency, we asked a series of structured questions that were keyed to our reporting objectives. For each of the federal IT-based regulatory management applications that the agencies or others identified, we conducted a structured, follow-up interview that was designed to obtain more detailed information from relevant agency officials. Specifically, we asked, among other things, for a detailed description of the innovation and for information on the regulatory purpose(s), status, scope, and results of the IT-based application. Additionally, we asked about lessons learned, including obstacles and facilitators to development. We also reviewed information on the innovations on agencies’ Web sites and other relevant documents. In some cases, the innovations we identified were primarily located in one part of the agency. For example, in HHS, the innovations identified for this study were primarily in the Department’s Food and Drug Administration (FDA). Additionally, in some instances, we selected certain innovations for presentation in this report from a longer list of suggestions that was provided by the agency. For example, FDA officials provided a list of more than a dozen applications that they considered innovative. Working with FDA officials, we selected applications for inclusion in this report that represented different types of functions. For the second objective, we interviewed officials from organizations representing state governments (e.g., the National Governors Association and the Environmental Council of the States) to identify promising regulatory IT applications at the state level. 
Again, we allowed these organizations to define the word innovative. We also identified state IT applications in the regulatory arena that other organizations (e.g., the Council for Excellence in Government and the National Association of State Information Resource Executives) or publications identified as examples of best practices. For each of the identified state IT-based applications, we talked to officials or staff in the state agencies involved in the development and/or implementation of the application and reviewed information on the agencies’ Web sites. For the third objective, we interviewed representatives of business associations, consumer advocacy groups, and academic centers that deal with regulatory reform issues. Although we recognize that there are numerous organizations that are interested in regulatory issues, we judgmentally selected these nongovernmental organizations to contact because they have been actively involved in recent regulatory reform initiatives and represent alternative perspectives on regulatory reform. We contacted the following organizations: the National Federation of Independent Businesses, the National Association of Manufacturers, the U.S. Chamber of Commerce, the American Hospital Association, the Natural Resources Defense Council, Public Citizen, OMB Watch, Information Renaissance, the American Bar Association, the Heritage Foundation, Citizens for a Sound Economy, the Mercatus Center of George Mason University, American University’s Washington College of Law, and Washington University’s Center for the Study of American Business. Some of these organizations provided extensive information, while others gave us more limited answers to our questions. We also reviewed available Web sites for the organizations and looked at relevant publications discussing IT applications that may have potential for improving federal regulatory management. For the fourth objective, we asked all of the individuals that we interviewed what they viewed as the key factors that facilitate or hinder the adoption and diffusion of IT applications in regulatory management. Our review was intended to provide examples of innovative IT-based applications in regulatory management and should not be viewed as a compendium of all such applications, even within the federal agencies and states that are the focus of this report. Also, the suggestions offered by representatives of the nongovernmental organizations in relation to the third objective are not intended to be comprehensive of all possible suggestions. We did not attempt to validate federal or state agency officials’ views or data regarding the performance of the innovations that they identified. We conducted our work between June 2000 and December 2000 in Washington, D.C., in accordance with generally accepted government auditing standards. We provided a draft of this report to the Director of OMB for his review and comment. OMB officials said that they had no comments on the draft report. We also provided federal and state agency officials with the relevant draft report sections attributed to them to ensure that we correctly characterized their systems and comments. All of the federal agencies included in our review were using some form of IT to improve regulatory management and to meet legislative and executive branch mandates in this area. 
The applications that the agencies and others identified as innovative covered all of the dimensions of regulatory management that we examined, and most applications covered more than one dimension. Most of the applications involved using IT to improve traditional regulatory management approaches within their agencies. Other applications were more interactive in nature and appeared to change the nature of the relationship between regulatory agencies and the relevant public. A few of the applications attempted to address issues involving interagency or intergovernmental coordination. Although agency officials were able to identify perceived benefits for the innovations, few agencies had performance data clearly demonstrating the effect of the innovations on the agencies’ effectiveness or efficiency, burden reduction, or other regulatory outcomes. The innovative IT-based applications that attempted to improve traditional regulatory management approaches addressed several of the different dimensions of regulatory management—rulemaking, information collection, compliance assistance, information dissemination, and other compliance/enforcement actions. Many of these applications also had implications for burden reduction and/or improved transparency of the regulatory process. Several of the federal IT-based applications and initiatives that agency officials and others identified as innovative were attempting to improve the internal management of the rulemaking process. DOT’s Docket Management System (DMS) is an electronic, image-based database covering every agency and every rulemaking within the Department. A DOT official said that the DMS not only offered easier access to rulemaking materials to the public, but it also made it easier for DOT lawyers, analysts, managers, and others involved in the rulemaking to find the information they needed when they needed it. For example, they said agency professionals could review public comments on proposed rules at their desks or even from their homes as they develop final rules. As noted in our previous report, the DMS has become the official rulemaking record for the Department, enabling DOT to save more than $1 million each year in administrative costs. USDA’s Risk Management Agency—the agency responsible for crop insurance programs—developed an Internet-based Regulatory Processing Management Tracking System that monitors proposed and final rules through all steps in the rulemaking process. The system permits agency employees and others to identify planned regulations and their estimated time frames, the status of rules being developed (including the number of days in each processing step), and the next steps required in the process. The system also has a forecasting feature that allows users to develop a list of process steps required for publication of rules and to calculate estimated dates of publication that are based on best and worst case scenarios. Other features are planned for the future, and other agencies within USDA have expressed interest in developing similar systems. DOT’s Federal Aviation Administration (FAA) has developed an Internet-based Integrated Rulemaking Management Information System (IRMIS) to track the status of rulemaking projects, including their corresponding schedules and associated documents. IRMIS also provides users with access to other rulemaking-related systems, including DOT’s DMS; federal regulations; and the agency’s Regulatory Guidance Library. 
DOT officials said the Department expects to implement a DOT-wide tracking system within a year that will interact with IRMIS and other agency tracking systems. Several agencies have developed IT-based applications that involve the collection of information through some form of electronic reporting. Two agencies' efforts in this area are particularly noteworthy—EPA and FDA. EPA has established a central Office of Environmental Information (www.epa.gov/oei/) to coordinate the agency's information collection and dissemination activities and to develop integrated, standardized collections of information (among other things). EPA is also taking a number of actions to make electronic reporting available to all regulated communities for all environmental compliance reports, including (1) developing electronic data interchange (EDI) standards; (2) developing user-friendly Web-based forms, which would be appropriate for electronic reporting by companies that are not EDI-capable; and (3) implementing a "central data exchange facility" to provide a single, one-stop point of entry for data submitted to EPA. In addition, the agency is developing electronic reporting and recordkeeping best practices and implementation support to help state and local agencies accept electronic reports under EPA-delegated programs. EPA does not yet have any data on the amount of burden actually reduced through the use of electronic reporting. However, on the basis of industry experience with electronic commerce, EPA officials estimated that these initiatives could ultimately reduce regulated entities' paperwork time and costs by as much as 20 percent for a given entity, allow EPA and state and local agencies potentially to save millions of dollars in processing costs, and reduce data entry errors. FDA's Operational and Administrative System for Import Support is an automated system for processing and making admissibility determinations for shipments of foreign-origin, FDA-regulated products seeking to enter domestic commerce. Agency officials said that admissibility decisions are transmitted to importers' agents within minutes after shipment data are electronically submitted to FDA, and that 85 percent of shipments are cleared without any submission of paper. Automated screening functions also reportedly enhance FDA's ability to detect problems, thereby keeping certain products from entering the country. An FDA contractor estimated that the system would save the import industry $1.2 billion during a 7-year period, and FDA believes that the system will also improve the effectiveness and productivity of agency employees. This system has won a number of awards, including the CIO Council's and Industry Advisory Council's 1998 Best IT Practices in the Federal Government and Government Executive magazine's 1998 Government Technology Leadership Award. FDA's Center for Devices and Radiological Health has developed an Internet-based Mammography Program Reporting and Information System to support the agency's statutorily mandated responsibility for certification and inspection of all mammography facilities in the United States. The system permits the electronic tracking and monitoring of a facility's accreditation, certification, inspection, and compliance history. FDA and state inspectors use laptop computers to record inspection results and send the results to a centralized database, which is also used by FDA-approved accreditation bodies.
The system allows access to data from all authorized user locations and was built to accommodate a variety of users' computing environments. FDA's Center for Food Safety and Applied Nutrition's Voluntary Cosmetics Registration System provides Internet-based access to a database that allows cosmetic companies to obtain a registration number and subsequently submit formulation information and ingredient lists to the center in a secure manner. FDA officials said that cosmetic companies are more willing to voluntarily register with the agency through the system because it reduces the amount of time the companies spend registering and submitting information. FDA's Center for Biologics Evaluation and Research has designed and implemented an Electronic Regulatory Submission Review Program to support the required performance goals in the Prescription Drug User Fee Act and proposed international standards. The purpose of the program is to move from a largely paper-based, regulatory submission and review environment toward one that works with an all-electronic regulatory submission. Agency officials said that the program would enable the efficient receipt, viewing, storage, and archiving of electronic submissions, thereby allowing access to information from any reviewer's desktop and automating analytical and administrative processes. Compliance assistance has long been recognized as a way to reduce the burden associated with federal regulations, but such efforts have not always proved successful. In our 1996 report on federal regulatory burden, we noted that, according to federal agencies, several private sector companies we contacted during our review had misstated or misinterpreted statutory or regulatory requirements, sometimes incurring unnecessary expenses. Some of the companies told us that it was difficult to obtain clear compliance information from federal agencies. We observed in our report that the mechanisms agencies used to provide information on regulatory requirements appeared fragmented both between and within agencies, and that this fragmented approach may be contributing to ineffective communication between regulatory agencies and the business community. Some of the IT-based applications that agency officials and others identified as innovative during this review were intended to inform regulated entities of their responsibilities under applicable statutes and regulations. EPA has partnered with industry associations, environmental groups, universities, and other government agencies to create 10 compliance assistance centers for specific sectors, many of which are heavily populated with small businesses and other small entities. (See www.assistancecenters.net.) Sectors served by the centers include agriculture, automotive services and repair, metal finishers, printing, transportation, local governments, and federal facilities. EPA manages two of the centers (agriculture and federal facilities), with the other eight managed by organizations outside of EPA. The centers offer a range of communication services, including Internet sites, E-mail groups, fax-back systems, and telephone assistance hotlines. Information provided through these mechanisms includes plain-language compliance guides, updates on industry-specific regulatory developments, on-line access and search capabilities for state regulations, and training and satellite conferences.
According to EPA, the centers were used more than 400,000 times by regulated entities and the public in fiscal year 2000, a 56-percent increase from fiscal year 1999. DOL’s Occupational Safety and Health Administration (OSHA) offers electronic Compliance Assistance Tools (e-CAT) that help businesses identify workplace hazards in specific areas. They also provide safety and health information to help businesses address the identified hazards. (See www.osha-slc.gov/dts/osta/oshasoft/osha-advisors.) The six available e-CATs cover compliance requirements for baggage handling, nursing homes, the logging industry, respirator protection, silica protection, and lockout/tagout inspections. Although all of the selected agencies had IT-based systems to provide information to the public, four EPA systems were particularly noteworthy. “Envirofacts” (www.epa.gov/enviro/) is an Internet-based system that allows users to retrieve environmental information about different media and issues (e.g., air and water quality, hazardous wastes, and toxic releases) from several EPA databases. Envirofacts also includes (1) mapping programs that allow users to identify sources of pollution within the users’ community and (2) a Facility Registry System database that provides a single, integrated source of comprehensive information about particular facilities. Envirofacts has received numerous awards, including the Government Computer News Agency Excellence Award in 2000 and the 1999 Government Technology Leadership Award. EPA’s Integrated Data for Enforcement Analysis (IDEA) database (www.epa.gov/oeca/idea/) is a comprehensive source of environmental performance information on any EPA-regulated facility, retrieving data from across agency program offices. The database provides federal and state employees with facility-specific historical profiles of inspections, enforcement actions, penalties assessed, toxic chemicals released, and emergency hazardous spills. Public users can obtain access to certain information in the system by registering with EPA and paying for computing services. EPA has also developed a separate but related Sector Facility Indexing Project (SFIP) database (www.epa.gov/oeca/sfi) to provide information from the IDEA database to the public in a more user-friendly and accessible manner. SFIP currently provides information about compliance and enforcement history, pollutant releases and spills, production capacities, and the demographics of the surrounding community for facilities in five industrial sectors: pulp mills, petroleum refining, automobile assembly, iron and steel, and primary nonferrous metals. EPA officials said they plan to expand the database to include federal facilities in the near future. EPA’s “AirNow” Program (www.epa.gov/airnow/) is a Web site that provides environmental information to the public through links to regional cameras that show air quality in various parts of the country. The site also provides public health information on the environmental effects of air pollution, featuring interactive ozone maps, air quality forecasts, and health advisories that help keep users informed about the air they breathe. The site won a Government Technology Leadership award in 1998 and was selected by Government Executive magazine as one of the “Best Feds on the Web” for 2000. 
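Systems such as Envirofacts and IDEA follow a common integration pattern: records that separate program offices keep about the same facility are pulled together, using a shared facility identifier, into a single historical profile. The short Python sketch below illustrates only that general pattern. The record layouts, field names, and sample values are hypothetical and are not EPA's actual schemas or code.

```python
# Hypothetical extracts from separate program data sets, each keyed by a
# common facility identifier. Real agency databases are far larger and
# use different schemas; these records exist only to show the join pattern.
INSPECTIONS = [
    {"facility_id": "FAC-001", "date": "1999-05-12", "program": "air"},
    {"facility_id": "FAC-001", "date": "2000-03-02", "program": "water"},
]
ENFORCEMENT_ACTIONS = [
    {"facility_id": "FAC-001", "date": "1999-08-30", "penalty": 25000},
]
TOXIC_RELEASES = [
    {"facility_id": "FAC-001", "year": 1999, "chemical": "toluene", "pounds": 1200},
]


def build_facility_profile(facility_id):
    """Assemble one facility's historical profile from each program data set."""
    profile = {
        "facility_id": facility_id,
        "inspections": [r for r in INSPECTIONS if r["facility_id"] == facility_id],
        "enforcement_actions": [r for r in ENFORCEMENT_ACTIONS
                                if r["facility_id"] == facility_id],
        "toxic_releases": [r for r in TOXIC_RELEASES
                           if r["facility_id"] == facility_id],
    }
    # A derived summary figure of the kind a public-facing profile might show.
    profile["total_penalties"] = sum(a["penalty"]
                                     for a in profile["enforcement_actions"])
    return profile


if __name__ == "__main__":
    print(build_facility_profile("FAC-001"))
```

The design point the sketch makes is simply that a shared identifier is what allows otherwise independent program databases to be presented to the public as a single, integrated facility record.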
The applications previously discussed, although innovative in many respects, are not interactive or transactional in nature and generally do not represent significant departures from traditional regulatory management functions. On the other hand, a few of the applications that the agencies and others identified as innovative have gone beyond the traditional constructs and provide new forms of interaction with the public. One such application is DOL’s set of Employment Laws Assistance for Workers and Small Businesses (elaws) advisors. (See http://www.dol.gov/elaws/.) Elaws is a set of interactive advisors that is available on the Internet to help workers and small businesses understand their rights and responsibilities under federal employment laws and regulations. Each advisor imitates the interaction that an individual might have with a DOL employment law expert, asking questions and providing answers that are based on the responses provided. For example, the Confined Spaces Advisor leads the user through a series of questions designed to determine whether a particular business is covered by the applicable regulations. Among other things, the advisor asks whether the space in question is large enough for a worker to enter bodily; is configured so that a worker can perform work inside; has a restricted entry or exit; is designed for continuous worker occupancy; has a hazardous atmosphere; and has a floor that slopes down to a narrower cross section. At the end of this series of questions, the advisor informs the user whether OSHA considers the space in question to be a confined space, and whether a permit is required for its use. The advisor also directs the user to an overview of OSHA guidance on permit-required confined spaces. The elaws advisors differ in the types of interactions they support. For example, the Posters Advisor not only allows business owners to identify any DOL-required posters their business must display, but also allows them to print the required posters. Other advisors help users fill out required forms and submit them electronically. As of November 2000, DOL had elaws advisors covering a variety of issues and DOL-administered statutes, including the Fair Labor Standards Act, the Drug-Free Workplace Act of 1988, and the Family and Medical Leave Act. According to DOL, the various advisors were accessed more than 450,000 times during fiscal year 2000, and their use is increasing. In addition to its on-line Fire Safety and Confined Spaces advisors as part of the DOL elaws system, OSHA also has a set of downloadable expert advisors. These advisors run on personal computers and enable businesses and others to receive answers off-line on how OSHA regulations apply to their work sites. (See www.osha-slc.gov/dts/osta/oshasoft/.) An OSHA official said that off-line advisors allow users to input detailed information about their companies without privacy or enforcement concerns associated with on-line systems connected to the agency. The OSHA advisors include (1) a Hazard Awareness Advisor to identify hazards in general industry workplaces; (2) an Asbestos Advisor for building owners, managers, and others; and (3) a Lead in Construction Advisor to help clarify the coverage of OSHA’s rule, the use of exposure data, and other issues. In each of these advisors, users are interviewed about relevant issues; asked follow-up questions that are based on the answers previously provided; and, in most cases, provided a written report tailored to the circumstances described. 
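To give a rough sense of the question-and-answer logic these advisors embody, the Python sketch below walks a user through a few yes/no questions modeled loosely on the confined-space determination described above. The questions and decision rules are condensed for illustration only; they are not OSHA's actual regulatory criteria, nor DOL's elaws code.

```python
def ask(question):
    """Prompt for a yes/no answer and return True for yes."""
    while True:
        answer = input(question + " (y/n): ").strip().lower()
        if answer in ("y", "yes"):
            return True
        if answer in ("n", "no"):
            return False
        print("Please answer y or n.")


def confined_space_advisor():
    """Simplified, hypothetical confined-space determination."""
    large_enough = ask("Is the space large enough for a worker to enter bodily?")
    can_work_inside = ask("Is it configured so that a worker can perform work inside?")
    restricted_entry = ask("Does it have a restricted means of entry or exit?")
    continuous_occupancy = ask("Is it designed for continuous worker occupancy?")

    is_confined = (large_enough and can_work_inside and
                   restricted_entry and not continuous_occupancy)
    if not is_confined:
        return "Under these simplified rules, the space is not a confined space."

    # Follow-up questions are asked only when the earlier answers warrant them,
    # which is the branching behavior the advisors are built around.
    hazardous_atmosphere = ask("Does it have, or could it have, a hazardous atmosphere?")
    sloping_floor = ask("Does the floor slope down to a narrower cross section?")
    if hazardous_atmosphere or sloping_floor:
        return ("The space appears to be a permit-required confined space; "
                "consult the applicable OSHA guidance for the governing criteria.")
    return "The space appears to be a confined space that does not require a permit."


if __name__ == "__main__":
    print(confined_space_advisor())
```

The essential feature, as with the elaws advisors, is that each answer narrows the set of follow-up questions and the final report is tailored to the circumstances the user described.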
In August 2000, the Ford Foundation and Harvard University’s John F. Kennedy School of Government named OSHA’s expert advisors as a finalist in the Innovations in American Government Award, which recognizes responsive and innovative government programs. Another interactive IT-based application used in federal regulatory management is DOT’s “Do It Yourself” (DIY) system. (See http://diy.dot.gov.) The DIY system was developed by DOT’s finance office and permits regulated entities to file for required licenses and certifications and to make related payments using a credit card through a central DOT Web site or through the regulating agency’s Web site. DOT agencies using the DIY system as of November 2000 included, among others, (1) the Federal Motor Carrier Safety Administration (FMCSA) (for registration applications, insurance payments, and fine payments); (2) the National Highway Traffic Safety Administration (for import fee payments, technical information services payments, and Freedom of Information Act request payments); and (3) FAA (for aircraft registration and airman data). Users are guided through a series of screens that ask for confirmation of transaction requests and address information. Built-in error messages in the on-line forms help users complete the forms correctly, thereby eliminating rework for both the users and the agency. The final screens take the user through the credit card part of the transaction and provide the user with a transaction number that can be used to track orders. DOT officials said that, in most cases, users should be able to complete the transactions in 10 minutes or less. The DIY system was designed to provide better service to customers, reduce paperwork, and introduce efficiencies into DOT’s operations, and agency officials said the system has already demonstrated its effectiveness. For example, they said the DIY system has helped FMCSA eliminate a 5-week backlog of applications from truckers wanting to engage in interstate hauling because staff no longer have to contact truckers about errors and no longer have to follow a series of steps to process the application and payment. A few of the federal regulatory management innovations that the agencies or others identified involved interagency or intergovernmental cooperation—one EPA effort involving the states and separate efforts at DOT and FDA involving multiple federal agencies. EPA is working with the states to develop a “national environmental information exchange network” that the agency believes can improve both the quality and access to environmental data. The exchange network is a voluntary, standards-based system that links different state systems and EPA systems, using common language and secure connections through the Internet. In October 2000, a team comprising participants from EPA, individual states, and the Environmental Council of the States released a Blueprint for a National Environmental Information Exchange Network that lays out the network design and partnership agreements for implementing the network. Both federal and state officials consider coordination and cooperation between EPA and the states essential to successful implementation of the environmental electronic reporting initiative previously discussed. DOT has been involved in an interagency effort to develop an Integrated Government-wide International Trade Data System (ITDS) that the developers hope will coordinate the collection, use, and dissemination of international trade information. 
When fully developed (in an estimated 5 to 6 years), ITDS will be the public and interagency interface for all international trade and transportation transactions for the movement of cargo in either direction across U.S. borders. ITDS goals include improving compliance with trade requirements; reducing burden on both the trade community and the government; and providing more accurate, timely, and thorough international trade data. According to system developers, ITDS will provide the primary inspector with “one look” at the truck, its goods, and the driver’s compliance with key federal requirements before the truck enters the United States. Truckers will electronically file transport declarations and goods declarations before arriving at the port of entry. ITDS will pass relevant data to the agency for selective screening and determination of appropriate action. In October 2000, DOT’s FMCSA agreed to participate in the first deployment of ITDS at the federal ports of entry in Buffalo, NY, in 2001. Also expected to participate in the Buffalo pilot are the Customs Service within the Department of the Treasury, the Immigration and Naturalization Service within the Department of Justice, FDA, and the trade and transportation communities. Ensuring the safety of the nation’s food supply is the responsibility of an interlocking monitoring system that watches over food production and distribution at every level of government—local, state, and national. Given the complex set of food safety laws, regulations, and responsibilities, even obtaining information about which entity has responsibility for what task can be daunting. In 1997, the Clinton administration created a Food Safety Initiative to strengthen the fight against food-borne illnesses, which afflict between 6.5 million and 33 million Americans every year. The President directed the Secretaries of the U.S. Department of Agriculture and HHS and the Administrator of EPA to identify ways to further improve the safety of the food supply. One outgrowth of the Food Safety Initiative has been the development of a gateway Web site (www.foodsafety.gov) that is maintained by the FDA’s Center for Food Safety and Applied Nutrition. The site provides links to a wide range of information on food safety, including information on relevant laws, regulations, and enforcement responsibilities. Also included are links to dozens of federal, state, and local agencies involved in food safety and buttons on the site’s home page that provide safety alerts and methods to report illnesses and product complaints. A number of agencies in the state governments that we contacted were also using IT to facilitate regulatory management. The applications these state government organizations identified as innovative, like their federal counterparts, represented the range of regulatory management functions. Several of the state innovations were interactive systems that allowed regulated entities to identify their regulatory responsibilities and sometimes to complete the related transactions. One of the innovations was proactive, notifying users of opportunities to participate in rulemaking. States also used other less interactive or proactive IT-based applications to improve traditional management approaches. Agencies in four of the states we contacted (Florida, Texas, Virginia, and the State of Washington) have developed IT-based regulatory management systems with extensive interactive capabilities. Most of these systems help regulated entities comply with state requirements. 
For example, Florida’s Department of Environmental Protection (DEP) has a “One-Stop Permit Registry” (OSPREY) that allows users to obtain information about all environmental permits administered by the department. (See http://osprey.dep.state.fl.us.) According to a DEP official, OSPREY was developed as a result of customer comments on how difficult it was to (1) identify the right DEP contact, (2) determine the appropriate permits that had to be filed, (3) determine where the permits had to be filed, and (4) identify the responsible officials for permit approval. To determine what permits a particular activity requires, users first select the Florida county in which the activity will be performed and then identify the type of activity involved (e.g., home building, construction of a boat launch, or road building). OSPREY then asks a series of questions, culminating in a “Consultation Summary” that lists applicable permit requirements and contact points and provides links to the application forms. The site also contains links to help users determine the fees associated with an application and a link to allow users to check on the status of a submitted application. Although the department has not developed any performance measures for the system, a DEP official said that customer feedback has been very positive. Another interactive state application is the Texas Railroad Commission’s Electronic Compliance and Approval Process (ECAP) system. (See http://www.rrc.state.tx.us/ecap.) ECAP streamlines regulatory requirements by implementing a totally paperless workflow that allows users to obtain oil or gas well permits on-line and captures, stores, and transmits oil or gas well permitting information electronically. The system encompasses all aspects of permit requirements, including security/authentication, fee collection, data reuse, and electronic transmission of required attachments. ECAP users can file the appropriate forms, pay the associated fees, and submit the required attachments on-line. Once the commission receives the information, it processes the forms and issues the permit. The industry information is stored by the system so that the user needs to enter facility data only once. ECAP is being implemented through a 3-phase pilot project that will provide the ability to electronically file, process, and approve a drilling permit application. According to commission officials, the first phase of the project has been completed and the second phase will soon be released. The last phase, scheduled for implementation in September 2001, includes data entry of a complex permit, complete integration with existing mainframe computer systems, comprehensive on-line permit approval, and concurrent update of the commission’s two database environments. Industry estimates that ECAP will save companies between $3 million and $6 million annually for drilling permits alone. By 2005, when ECAP is expanded to include all permits and performance reports, commission officials estimate that the savings to industry will be over $17 million per year, and that the savings for the Railroad Commission could be up to $1 million per year. A third example of an interactive state system is the Virginia Department of Motor Vehicles’ (DMV) Virtual Customer Service Center. (See http://www.dmv.state.va.us.) Through this system, users are able, among other things, to renew licenses and vehicle registrations on-line. 
A user’s information (including digital photographs) is stored within the system, thereby allowing on-line renewals. The Virtual Customer Service Center started by allowing customers to view the catalogue of over 150 different license plates. A customer was able to access the site and determine if a particular personalized message was still available and, if it was, to reserve that message for 90 days. To go further and allow customers to avoid waiting in line, the DMV was able to modify its IT architecture to support Internet-based applications. This involved reviewing the various activities performed by DMV personnel for the different functions and then writing a program that could emulate the various steps. As a result, the Virginia DMV was able to provide many of the functions performed at the various customer service centers via the Internet. The State of Washington’s Department of Labor and Industries has developed several IT-based “assistance network” systems that are interactive and facilitate compliance with state rules and regulations. Users can access these systems either through the department’s Web site (http://www.lni.wa.gov) or through a statewide portal called “Access Washington” (http://access.wa.gov), which links all state agencies and provides the public with a common access point to state government information and services. The department provides an assistance network that enables users to obtain regulatory information and complete transactions. For example, the systems (1) allow users to determine what labor-related rules are applicable to their operations, (2) provide computer-based training to help employers comply with various labor rules, and (3) offer a training management system to track whether employees are fulfilling training requirements. The department also maintains a database that the public can access to identify registered or certified contractors and to report unregistered contractors. The department is also implementing a site that will allow employers to pay industrial insurance premiums on-line. Officials plan to expand the site to allow employers to make other required payments as well. Both the Departments of Ecology and Labor and Industries in the State of Washington have developed proactive systems that notify customers by E-mail of upcoming regulatory actions, including the publication of proposed rules, rulemaking hearings, the issuance of interpretive statements, and semiannual regulatory agenda updates. The departments’ goal in developing these systems was to provide the public with accurate, current, user-friendly, and timely information related to their rulemaking activities by informing users of new rules or revisions. According to a 1999 Department of Ecology report, the public downloaded more than 3,000 rules per month in the system’s first year of operation, saving the department about $132,000 in printing and mailing costs. 
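The proactive notification systems described above follow a familiar subscription pattern: maintain a list of subscribers by topic and send each one an E-mail when a relevant regulatory action is published. The Python sketch below outlines that pattern using only the standard library. The topics, addresses, and mail server are placeholders, and the sketch is not the Washington agencies' actual implementation.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical subscriber list, grouped by the regulatory topics users signed up for.
SUBSCRIBERS = {
    "water-quality": ["[email protected]", "[email protected]"],
    "workplace-safety": ["[email protected]"],
}


def notify_subscribers(topic, subject, body, smtp_host="localhost"):
    """Send a plain-text notice to everyone subscribed to the given topic.

    Returns the number of notices sent. Assumes a reachable SMTP server at
    smtp_host; in practice the agency's mail infrastructure would be used.
    """
    recipients = SUBSCRIBERS.get(topic, [])
    if not recipients:
        return 0
    with smtplib.SMTP(smtp_host) as server:
        for address in recipients:
            msg = EmailMessage()
            msg["From"] = "[email protected]"
            msg["To"] = address
            msg["Subject"] = subject
            msg.set_content(body)
            server.send_message(msg)
    return len(recipients)


if __name__ == "__main__":
    # Illustrative call; this will only succeed if a local SMTP server is running.
    sent = notify_subscribers(
        "water-quality",
        "Proposed rule published",
        "A proposed water-quality rule is open for comment; a hearing is scheduled next month.",
    )
    print(f"Notices sent: {sent}")
```

The savings the Department of Ecology reported come largely from this design choice: once the subscriber list exists, each additional notice costs almost nothing compared with printing and mailing.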
The State of Washington’s Departments of Ecology (http://www.ecy.wa.gov/) and Labor and Industries have also developed systems that facilitate the dissemination of information to the public. The Department of Ecology’s index of rules, regulations, and related documents is located at one Web site so users can “one-stop shop” for information. This index was created to give the public a crosswalk between the department’s various rules and publications. The Department of Labor and Industries also has an index of rules and regulations grouped by program area to provide the public with easier access to the information. Washington’s Department of Ecology is in the process of implementing an on-line comment site that will allow the public to submit comments electronically and have questions addressed by department personnel. Users wishing to comment on a proposed rule will be able to visit the department’s Web site and use an on-line form to submit written comments. The system will generate an explanatory statement that combines all comments and responses on a particular proposed rule. User-specific information will be maintained by the system, thereby making it easier for an individual to comment multiple times on various rules and only submit personal information once. Representatives from the nongovernmental organizations who participated in our review recognized and supported federal regulatory agencies’ current efforts to use IT to improve their regulatory management processes. However, the representatives also said that federal agencies could improve their performance in this area. Specifically, they suggested that agencies improve both the content of and access to on-line information, more broadly and consistently use some existing applications, and adopt some new applications. The representatives also expressed concern that the use of IT-based applications in regulatory management could (1) make individuals and businesses more vulnerable to scrutiny and federal enforcement actions and (2) disadvantage those individuals and businesses with limited technical resources. The representatives of nongovernmental organizations affected by federal regulations recognized that federal agencies’ Web sites already provide regulated entities and others with a great deal of useful information. However, several of the representatives said that these sites vary considerably in terms of their format, content, and ease of navigability. They also said some sites provide a clear link on their home pages to regulatory information, but, on other sites, users must search for the same types of information. One of the representatives said that some agencies are not providing the public with some types of information that could be useful, and that the agencies could do more to disseminate that information electronically to the public and other agencies. Most of the representatives agreed that agencies should provide as much regulatory information as possible on-line, including information developed during the rulemaking process (e.g., economic analyses, hearing transcripts, and comments from the public) and other types of information (e.g., agencies’ agendas of upcoming regulatory actions). Several representatives specifically mentioned the DOT docket management system as a model that could be followed by other agencies. One representative suggested that OMB implement a DOT-type docket system itself and become the model or standard system that other agencies could emulate. 
Several of the representatives suggested that other innovative regulatory management applications that certain agencies are beginning to implement also should be used more broadly. For example, several representatives suggested that more agencies allow the public to comment on proposed rules electronically and make all of the comments the agencies received on a proposed rule available on-line. One person said permitting electronic comments should allow the agencies to save money because fewer staff would be needed to handle the comments received. Other representatives suggested wider use of proactive electronic notification systems (e.g., list servers) to increase the dialog between regulated entities and the public and to encourage more people to get involved in regulatory issues. Another representative suggested that agencies make greater use of video technology and make their public hearings available, either live or on tape, through the agencies’ Web sites. According to the representative, this approach would enable more people to participate in the process, particularly those who were in remote locations or otherwise unable to attend a public hearing on a rule in which they were interested. Some of the applications that the representatives suggested do not, to our knowledge, currently exist. For example, one representative suggested that agencies could develop a “rule cost calculator” that would include all of the costs of complying with a rule. By entering pertinent information about its own business (e.g., type of business or number of employees), a regulated entity could calculate the potential cost of the proposed rule to its business. Others said that they would like to be able to go to one place and find out all applicable federal regulations. One representative said that this kind of one-stop shopping is particularly appealing to small businesses. Although these representatives of affected communities generally encouraged agencies’ efforts to use IT in regulatory management, some also indicated that regulated entities are sometimes nervous about how an agency’s use of technology may affect them. For example, they said regulated entities are concerned that they may be opening themselves up to additional scrutiny and enforcement actions as a result of the electronic trail they might leave if they access or query a regulator’s Web site for information. They said this was of particular concern to regulated entities that must provide private or proprietary information about their business in order for the agency to electronically develop a list of applicable regulations. In addition, some of the representatives also expressed concerns about the “digital divide”—that is, differences within the regulated community in terms of their technological capabilities. One representative said that some regulated entities, particularly small businesses, do not have the latest technological equipment or the financial or staff resources available to take advantage of the IT-based applications that some of the agencies are developing. Therefore, the representative suggested that it would be best for agencies to make the use of IT for regulatory compliance purposes voluntary, and to continue to allow businesses to comply with regulations and obtain information using traditional approaches. 
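The "rule cost calculator" suggested by one representative above does not exist today, but the idea is straightforward to sketch: combine a rule's estimated per-entity cost components with a few facts about the business. The Python example below is purely hypothetical; the cost categories, exemption threshold, and dollar figures are invented for illustration and do not come from any agency analysis.

```python
# Hypothetical cost model for a single proposed rule. In practice the figures
# would have to come from the issuing agency's economic analysis of the rule.
RULE_COSTS = {
    "one_time_equipment": 5000.00,              # fixed cost per establishment
    "annual_recordkeeping_per_employee": 45.00,
    "annual_training_per_employee": 120.00,
    "exempt_below_employees": 10,               # assumed small-entity exemption
}


def estimate_compliance_cost(business_type, employees, costs=RULE_COSTS):
    """Return a rough first-year cost estimate for one business."""
    if employees < costs["exempt_below_employees"]:
        return {"business_type": business_type, "covered": False,
                "first_year_cost": 0.0}
    recurring = employees * (costs["annual_recordkeeping_per_employee"]
                             + costs["annual_training_per_employee"])
    total = costs["one_time_equipment"] + recurring
    return {"business_type": business_type, "covered": True,
            "first_year_cost": round(total, 2)}


if __name__ == "__main__":
    print(estimate_compliance_cost("auto repair shop", employees=25))
```

Even in this simplified form, the sketch shows why the representatives found the idea appealing for small businesses: the entity supplies only a few facts about itself, and the calculator applies the rule's cost assumptions on its behalf.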
Federal and state agency officials and representatives of nongovernmental organizations identified a number of factors that they believed affect the adoption and diffusion of IT-based approaches in regulatory management: (1) top-level leadership commitment/support, (2) adequate financial resources and human capital, (3) legislative and executive branch initiatives, (4) internal and external partnerships with critical stakeholders, (5) reengineering of existing business processes, and (6) the development of a communication infrastructure. Federal and state agency officials and staff said that the commitment and support of top-level leaders are critical to the successful development and implementation of IT-based systems to improve regulatory management. Federal officials said that leadership commitment is very important in overcoming resistance to changing the traditional ways that agencies conduct business. Officials in DOL said that projects could languish without commitment from the top. Therefore, before beginning to develop an elaws advisor in a new DOL agency, they require that the agency commit the resources—both dollars and people—to ensure successful development. In some cases, the federal officials indicated that leadership support could be positively influenced by factors outside of the agencies. For example, several officials pointed to the importance of presidential initiatives (e.g., the Clinton administration’s E-gov initiative) and congressional mandates (e.g., GPEA) in focusing the agencies’ attention and in obtaining the commitment and resources needed to carry out initiatives. An HHS official said that leadership support can also be stimulated by a few pioneers in the agency who have a vision and can sell the idea to agency management. Some of the officials also indicated that leadership support can be engendered by success. For example, a DOT official said that obtaining early demonstrable savings helped obtain top-level support and widespread interest in the Department’s docket management system. On the other hand, federal officials also said that frequent changes in top agency leadership could make it difficult to sustain commitment to specific projects. Therefore, they said, agencies need to devise ways to get long-term commitment to proposed innovations that transcends changes in leadership. All of the state officials and staff that we interviewed also cited the support of top-level management as a key factor in their ability to develop and implement innovative systems. State officials said that the support of department heads, state CIOs, and/or the states’ governors allowed them to consider new ways to address issues raised by customers and develop mechanisms to respond to their concerns. For example, officials in both the State of Washington and the Commonwealth of Virginia cited the enthusiastic support and leadership of their respective governors. In Washington, the governor issued an executive order directing all state agencies to develop and implement technological approaches to regulatory management. In Virginia, the governor made the application of technology to all governmental activities a priority. Federal and state officials and staff also said that leadership commitment involves not only giving initiatives priority and visibility, but also committing appropriate financial resources and human capital to implement IT-based regulatory management applications. 
Federal officials said that the lack of adequate resources has been the biggest obstacle to implementing innovative IT-based approaches in their respective agencies. However, they recognized that agency leaders must allocate increasingly scarce resources among competing priorities both across programmatic areas and among IT initiatives. To resolve this issue, several federal officials said that top leadership commitment could help to forge partnerships among program areas and help to obtain the financial resources needed to implement new programs. For example, several program managers at EPA were able to use financial resources provided to the Office of Environmental Information to help develop and implement the agency’s electronic reporting initiative. Top-level commitment can also help ensure that adequate human capital is invested in developing these IT-based approaches. One official in the State of Washington emphasized how important it was to the success of the project that leaders in the Department of Ecology dedicated the people that developers needed to complete their work. The official said that, without adequate human capital investment, the department would not have had the right mixture of skills necessary for the development of innovative applications to facilitate regulatory management. Several of the federal and state officials said that legislative and executive branch IT initiatives had acted as catalysts in developing IT-based approaches to facilitate regulatory management. As previously mentioned, several of the federal officials that we contacted said that the passage of GPEA had helped them to obtain the top leadership commitment needed to support IT innovations in their agencies. They also said the legislation had helped the agencies develop clear schedules for moving toward the use of IT in regulatory management. OMB officials also said that GPEA had served as an impetus for developing new IT-based approaches to regulatory management. In the State of Washington, agency officials credited Executive Order 97-02 as the impetus for many of the IT-based developments occurring in state agencies. The executive order required all state agencies to review their reporting requirements. The goal of this review was to develop reporting requirements that are coordinated with other state agencies requiring similar information, that are economical and understandable, and that rely on the electronic transfer of information. Federal and state officials and staff said that creating appropriate partnerships—intra-agency, interagency, and/or public-private—was also critical in developing systems that facilitated regulatory management. They said that intra-agency partnerships helped the agencies eliminate internal “stovepipes” that were a barrier to developing and implementing innovative IT approaches. Federal officials particularly cited the need for internal partnerships between IT and program officials for successful development and implementation of IT projects. They said that developing IT-based management programs is often considered strictly an IT issue, and that program officials (in this case, regulatory officials) often do not get involved with developing those applications for their areas of responsibility. However, federal and state officials and staff said it is essential to involve the people familiar with current regulatory processes and issues in each stage of planning, developing, and implementing new IT applications in their areas of expertise. 
For example, DOL officials told us that a standard part of the development of a new elaws application is identifying and involving appropriate regulatory managers, subject matter experts, consultants, and lawyers who are knowledgeable about the program. Without this kind of partnership between IT and program office personnel, they said, agencies are likely to automate inefficient processes that will not meet new programmatic needs. Federal officials also cited the importance of external partnerships in developing and disseminating innovative regulatory management systems. For example, in developing EPA’s electronic reporting initiative, the agency established partnerships with the states through the National Governors Association and the Environmental Council of States. As a result of these partnerships, EPA was able to leverage financial and human capital resources to develop the National Environmental Information Exchange Network. In addition, these partnerships helped ensure that all stakeholders shared information and provided input into the development of system requirements. Although agency officials said that the development of these partnerships had been a huge task, they believed that the final system would yield the results they expected—reduced regulatory burden and consistent data collection and analysis. State officials also emphasized the benefits of internal and external partnerships. In Florida, the Department of Environmental Protection formed a working group consisting of representatives from other departmental offices to assist in developing OSPREY. Virginia’s DMV was able to develop its system through the cooperation of other departmental officials who not only participated in the development process, but also played a key role in the testing and verification of the system before it was released to the public. Texas officials cited the importance of partnerships with the private sector. In Texas, state officials formed a public/private partnership with the regulated community as well as with the federal Department of Energy. Texas officials said that stakeholders’ participation in the developmental process ensured that (1) their issues were addressed and (2) that they would assume ownership and use of the system that was developed. Several of the federal and state agency officials said that comprehensive reengineering of their business processes before developing new systems enabled them to develop more innovative IT-based regulatory management processes. As a result of reengineering, they said they not only increased the efficiency of the selected processes but also eliminated processes that no longer made sense and introduced new ways of relating to the regulated community and the public. However, federal officials cautioned that reengineering their regulatory processes is not always possible because agencies may be legally prohibited from making substantive changes. Federal officials also said it is sometimes important to implement in segments, rather than undertaking “grand designs.” For example, DOL staff involved in the development of the elaws advisors emphasized the importance of modular development within their formal, structured development model. They said that developing and testing key pieces that they showed to program management helped maintain support for the program. 
A DOT official also told us that developing the basic capability of the docket management system and then gradually adding new features and capabilities as additional resources became available has worked well for the Department. Also, some officials in Washington and Virginia said they believed it is better to seize opportunities and move ahead without substantive reengineering, particularly in developing interactive Internet-based applications. As one official said, “it is better to beg for forgiveness than to ask for permission.” State officials also indicated that a well-developed communication infrastructure was important to facilitate the adoption and diffusion of these innovative regulatory management systems. In some cases, the governors in those states were critical to the establishment of that infrastructure. For example, the Governor of the State of Washington created a Subcabinet on Management Improvement and Results that was charged with overseeing the regulatory process and ensuring that the state government “pursues a fair, effective, and sensible regulatory strategy.” The subcabinet’s responsibilities included making recommendations for statutory, administrative, and organizational changes as well as special projects that result in regulatory improvements in state government. In Virginia, the governor appointed a Secretary of Technology who presides as chairman of the governor’s Council on Technology Services. The council consists of 23 representatives from state and local government agencies and institutions and is charged with implementing electronic government in various areas, such as procurement, services, communications, and computing architecture, and coordinating technology-based systems at all levels of government. The governor of Florida established a similar type of interagency working group that facilitated information sharing and served as a catalyst for partnerships between respective state agencies. State officials said that the availability of organizations that serve as clearinghouses of information about technological applications in other states helped them not only in developing their own systems, but also in disseminating information about those systems to other states. Officials cited several organizations, such as the National Association of State Information Resource Executives, the National Association of State Chief Administrators, the National Governors Association, and the Environmental Council of States, as having served as information conduits among states. They said these organizations sponsor conferences, newsletters, and databases that members may use as mechanisms to inform other entities about the development of systems to address various regulatory management requirements. State officials also said that involving segments of the regulated community provided valuable insights into the process and ensured stakeholder ownership of the resulting system. Officials in Texas, Washington, and Florida said their states had involved members of the regulated community during the development of their systems, and, in each case, the states benefited from the collaboration. In Texas, the oil and gas industries not only provided input into the development process but were also a valuable funding source for the system. In Washington and Florida, members of the regulated community participated in developing the system requirements. 
This participation in the development process facilitated the implementation of the system since this key group of stakeholders perceived themselves as part of the system, not as having the system imposed on them. Federal regulatory officials and staff said they were aware of some, but not all, of the IT-based applications that other agencies, or even other offices within their own agencies, were using to improve regulatory management. They said that most of their knowledge about other agencies’ practices came about through ad hoc and informal mechanisms, such as brown-bag lunches by career officials assisting the Regulatory Working Group and meetings sponsored by GAO and others. They told us that there was a need for some type of communication infrastructure to promote more consistent and structured sharing of information about IT innovations to facilitate the diffusion of those innovations across agencies. They said that a new organization was not needed, and that they preferred to use existing groups to share information. Several federal agency officials also recommended greater use of IT to assist in disseminating information on what other agencies are doing. Some thought that there should be a governmentwide portal focused on regulatory issues or a section of a portal, such as FirstGov, that would be a single point of entry for regulatory agencies as well as the public. One official suggested that there should be an inventory of best practices in the use of IT in federal regulatory management available on-line. There was widespread interest among federal officials and staff in several types of best practices, including electronic dockets, new options for developing and implementing electronic reporting, and certain interactive models that enable agencies to change the way they interact with the public. Federal officials also said that OMB needed to play a role in facilitating communication regarding this issue. For example, one agency official suggested that OMB devote one meeting each year to discussing innovative applications of IT in regulatory management. OMB officials noted that the electronic government committee of the CIO Council has more than 1,000 best practices in its inventory of innovative IT applications, and that this information would be available on the Internet soon. Although regulatory management applications are not separately identified, these applications could be highlighted for use in regulatory management. The OMB officials also noted that the agency had taken a number of steps to encourage the use of IT in regulatory management, and that the CIO Council and the National Association of State Information Resource Executives were setting up a working structure for continuing discussion of IT issues between state and federal agencies. Nevertheless, they recognized that more could be done to improve communications among the agencies. For example, they said OMB could encourage interagency forums on the topic of IT in regulatory management and could highlight regulatory issues as part of the agency’s oversight of the implementation of GPEA. Federal and state regulatory agencies are already making extensive use of IT to address traditional regulatory problems and improve regulatory management. However, they are just beginning to realize the full capabilities of IT and the Internet to develop interactive regulatory management practices and facilitate interagency and intergovernmental uses. 
Our work during this review and during our review last year indicates that innovative IT-based approaches to regulatory management have the potential to increase the amount and quality of public participation in rulemaking, increase regulatory transparency, reduce burden on regulated entities and help them understand their responsibilities, save regulatory agencies money, and improve the quality of agencies’ regulatory programs. Most of the agencies that we contacted cited benefits of their innovative IT applications, although few had performance data yet that clearly demonstrated the effect of the innovations on the agencies’ efficiency or effectiveness, burden reduction, or other regulatory management outcomes. Such performance data would be useful as other agencies try to decide which IT-based applications to adopt or adapt in their own agencies. A key factor in encouraging greater use of IT-based innovations in regulatory management is, ironically, information. Officials in federal regulatory agencies were sometimes unaware of the innovative uses of IT to improve regulatory management in other agencies, and sometimes in other parts of their own agencies. As a result, federal agencies may either not adopt innovative approaches that could be useful to them or reinvent the wheel as they develop their own approaches in those areas. Federal regulatory agency officials told us that there is a need for better communication and sharing of information about innovative IT applications and suggested that existing organizations, such as the CIO Council and the Regulatory Working Group, be used to facilitate information sharing. Representatives of the nongovernmental organizations and officials and staff in the regulatory agencies themselves also called for greater consistency across agencies’ IT-based regulatory management systems. However, both parties cautioned against mandatory conformity. As agency officials told us during our first review, agencies may need to have somewhat different systems because of differences in their operating environments. Also, common IT-based approaches may be more appropriate for some aspects of regulatory management than others. For example, federal rulemaking processes are somewhat similar across federal agencies, so common approaches regarding that aspect of regulatory management may be more appropriate than in other, more idiosyncratic parts of the process (e.g., enforcement or licensing requirements). Specific options in the rulemaking area could include common approaches for accepting electronic comments on proposed rules, similarly structured electronic docketing systems, and tracking systems that allow agencies to understand the causes of delays in their rulemaking processes. Compliance assistance functions similar to DOL’s elaws and OSHA’s expert advisor programs appear to have broad applicability. OIRA is responsible for providing guidance and oversight for both IT and regulatory issues. 
The OIRA Administrator sits on the CIO Council, which Executive Order 13011 says should allow agencies to “share experiences, ideas, and promising practices.” The OIRA Administrator chairs the Regulatory Working Group, which Executive Order 12866 says “shall serve as a forum to assist agencies in identifying and analyzing important regulatory issues,” including “the development of innovative regulatory techniques.” Although OIRA has taken some steps to encourage the use of IT in regulatory agencies, we believe that it could do more to encourage information sharing among the agencies on IT innovations. For example, OIRA could encourage additional forums on the use of IT in regulatory management, devote a portion of its Web site to innovative IT applications, or work with the CIO Council to encourage dialogue between the regulatory and IT elements of agencies’ workforces. It could also make the use of IT in regulatory management a specific focus in its oversight of agencies’ implementation of GPEA. We also believe that OIRA can work with the agencies to identify specific types of innovative IT-based approaches that multiple agencies could use to improve regulatory management. If agencies implement common approaches for regulatory functions that are used in multiple agencies, their regulatory management systems can begin to have a more consistent “look and feel,” which some nongovernmental and federal representatives believed is needed. We recommend that the OIRA Administrator develop a systematic process by which federal agencies can share information regarding the use of innovative IT-based applications in regulatory management. We also recommend that the Administrator work with federal agencies to identify types of innovative IT-based approaches that multiple agencies could use to improve regulatory management. On December 20, 2000, we sent a draft of this report to the Director of OMB for his review and comment. OMB officials told us that OMB had no comments on the draft report. We also provided federal and state agency officials with the relevant draft report sections attributed to them to ensure that we correctly characterized their systems and comments. These officials provided several technical corrections, which we incorporated as appropriate. As we arranged with your offices, unless you publicly announce this report’s contents earlier, we plan no further distribution of it until 30 days after the date of this letter. We will then send copies to Representative Dan Burton, Chairman of the House Committee on Government Reform. We will also provide copies to the Honorable Mitchell E. Daniels, Jr., Director, OMB; the Honorable Ann Veneman, Secretary of Agriculture; the Honorable Tommy Thompson, Secretary of Health and Human Services; the Honorable Elaine Chao, Secretary of Labor; the Honorable Norman Y. Mineta, Secretary of Transportation; and the Honorable Christine Todd Whitman, Administrator, EPA. We will also make copies available to others and post this report on GAO’s home page at www.gao.gov. If you have any questions regarding this report, please contact me or Curtis Copeland at (202) 512-6806. Key contributors to this assignment were Elizabeth Powell, Joseph Santiago, and Ellen Grady. 
Under the AFDC program, many states received waivers from federal rules to strengthen work requirements for adults. In addition, some states began experiments with time limits on receiving cash assistance. Under TANF, states generally must impose work and other program requirements on most adults receiving aid and, when an adult does not comply, reduce a family’s benefit or, at state option, terminate the benefit entirely. Moreover, families receiving TANF face a lifetime limit of 5 years, or less at state option, of federal assistance. These reforms represent significant departures from previous state and federal policies for needy families with children and have been accompanied by large declines in the number of families receiving cash assistance, from an all-time high in 1994 of about 5 million families to just over 3 million as of June 1998. While numerous efforts are planned or under way to assess welfare reform nationally, currently little information is available on the status of families who have left welfare. Although families have always left welfare for a variety of reasons, including increased household income due to employment or marriage, once their cases were closed and the families no longer received assistance, they usually were not routinely tracked or monitored. However, in the new environment in which eligible needy families are no longer entitled to cash assistance and the emphasis is on moving families off welfare into employment, concern about the condition of families no longer receiving aid has increased. The Congress and others are interested in the employment status of former welfare recipients, changes in family composition resulting from marriage and pregnancy, and the overall well-being of these families and their children. While the Congress has earmarked $5 million for HHS to study the outcomes of welfare reform and has taken other steps to monitor the status of poor families as discussed below, states are not federally required to report on the condition of former welfare families. States’ greater responsibility for welfare programs under PRWORA has increased states’ need for information to support program management and decision-making, as well as to respond to the information requests from a variety of interested parties, such as service providers, advocacy groups, and the media. For example, some state legislatures are requiring state welfare agencies to report on outcomes from their reformed welfare programs, including the status of former welfare families. Consequently, many states have begun to track former welfare families. The data these states are reporting are the major source of information currently available on the condition of families who have left welfare. Only those families who actually become welfare recipients and then leave the rolls are included in most state tracking studies. However, the changes in welfare can also have the effect of decreasing the number of families coming onto the welfare rolls. For example, many states have diversion strategies designed to prevent families from coming onto the welfare rolls by providing a needed service, such as child care or transportation, providing a one-time cash payment to overcome a barrier to employment, or requiring that applicants conduct a job search before receiving cash assistance. 
As a result, a comprehensive assessment of the postreform status of poor families with children would include information on TANF-eligible families who did not become welfare recipients as well as former welfare recipients. To provide information on the postreform status of all low-income families, not just former welfare families, the U.S. Census Bureau at the direction of the Congress is conducting a longitudinal survey of a nationally representative sample of families, paying particular attention to eligibility and participation in welfare programs, employment, earnings, out-of-wedlock births, and adult and child well-being. Data from this survey, called the Survey of Program Dynamics, will help researchers and policymakers understand the impact of welfare reform on the well-being of low-income families and children by providing information on whether welfare recipients are finding jobs, what their earnings are, and what types of support they need to make the transition from welfare to work. In addition, the Urban Institute is conducting a multiyear project monitoring program changes and fiscal developments, along with changes in the well-being of children and families. As part of this project, the Urban Institute has surveyed nearly 50,000 people to obtain comprehensive information on the well-being of adults and children as welfare reform is being implemented in the states. A second survey is planned for 1999. Full results from the Census Bureau and Urban Institute surveys may not be available until the year 2000. In addition, a plethora of studies are under way that will be providing information in the future on various aspects of welfare reform. Seventeen states have collected data and reported on the status of some former welfare families in the key areas of economic status, family composition, or family and child well-being. The state studies differed in important ways, such as categories of families tracked, the length of time families were tracked, and the sources of follow-up data. Some of the studies presented no information on a substantial portion of the sample families, limiting the usefulness of these studies for drawing conclusions about the status of most former welfare families in the state. We determined that studies in 7 of the 17 states had enough data on a sample of families who had left welfare to generalize sample findings to the population of former welfare families from which the sample was drawn. We identified a total of 18 state-sponsored or -conducted studies in 17 states—2 studies in Wisconsin and 1 in each of the other states—that reported on the status of families who left welfare in 1995 or later. The reports contain a broad range of information on economic status, family composition, and family and child well-being. Figure 1 summarizes the kinds of information reported in each of the 17 states and classifies the information according to the three major areas of interest. All of the studies reported information on economic status, all but one reported on family and child well-being, and most reported some information on family composition. Overall, 15 of the 17 states reported information in all three areas. (App. II lists the 17 states and their study reports.) Because states generally initiated tracking studies to meet their own information needs, the 18 studies in the 17 states differed in a number of important ways, including the categories of families tracked, geographic coverage, the time periods covered, and the timing and frequency of follow-up. 
The studies also differed in the sources of data used for tracking families who had left welfare. Table 1 summarizes key information on the studies, including the categories of families studied, the time periods involved, the frequency of follow-up, the time between leaving and follow-up, and the method of data collection. Fourteen of the studies reported data on a statewide sample of families who left welfare for a range of reasons, and one study reported on a sample of families who left welfare in the state’s major city. The remaining three studies focused primarily on families who left welfare because of an adult recipient’s failure to comply with program requirements. These three studies were conducted, at least in part, because of concerns about the potential impact on family well-being of the loss of the entire cash benefit, rather than just a reduction in benefits, as was typically required under AFDC for noncompliance. None of the studies reported specifically on families who had left welfare because of time limits. While there is great interest in the status of these families, in most states few families have reached their time limits, and in states where they have, few families have lost benefits as a result of the time limits. However, as states’ programs mature and more families reach the federal 5-year time limit on TANF benefits or state-established time limits of shorter duration, more of these families will be included in the tracking studies. The studies also differed in the time period during which families left welfare and the length of time between the family’s exit and the study follow-up. The time at which states initiated a study of families who had left welfare depended in part upon when states’ reforms were implemented and when they needed information on the status of families affected by the reforms. The time periods of the 18 state studies ranged from as early as 1995 (before federal welfare reform) to as late as 1998 (after TANF was implemented in most states). The amount of time between leaving welfare and the follow-up also varied, ranging from 1 to 24 months. There were also differences in the frequency of follow-up. At least one state, Maryland, has been tracking families who have left welfare for a number of years and plans to track monthly samples of families for 2 years after they leave the rolls, whereas other states planned a one-time follow-up effort. In addition, the studies used different sources of data to locate and track families. The Maryland study and the first Wisconsin study relied solely on administrative data, while other states’ studies were based on surveys of the former recipients using in-home visits, the telephone, or the mail. Some states’ studies used both survey and administrative data. Administrative data are case-specific information from the files of various programs, services, or agencies, including state unemployment insurance, food stamps, Medicaid, child welfare, child support enforcement, and criminal and education agencies. Since administrative data are limited to data collected for program management purposes, they may not be as focused on the questions of interest as are the survey data. On the other hand, administrative data may be less expensive to collect and more accurate than the self-reported data and can more readily than a survey provide information on large numbers of individuals. 
We determined that eight tracking studies, covering seven states, (1) were designed to include most families who left welfare in the state at the time of the study and (2) had sufficient data on the sample of families tracked for the sample to be considered representative of families studied. These studies were designed to include families who left welfare for a range of reasons, although the studies varied in the specific category of families covered. For example, the Maryland study included all families who had left welfare, while the South Carolina study included only families with a household member required to seek employment who subsequently left welfare and had not returned at the time of follow-up. Although none of the 18 studies were able to locate all families included in the samples to be tracked, eight studies had sufficient data on a sample of families to conclude that the sample represented the population from which it was taken. The nonresponse rates ranged from 15 to 88 percent for the state surveys. For the two studies using administrative data only, information about 8 percent and 18 percent of the families being tracked could not be found in the data being used. (See app. I for the proportion of families located in all 18 studies.) Missing information for some members of a sample raises concerns about the representativeness of the remaining sample and whether findings can be generalized to the population from which the sample was drawn. Families who left welfare and subsequently responded to a survey and families about whom information was available in administrative data may be different in important ways from families for whom no information is available; thus, results based on such families are not generalizable to the entire population of families who left welfare in a state. Some policymakers and researchers are concerned that families who do not answer surveys or whose current status is no longer reflected in administrative data might be worse off than families for whom there are data. While the families who were not located may have fared quite well in terms of employment or family formation, some missing families may be experiencing hardship. For the purpose of summarizing findings, we included only those studies that either had data on at least 70 percent of the sample of families from the population of interest in the state or included a nonresponse analysis that showed no important differences between respondents and nonrespondents. The seven states that we determined to have studies with results generalizable to their welfare populations are Indiana, Maryland, Oklahoma, South Carolina, Tennessee, Washington, and Wisconsin. We estimated that these seven states accounted for about 8 percent of the number of families who left welfare nationwide between October 1993 and June 1997. Figure 2 highlights these 7 states, along with the 10 other states that reported information on former welfare recipients. Studies in the seven states had either (1) data on a high enough percentage of the sample to reasonably generalize the results to the population from which the sample was drawn or (2) an analysis showing that the nonrespondents had some of the same key characteristics as the respondents, providing greater assurance that the results from the limited sample could be generalized to the population from which the sample was drawn. (See app. I for a more detailed discussion of our assessment.)
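The 70-percent screen described above is, in effect, a simple decision rule. The following Python sketch is a hypothetical illustration of that rule, not a tool GAO used; the study names and figures in it are invented for the example.

```python
# Hypothetical sketch of the generalizability screen described above: a study's
# results are treated as generalizable to the state's study population if
# (a) data were obtained on at least 70 percent of the sample, or
# (b) a nonresponse analysis found no important differences between families
#     with and without data.

RESPONSE_THRESHOLD = 0.70

def is_generalizable(response_rate, nonresponse_analysis_ok=False):
    """Apply the screen to a single study."""
    return response_rate >= RESPONSE_THRESHOLD or nonresponse_analysis_ok

# Invented example records for illustration only.
studies = [
    {"name": "State A survey", "response_rate": 0.82, "nonresponse_analysis_ok": False},
    {"name": "State B survey", "response_rate": 0.55, "nonresponse_analysis_ok": True},
    {"name": "State C survey", "response_rate": 0.12, "nonresponse_analysis_ok": False},
]

for study in studies:
    verdict = is_generalizable(study["response_rate"], study["nonresponse_analysis_ok"])
    print(f"{study['name']}: {'generalizable' if verdict else 'not generalizable'}")
```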
Because the seven states’ studies differ in several ways, as discussed above, the results are not completely comparable across states. However, the studies provide an indication of the status of families who had left welfare in these states at the time of the studies and, to the extent that the results are consistent, suggest a pattern of what is happening to these families. The studies had consistent findings on employment and earnings. Most former welfare families had an adult who was or had been employed since leaving welfare. Although the studies indicated that former recipients often worked at low-wage jobs, little information was available on families’ total household incomes, which could include child support or earnings from a second worker. Some studies also reported that significant proportions of the families had returned to welfare. In general, the studies provided little information on family and child well-being. Employment rates ranged from 61 to 87 percent for adults in the families who left welfare in the seven states; however, these employment rates were measured in different ways. Studies measuring employment at the time of follow-up reported employment rates from 61 to 71 percent. Studies measuring whether an adult in a family had ever been employed since leaving welfare reported employment rates from 63 to 87 percent. In the four studies reporting both employment measures, the percentage employed at some time since leaving welfare was considerably higher than the percentage reporting employment at the time of follow-up. (Table 2 summarizes employment and earnings data in seven states.) These employment rates generally exclude families who returned to welfare, which can be a substantial portion of the families who leave welfare. In the three studies for which such data were available, the percentage of the families who initially left welfare and then returned to the rolls ranged from 19 percent after 3 months in Maryland to 30 percent after 15 months in Wisconsin. Removing families who return to welfare from the employment rate calculations results in higher employment rates than when they are included, since many former recipients who return to the welfare rolls are not employed. While all eight studies reported some information on former recipients’ earnings or wages, the studies did not provide a complete story on hourly wages or number of hours worked. Average quarterly earnings for former recipients ranged from $2,378 to $3,786 in the studies that either reported quarterly earnings or for which we estimated quarterly earnings. Extrapolating these quarterly earnings to a year results in average annual earned incomes ranging from $9,512 to $15,144. These amounts of annual earned income are greater than the maximum annual amount of cash assistance and food stamps that a three-person family with no other income could have received in these states. However, if these earnings were the only source of income for the families after they left welfare, many of them would remain below the federal poverty level. The question of whether a family is economically better off after leaving welfare than when receiving cash assistance is quite complex. The answer depends on many factors, including the amount of the cash benefit while on welfare, which varies by state, family size, and earnings while on welfare; family earnings and other sources of income; and aid after leaving welfare, as well as any work-related expenses. 
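As a rough arithmetic sketch of the extrapolation described above, the following fragment annualizes the endpoints of the reported quarterly earnings range and shows the appendix I approach of estimating quarterly earnings as average hourly wage times average weekly hours times 13 weeks; the hourly wage and hours in the second example are hypothetical. The Wisconsin analysis discussed next shows how such earnings compare with cash assistance in practice.

```python
# Annualizing quarterly earnings (quarterly amount x 4 quarters).
def annualize(quarterly_earnings):
    return quarterly_earnings * 4

print(annualize(2378))  # 9512  -- low end of the reported range
print(annualize(3786))  # 15144 -- high end of the reported range

# Where a study reported only an average hourly wage and weekly hours, quarterly
# earnings were estimated as wage x hours per week x 13 weeks (see app. I).
def estimated_quarterly_earnings(hourly_wage, hours_per_week):
    return hourly_wage * hours_per_week * 13

# Hypothetical figures for illustration only.
print(estimated_quarterly_earnings(6.50, 32))  # 2704.0
```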
For example, the 1995-96 Wisconsin study that tracked families for more than 15 months after they left welfare compared postwelfare earnings of these families to the maximum benefit they could have received under AFDC to see if families were economically better off after leaving welfare. The study found that whether postwelfare earnings exceeded the maximum AFDC benefit depended in part upon the number of children in the family. Postwelfare earnings exceeded the maximum AFDC cash benefit for 54 percent of the families with one child and for 41 percent of the families with three or more children. The study also noted that because some families combine welfare and work, the combination of the cash benefit and earnings could result in some families on welfare having more cash income than families with earnings only. The study showed that during their first year off welfare, less than half of the families had cash incomes higher than their incomes had been while on AFDC, including both benefits and earnings. While the tracking studies provide information on individuals’ earned incomes, much remains unknown about families’ total household income. For example, the studies generally do not provide information on whether others in a household have earnings or on other sources of household income, such as child support payments or financial assistance from relatives and friends. Moreover, most of the studies do not include comprehensive information on the receipt of other noncash benefits, such as food stamps, Medicaid, and child care or transportation assistance, or what employment-related expenses, including child care and transportation, households may have. Only three of the eight state studies had some information on household income. In the Oklahoma study, 57 percent of the former welfare families reported household incomes at or below the official poverty level. In the Indiana study, 57 percent of the families off welfare at follow-up reported monthly household income below $1,000. In contrast, Washington reported average total family income, including child support payments, equal to 130 percent of the federal poverty level for a family of three. According to the Washington study, 35 percent of the families who left welfare and had children received some child support, and 36 percent had at least one worker in the family other than the respondent to the survey. The 1995-96 Wisconsin study found that the proportion of families who had left and remained off welfare for at least 1 year who had earnings above the official poverty level varied by family size. While 35 percent of the families with one child and 24 percent of the families with two children had earnings above the poverty level, only 11 percent of the families with three or more children did. Although these studies do not provide a comprehensive picture of families’ financial situations, they consistently indicated that many of the families leaving welfare were employed at fairly low-paying jobs. Our recent report on TANF implementation in seven states and other studies indicate that many states and localities are providing support services, such as case management services and financial assistance with child care, to help former welfare recipients maintain their employment. Several states and localities have also undertaken efforts to help these low-wage workers upgrade their job skills to improve their job prospects. 
Moreover, the recently expanded earned income credit can increase the incomes of qualified low-income families by as much as $2,271 for families with one child and $3,756 for families with two or more children. Information on total household income and receipt of government supports is key to understanding the condition of former welfare recipients and the extent to which they continue to rely on government aid rather than becoming economically self-sufficient. The studies in five states reported on the extent to which former welfare families say they receive noncash public assistance. As shown in table 3, in these states, between 44 and 83 percent of the families who left welfare received Medicaid benefits, and between 31 and 60 percent received food stamps. The Wisconsin study that tracked families who left welfare between July 1995 and July 1996 for 15 months found significant decreases in the use of noncash public assistance over time. Forty-six percent of the former recipients who remained off welfare for at least 1 year received both Medicaid and food stamps in the first quarter after leaving welfare, and 28 percent received both in the fifth quarter after leaving cash assistance. Four studies had information on the receipt of child care subsidies. While receiving AFDC or TANF, families generally also receive Medicaid benefits to cover their health expenses. However, whether Medicaid benefits are retained after a family has left welfare depends on many factors, and health insurance coverage after leaving welfare varied in the states with these data. For example, about 9 percent of the children in families who left welfare in South Carolina, about 20 percent in Oklahoma, and 35 percent in Indiana did not have health insurance at the time of follow-up. For adults who left welfare in these states, 24 percent in Oklahoma, 32 percent in Washington, 48 percent in South Carolina, and 54 percent in Indiana did not have health insurance. While much attention is paid to welfare recipients who become employed and stay off the rolls, there is also interest in how those who are not employed and have not returned to welfare are faring. The South Carolina and Wisconsin surveys asked nonworking former recipients what stopped them from working for pay. In both states, the most frequently mentioned reason was their own physical or mental illness, followed by the inability to find a job, lack of transportation, and lack of child care. The Wisconsin study attempted to determine how these families were supporting themselves. Of the 142 former recipients not currently working, 18 percent were living with employed spouses or partners. Sixty-five percent of the families of the remaining nonworking former recipients were receiving Social Security, state unemployment insurance, child support, or foster care payments; 23 percent were not receiving cash assistance but were receiving noncash assistance, such as free housing, rent subsidies, Medicaid, or food stamps. The studies in seven states provided limited information on the family composition and well-being of former welfare families and the children in these families. Although a major goal of welfare reform was the promotion of two-parent families and the reduction of out-of-wedlock pregnancies, the tracking studies report only minimal information on family composition at the time of data collection, and no information on changes that may have occurred just before or after leaving welfare. 
The studies with surveys asked questions regarding family composition; however, these surveys did not provide information on changes in the number of children in a family, changes in marital status, or the formation of other two-parent families since a family left the welfare rolls. Further, beyond any inferences that could be drawn from the employment and earnings of parents, the studies provided little information on how former welfare children and families were doing relative to housing, health, education, food security, substance abuse, crime, and victimization. While some of the studies provided limited information on some of these factors, there are no comprehensive data on family and child well-being. Three studies—from Maryland, Oklahoma, and Washington—reported on the number of children in former recipient families that had ever been involved with child protective services. These studies found few cases in which children had been involved with child protective services since leaving welfare. For example, the Maryland study reviewed state data from its foster care program to determine the number of children placed in foster care after their families left welfare. This study reported that less than one-half of 1 percent of the children studied entered foster care after their families left cash assistance. Two studies, South Carolina’s as well as Wisconsin’s recent survey of families leaving welfare during the first quarter of 1998, asked former recipients to compare several aspects of their general well-being after leaving welfare with their situation when they were on welfare. Because Wisconsin used a modified version of the interview schedule developed in South Carolina, the data are comparable, even though the programs that the recipients experienced are not. Table 4 shows the results from the two states’ surveys. Former welfare recipients in both states were more likely to experience some deprivations after leaving welfare than while on welfare. At the same time, in South Carolina and Wisconsin, 76 and 68 percent, respectively, disagreed or strongly disagreed with the statement that “life was better when you were getting welfare.” Regarding housing status, an important aspect of well-being, the limited information from the studies did not suggest increased incidence of homelessness at the time of follow-up. The number of states conducting or sponsoring studies that track the status of families leaving welfare has increased in recent years, and state and federal efforts are under way to improve the usefulness of the data being collected. Thirty-nine states, including the 17 we identified in this report, and the District of Columbia already are tracking or plan to track families leaving welfare. In an attempt to improve the quality and comparability of these studies, HHS has funded several states and other jurisdictions to conduct tracking studies and is providing them technical assistance in conducting these studies. We have identified 39 states and the District of Columbia that are either planning to study former welfare families or are already doing so. Most of the 17 states that we discuss in this report are planning to continue their tracking and to enhance their efforts, in some cases with federal funds and in other cases with state funds. 
Maryland, for example, plans to survey former recipients to get the kind of detailed information about families’ lives that is not available in the program data upon which state officials currently rely in their ongoing longitudinal studies. Among the topics to be covered are how former welfare families are able to make ends meet; what enabled them to leave the welfare system; and, in the cases of those who returned to welfare, what brought them back. Idaho is trying to locate families that did not respond to its survey. Maryland, Massachusetts, South Carolina, Wisconsin, and Mecklenburg County in North Carolina have received funds from HHS to support their efforts to link administrative data systems for purposes of studying the effects of welfare reform on other state and federal public assistance programs. A South Carolina official told us that by linking TANF data to state unemployment insurance data, the state was able to locate many of the families that did not respond to its survey. To increase the usefulness of state tracking efforts in providing a more complete picture of the status of former welfare families, HHS is supporting some states and counties with funds and technical assistance. As part of its overall strategy to evaluate welfare reform, HHS has awarded grants to 14 projects covering 16 jurisdictions—10 states, five counties in 2 other states, and the District of Columbia—to support efforts to track, through administrative data, surveys, or other methods, former TANF recipients’ work transitions and receipt of other benefits, including supportive services. Each of these tracking efforts plans to collect information on one or more of the following: families diverted from welfare, eligible families who do not apply for benefits, and families who have left welfare. All 14 grantees will collect both administrative data and survey data on former recipients. (See app. III for information on the 14 studies.) In addition, HHS submitted its overall research plan for evaluating welfare reform to the National Academy of Sciences for guidance on research design and recommendations for further research. The National Academy has convened a panel of experts on program evaluation methods, survey design, administrative record analysis, state database development and analysis, and welfare policy and evaluation to review data needs and methods. One of the panel’s first activities was to conduct a workshop to review and assess the HHS grantees’ study methodologies. The workshop provided a forum in which representatives from states and counties that had been awarded grants were able to talk with experts about their planned tracking studies. The panel plans a 30-month study on a broad range of issues related to evaluating welfare, with an interim report to be ready in June 1999. HHS expects to use data from the 14 funded projects to generate a picture of what is happening to families exiting welfare and families diverted from ever entering welfare. In recognition of the need for high-quality research and comparable findings, HHS is providing technical assistance to the states directly, and through the National Academy panel, and is encouraging grantees to share information with one another. The National Academy panel is providing advice on issues of data quality and comparability as well as policy relevance. 
Initially, the 14 grantees have agreed to work toward increasing comparability across studies by using a common definition of welfare “leaver.” They have also agreed to clarify which studies will be tracking only families with adults and which will also track welfare cases that only include children. Finally, with the encouragement of HHS and the National Academy, the grantees will be sharing common approaches to studying such areas as insecurity and deprivation, child well-being, and changes in household composition. While we were able to learn about the status of former welfare recipients in several states, we could conclude little about the status of most families that have left welfare nationwide. However, the limited information on economic status of the families being tracked indicates that many families who leave welfare find jobs that are low-paying. The low wages of these jobs emphasize the importance that income supports, such as subsidized medical and child care and the earned income credit, can assume in these families’ total financial resources. As we noted in our earlier report on TANF implementation in seven states, federal and state policies and programs for assisting low-income working families are likely to play a critical role in helping these families remain off cash assistance and move toward economic self-sufficiency. But much remains unknown about most families leaving welfare nationwide. In our attempt to describe the condition of former welfare families, we were constrained by the data currently available from these early state tracking studies. More specifically, the high nonresponse rates in many state studies limit the usefulness of the results because generalizations cannot be made to all families of interest. Because those families who do not respond to surveys, or who may not show up in administrative data for other programs, may be the ones at greatest risk of negative outcomes, some policymakers and program officials are particularly concerned about not having enough information to determine the status of these families. In addition, for policymakers to better understand whether states are making progress in meeting the goals of TANF, more comprehensive information is needed on household income; receipt of government assistance; and changes in family composition, including increases in the number of two-parent families and additional births, especially to teens. And, finally, the data often are not comparable among the states. Consequently, even if each state had collected generalizable data on a comprehensive range of topics, it would often be difficult to generate a national picture from such studies. More comparable data would also be useful to individual states that want to understand how former welfare families fare in their states as compared with those in other states. In addition, comparable data among the states could help policymakers and program administrators at all levels of government identify promising approaches and practices for assisting low-income families. The limited nature of the information currently available emphasizes the importance of additional state efforts, such as those funded by HHS. The ongoing state efforts promise to provide a more complete picture in the future. Many more states have tracking studies in progress or planned and efforts are under way at the state and federal levels to improve the usefulness of these efforts. 
As HHS continues to work with states to support their efforts to collect data on families who have left welfare, it has an opportunity to help states develop more generalizable, comprehensive, and comparable data. We obtained comments on a draft of this report from HHS, which stated that the report provides useful information on the status of former welfare recipients and the varied efforts being made by states to follow up on the impacts of welfare reform. HHS also noted, however, how difficult it was to glean general results from such varied studies. It expressed concern that, while the report appropriately discusses the issues that preclude generalizing findings in many of the studies to the state level, the report does not address other factors, such as differences in the definitions of the populations studied and in states’ economic conditions, which make it difficult to report general results from any of the studies. HHS further suggested that rather than attempting to find areas of comparability in the studies, we should focus on the crucial differences between studies and emphasize the contribution that the HHS-funded state studies of families leaving welfare will make. Finally, HHS had concerns about our reliance upon studies with low response rates. We agree with HHS about the difficulties involved in discussing general results from the varied studies. We did not suggest, however, that results from the eight studies could be generalized beyond the states from which the study samples were drawn. We also pointed out that much remains unknown about most families who have left welfare nationwide. In addition, we were not attempting to report program impacts, which would require controlling for other factors that could affect family status, such as varying economic conditions. Rather, we focused on what is currently known about the status of former welfare families given the extent to which a particular state study was generalizable to the study population within the state. We also agree with HHS’ concern regarding low response rates, and this was the reason most of the studies were determined to not be generalizable and no data from them were included in the report. Finally, the report had already noted that we believe the HHS-funded state efforts will make an important contribution toward improving the usefulness of future studies and increase understanding of the condition of former welfare families. To address HHS’ concerns, we revised the report to place greater emphasis on the studies’ differences by moving detailed information on the studies’ varied populations, time periods, and methodologies from the appendix to the body of the report. We also added an additional caveat to the discussion about employment and earnings information, pointing out the lack of complete comparability among the studies. In addition, where results from several states were displayed, we added information on time periods and references to more detailed information on the studies’ populations and methodologies. HHS also made technical comments, which we incorporated where appropriate. (See app. IV for the text of HHS’ comments.) We also provided copies of the draft report to the 17 states whose studies we reviewed and to an expert on welfare reform issues. We incorporated their technical comments where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. 
At that time, we will send copies to the Honorable Donna E. Shalala, Secretary of the Department of Health and Human Services; state TANF directors; and other interested parties. We will also make copies available upon request. If you or your staff have any questions about this report, please contact me on (202) 512-7215. Other staff who contributed to this report are listed in appendix V. This appendix provides more detail on how we (1) identified state studies of families who had left welfare, (2) assessed the extent to which the studies could support statewide generalizations of results, and (3) summarized the findings on employment and income from the studies with results generalizable to each state's welfare population. To obtain information to answer this request, we searched for state studies of families who left welfare during or after 1995 that had published results by September 30, 1998. We began with a list of state tracking efforts prepared by a joint effort of the staffs of the National Conference of State Legislatures, the National Governors' Association, and the American Public Human Services Association. In addition, we talked to representatives of the 10 states with the largest welfare caseloads (California, Florida, Illinois, Michigan, New Jersey, New York, Ohio, Pennsylvania, Texas, and Washington) and asked if they had any studies of former welfare families for which results had been published. We also talked to welfare experts and asked them if they knew of any ongoing studies of former welfare recipients. Finally, we talked to representatives of those states that we identified as having published reports on the basis of their tracking efforts and asked what additional plans, if any, they had for tracking former welfare families and if they had updated their work. If a state had more recent information available, we included it in our analysis when possible, in some cases using a report published after September 30, 1998. Through this process, we identified 18 separate tracking efforts in 17 states. The 18 studies varied in degree of data completeness and statewide generalizability. We were interested in summarizing results that could reasonably be generalized to most families who left welfare in the state at the time of the study. We considered a study to be of an acceptable level of statewide generalizability if the study successfully obtained data on at least 70 percent of the sample of families for which it sought such data, or if a nonresponse analysis of the data showed that there were no important differences between families represented in the data and those missing from the data. Except for this assessment, we did not independently verify the data included in the state studies. Using this assessment, we identified eight studies representing seven states. The proportion of families that responded to surveys or for whom data were located in administrative databases in each study is shown in table I.1. [Table I.1, not reproduced here, lists for each of the 18 studies the population studied (for example, families who left TANF or AFDC during specified periods from 1995 through 1998, or families whose benefits were terminated for failure to comply with program rules) and the study's response rate; response rates were not available for some studies.]
We summarized the results for the eight studies that we considered could reasonably be generalized to the state level in the three major areas of interest: economic status, family composition, and family and child well-being. This effort was constrained by the different sources of data used by the state studies, different categories of families tracked, and different questions asked of respondents in the surveys. The area for which each state study had somewhat comparable data was economic status. The state studies with surveys generally asked employment status at the time of follow-up, hourly wage rate, number of hours worked, and whether the respondent had ever worked since leaving welfare. The two studies relying on state unemployment insurance program data to track employment could report only whether an individual had worked at some point during a 3-month period and total earnings for the period. Further, since state unemployment insurance programs do not cover some categories of employed individuals, program data would not have information on these individuals. For example, self-employed individuals and certain agricultural workers are generally not covered. The two studies reporting average quarterly earnings based on state unemployment insurance program data did not report average hourly wage or average number of hours worked. In order to make earnings data comparable among the eight studies, we estimated average quarterly earnings for the other studies for which we had, or could calculate, average hourly wage rates and average number of hours worked. We multiplied the average hourly wage in each of these studies by the study's reported average number of hours worked in a week and multiplied by 13 to estimate a quarterly wage. This enabled us to compare estimated quarterly earnings with the reported quarterly earnings. We also had to make some adjustment to ensure that the employment rates were for comparable categories of families. Although some of the studies reported employment rates only for adults who left welfare and were still off the rolls at the time of follow-up, others included all families who left the rolls during the study period—even those who had returned to welfare at the time of follow-up. To estimate comparable employment rates, we removed from the calculation data on the individuals who returned to the rolls and assumed that those who returned to the rolls were not employed. [The state study reports reviewed (app. II) are as follows.] Project Self-Reliance TAFI Participant Closure Study (II), Idaho Department of Health and Welfare, spring 1998. The Indiana Welfare Reform Evaluation: Assessing Program Implementation and Early Impacts on Cash Assistance, Abt Associates, Inc., Aug. 1997. The Indiana Welfare Reform Evaluation: Who Is On and Who Is Off? Comparing Characteristics and Outcomes for Current and Former TANF Recipients, Abt Associates, Inc., Sept. 1997. The Indiana Welfare Reform Evaluation: Program Implementation and Economic Impacts After Two Years, Abt Associates, Inc., and The Urban Institute, Nov. 1998. Iowa's Limited Benefit Plan: Summary Report, Mathematica Policy Research, Inc., and the Institute for Social and Economic Development, May 1997.
A Study of Well-Being Visits to Families on Iowa's Limited Benefit Plan, Mathematica Policy Research, Inc., June 1998. From Welfare to Work: Welfare Reform in Kentucky, Welfare Reform Evaluation No. 1, Center for Policy Research and Evaluation, Urban Studies Institute, University of Louisville, Jan. 1998. Exiting Welfare: The Experiences of Families in Metro New Orleans, School of Social Work, Southern University at New Orleans, June 1998. Life After Welfare: An Interim Report, University of Maryland School of Social Work, Sept. 1997. Life After Welfare: Second Interim Report, University of Maryland School of Social Work, Mar. 1998. A Study of AFDC Case Closures Due to JOBS Sanctions April 1996, Michigan Family Independence Agency, May 1997. Montana's Welfare Reform Project: Families Achieving Independence in Montana FAIM, February 1998 Update, Montana Department of Public Health & Human Services, Feb. 12, 1998. WFNJ (TANF) Sanction Survey, New Jersey Department of Human Services, July 2, 1998. Survey of the New Mexico Closed-Case AFDC Recipients July 1996 to June 1997, Final Report, University of New Mexico, Sept. 1997. Family Health & Well-Being in Oklahoma: An Exploratory Analysis of TANF Cases Closed and Denied October 1996 to November 1997, Oklahoma Department of Human Services, Sept. 1998. TANF Closed-Case Telephone Survey, Pennsylvania Department of Public Welfare, Mar. 1998. Survey of Former Family Independence Program Clients: Cases Closed During January Through March 1997, South Carolina Department of Social Services, Division of Program Quality Assurance, Mar. 3, 1998. Survey of Former Family Independence Program Clients: Cases Closed During July Through September 1997, South Carolina Department of Social Services, Division of Program Quality Assurance, Oct. 9, 1998. Summary of Surveys of Welfare Recipients Employed or Sanctioned for Non-Compliance, University of Memphis, Mar. 1998. Washington's TANF Single Parent Families Shortly After Welfare: Survey of Families Which Exited TANF Between December 7 and March 1998, Washington DSHS Economic Services Administration, July 1998. Washington's TANF Single Parent Families After Welfare, Washington DSHS Economic Services Administration, Jan. 1999. Post-Exit Earnings and Benefit Receipt Among Those Who Left AFDC in Wisconsin, Institute for Research on Poverty, University of Wisconsin-Madison, Aug. 17, 1998. Post-Exit Earnings and Benefit Receipt Among Those Who Left AFDC in Wisconsin, Institute for Research on Poverty, University of Wisconsin-Madison, Oct. 30, 1998. Survey of Those Leaving AFDC or W-2 January to March 1998, Preliminary Report, State of Wisconsin, Department of Workforce Development, Jan. 13, 1999. A Survey of Former POWER Recipients (Personal Opportunities With Employment Responsibilities), Western Management Services, LLC, for Wyoming Department of Family Services, May 1998. This appendix provides information on the study methodologies planned by the jurisdictions receiving grants from HHS to study families who have left welfare. [The appendix III table, not reproduced here, summarizes each HHS-funded study, including the timing of follow-up surveys (months after exit) and the data sources used; one data-source category includes other data sources, such as sources of tax, welfare-to-work, and health information.]
Welfare Reform: States' Experiences in Providing Employment Assistance to TANF Clients (GAO/HEHS-99-22, Feb. 26, 1999). Domestic Violence: Prevalence and Implications for Employment Among Welfare Recipients (GAO/HEHS-99-12, Nov. 24, 1998). Welfare Reform: Early Fiscal Effects of the TANF Block Grant (GAO/AIMD-98-137, Aug. 18, 1998). Welfare Reform: Child Support an Uncertain Income Supplement for Families Leaving Welfare (GAO/HEHS-98-168, Aug. 3, 1998). Welfare Reform: Many States Continue Some Federal or State Benefits for Immigrants (GAO/HEHS-98-132, July 31, 1998). Welfare Reform: Changes Will Further Shape the Roles of Housing Agencies and HUD (GAO/RCED-98-148, June 25, 1998). Welfare Reform: States Are Restructuring Programs to Reduce Welfare Dependence (GAO/HEHS-98-109, June 18, 1998). Welfare Reform: Transportation's Role in Moving From Welfare to Work (GAO/RCED-98-161, May 29, 1998). Medicaid: Early Implications of Welfare Reform for Beneficiaries and States (GAO/HEHS-98-62, Feb. 24, 1998). Welfare Reform: States' Efforts to Expand Child Care Programs (GAO/HEHS-98-97, Jan. 13, 1998). Welfare Reform: States' Early Experiences With Benefit Termination (GAO/HEHS-97-74, May 15, 1997). Welfare Waivers Implementation: States Work to Change Welfare Culture, Community Involvement, and Service Delivery (GAO/HEHS-96-105, July 2, 1996).
Pursuant to a congressional request, GAO provided information on families no longer receiving welfare, focusing on: (1) the extent to which states have reported information on the condition of families who have left welfare in the following key areas: (a) economic status; (b) family composition; and (c) family and child well-being; (2) generalizable state studies on what is known about the status of former welfare families in the key areas; and (3) federal and state efforts to improve the usefulness of the data obtained through these state efforts.
GAO noted that: (1) 17 states have published information on the status of their families who have left welfare; (2) each of these states reported on the economic status of former welfare recipients, and the majority reported on family composition and family and child well-being; (3) the studies differed in important ways, including the categories of families studied, geographic scope, the time during which families who had left welfare were tracked, and the extent to which the families for whom data were available are representative of all families in the sample; (4) taking these factors into account, GAO determined that studies from only 7 of the 17 states had enough information on a sample of families to generalize findings to most families who had left welfare in the state at the time of the study; (5) these seven states' studies reported that most of the adults in families remaining off the welfare rolls were employed at some time after leaving welfare, but significant numbers of families also returned to the rolls; (6) in the three studies that reported the information, from 19 to 30 percent of the families who left welfare returned to the rolls at some time during the follow-up period; (7) although the seven states' studies generally had limited data on total household income, five reported that many families who had left welfare subsequently received noncash public assistance, such as Medicaid and food stamps, indicating that families' incomes were low enough to keep them eligible for these forms of government assistance; (8) none of the studies reported on changes in family composition resulting from marriage or pregnancy after leaving welfare; (9) regarding measures of well-being, six states' studies included data on homelessness or separation of children from their parents and reported no indication of increased incidence of these outcomes at the time of the follow-up; (10) efforts are under way at both the federal and state levels to improve the usefulness of the data being collected to assess the status of former welfare families; (11) most states either are studying or plan to study former welfare families, and the Department of Health and Human Services (HHS) has recently funded 14 projects to track and monitor families who have left welfare; (12) these projects will receive technical assistance through HHS and from other states on developing their tracking efforts; and (13) in the future, these ongoing state efforts, many supported by HHS, should provide a more complete picture of the status of families who have left welfare.
On January 1, 2000, computer systems worldwide could malfunction or produce inaccurate information simply because the century has changed. Unless corrected, such failures could have a costly, widespread impact. The problem is rooted in how dates are recorded and computed. For the past several decades, systems have typically used two digits to represent the year—such as "97" for 1997—to save electronic storage space and reduce operating costs. In such a format, however, 2000 is indistinguishable from 1900. Software and systems experts nationwide are concerned that this ambiguity could cause systems to malfunction in unforeseen ways, or to fail completely. As we reported to you and testified to Congress earlier this year, the public faces a risk that critical services could be severely disrupted by the Year 2000 computing crisis. Financial transactions could be delayed, airline flights grounded, and national defense affected. Also, the many interdependencies that exist among governments and within key economic sectors could cause the failure of a single system to have adverse repercussions across the nation and internationally. While managers in the government and the private sector are taking many actions to mitigate these risks, a significant amount of work remains, and time frames are unrelenting. One key concern in addressing the Year 2000 problem is the availability of trained information technology personnel. We reported in April 1998 that, while various agencies stated that they or their contractors had problems in obtaining or retaining information technology personnel, no governmentwide strategy existed to address recruiting and retaining the personnel with the appropriate skills for Year 2000-related work. We recommended that the Chairman of the President's Council on Year 2000 Conversion develop a personnel strategy that includes (1) determining the need for various information specialists, (2) identifying any administrative or statutory changes that would be required to waive reemployment penalties for former federal employees, and (3) identifying ways to retain key Year 2000 staff in agencies through the turn of the century. We reemphasized the need for such a strategy in a June 1998 testimony. Within the executive branch, several executive councils and agencies are responsible for the human resources aspect of the Year 2000 effort. These councils and agencies, and their respective Year 2000 responsibilities, are described below: The President's Council on Year 2000 Conversion is chaired by an Assistant to the President and is composed of one representative from each of the executive departments and from other federal agencies as determined by the chair. The chair of the Conversion Council is tasked with the following Year 2000 roles: (1) overseeing federal agencies' Year 2000 activities, (2) acting as chief spokesperson in national and international forums, (3) providing policy coordination of executive branch activities with state, local, and tribal governments, and (4) promoting appropriate federal roles with respect to private sector activities. To date, the Conversion Council has established several working groups to address Year 2000 concerns. One of these is the Workforce Issues working group, which began meeting in May 1998. The working group is chaired by the Deputy Secretary of the Department of Labor and includes officials from the Departments of Labor, Education, Housing and Urban Development, Commerce, and Defense, as well as OPM and the Small Business Administration.
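To make the two-digit date ambiguity described at the opening of this section concrete, the short sketch below shows how simple year arithmetic breaks at the century change and how one common remediation, windowing, interprets two-digit years against a pivot value; the pivot used here is illustrative and not drawn from any particular federal system.

```python
# Two-digit year storage: "00" cannot be distinguished from 1900 or 2000.
def age_in_years(birth_yy, current_yy):
    """Naive two-digit arithmetic of the kind that fails at the century change."""
    return current_yy - birth_yy

print(age_in_years(65, 99))  # 34  -- correct for a person born in 1965
print(age_in_years(65, 0))   # -65 -- nonsense once "00" means 2000

# One common remediation, windowing, expands two-digit years against a pivot.
def expand_year(yy, pivot=50):
    """Treat values below the pivot as 20xx and the rest as 19xx (illustrative pivot)."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(0))   # 2000
print(expand_year(97))  # 1997
```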
The group's main objective is to determine what the federal government can do to help meet the country's needs for technically skilled personnel to address the Year 2000 problem, with special attention to small businesses, local governments, and organizations in rural areas. The CIO Council is composed of CIOs and Deputy CIOs from 30 federal departments and agencies, representatives from OMB, and the chairs of the Government Information Technology Services Board and Information Technology Resources Board. It is the principal interagency forum for improving the design, modernization, use, sharing, and performance of information technology resources. The Council's role includes (1) developing recommendations for information technology management policy, procedures, and standards, (2) identifying opportunities to share information resources, and (3) assessing and addressing the needs of the federal government for an information technology workforce. One of the committees reporting to the CIO Council, the Education and Training Committee, is charged with addressing issues in hiring, training, and maintaining an effective federal information technology workforce. OMB is responsible for overseeing federal agencies' responses to the Year 2000 problem. In early 1997, OMB issued a broad Year 2000 strategy for the federal government and required that 24 major agencies submit quarterly reports on their Year 2000 progress. On January 20, 1998, OMB added new quarterly reporting requirements, specifically asking agencies to provide a narrative description of progress, including a description of any problems affecting progress and, in particular, any problems in acquiring or retaining skilled personnel. In March and April 1998, OMB requested that an additional 31 small agencies report their progress to OMB by April 30, 1998, and another 10 small agencies and other entities, such as the Tennessee Valley Authority and the United States Postal Service, report by May 15, 1998. Most recently, in July 1998, OMB revised its earlier reporting requirements and asked that nine small and independent agencies begin providing quarterly reports on progress in addressing difficulties relating to the Year 2000 problem. OPM, the federal government's human resources agency, provides federal agencies with personnel services and policy leadership, including staffing tools, guidance on labor-management relations, preparation of government's future leaders, compensation policy development, and programs to improve workforce performance. OPM is responsible for helping agencies to equip themselves with the systems they need to manage their human resources effectively and, in light of the Year 2000 problem, is providing tools that agencies may use to help recruit and retain information technology professionals. Of the 24 large agencies reporting to OMB, 13 expressed concerns about the availability of information technology personnel. Also, 10 of the 41 small agencies and independent entities expressed these concerns. These organizations' concerns generally fall into the categories of difficulty in recruiting and retaining internal staff and in obtaining contractor support. Appendix II identifies the organizations reporting workforce issues and summarizes those issues. It also identifies the organizations with no reported concerns. Both large and small agencies and entities reported that retaining key information technology staff and recruiting new staff were among their greatest concerns in addressing the Year 2000 problem.
Agencies indicated that they had lost skilled information technology employees through retirements and through increased recruitment by the private sector. For example, in May 1998, the Department of Agriculture reported that several of its agencies expressed particular concern that the loss of staff would affect their ability to meet Year 2000 deadlines. Specifically, the Farm Services Agency reported that it lost 28 (7 percent) of its 403 information technology staff in the first 6 months of fiscal year 1998. Agencies have also reported that recruiting replacements for information technology personnel is difficult. The Department of Veterans Affairs noted that recruiting is very competitive for Year 2000 professionals in some geographic areas, and stated the concern that with lucrative finders fees being advertised, government employees may leave for the private sector. Also, the National Credit Union Administration reported that it had experienced difficulties in hiring senior Year 2000 program officials. Among the various types of information technology workers needed, computer programmers are reported to be in the shortest supply. For example, in February 1998, the Justice Department reported that it had difficulty hiring and retaining skilled COBOL programmers. In August, the Department reported that it continues to encounter these problems. As another example, in its February, May, and August reports, the Environmental Protection Agency stated that it experienced problems finding PL/I programmers. Recruiting and retaining qualified contract personnel is another issue frequently mentioned by agencies reporting staffing problems. Once again, the specific type of information technology worker most often mentioned as being in short supply is programmers. For example, the Department of Justice reported in February, May, and again in August that it is continuing to encounter problems in obtaining contractor support with the necessary programming skills. Agencies’ concerns also include the high turnover rate of contractor staff and the time it takes to recruit contractor staff. For example, the State Department indicated in May 1998 that the recruitment cycle for replacing contract systems programmers took more time than in past years despite the use of professional recruitment services by the contract vendors. Also, several agencies noted problems with increasing contract labor wage rates. These agencies reported that they are having to increase the hourly rate they pay for contractor staff because contractors are increasing their own staff salaries. For example, the Department of Agriculture indicated that its agencies are experiencing contracting delays as vendors find it increasingly difficult to bring on more contract employees without substantial increases in contract dollars. In addition, the Federal Deposit Insurance Corporation reported that some of its contractors are losing personnel to higher salaries with other contractors. When replacing these personnel, the contractors are increasing their hourly rates. Four agencies reported delays on six system development efforts because of problems they had encountered in obtaining contractors to address the Year 2000 problem. For example, the State Department reported that the loss of key contractor personnel had delayed the completion of one of its mission critical systems, the Management, Policy and Planning Information System, by 3 months. 
The Department of Commerce also reported that its Patent and Trademark Office experienced a 3-month delay on one of its mission-critical system development efforts, the Classified Search and Image Retrieval system, when the contractor was unable to place qualified staff on the task. The task order was terminated with that contractor, a new task order was awarded to another contractor, and work is now proceeding. Although a significant number of agencies are reporting concerns with the availability of qualified Year 2000 staff, it is not possible to determine the full extent or severity of personnel shortages from these concerns because they are often anecdotal. For example, one department notes that one of its agencies “has had difficulty” hiring a particular type of programmer, while another reports that it is encountering “some problems” hiring personnel. Also, only six mission critical systems were reported to have “experienced delays” in reaching Year 2000 compliance because of personnel issues. Without more detailed information on the nature and extent of personnel issues, it is difficult to determine how to best address it. OPM, the Conversion Council, and the CIO Council have various initiatives underway to address Year 2000 personnel issues: OPM has provided tools to assist agencies in dealing with Year 2000 workforce issues; the Conversion Council is identifying solutions to personnel shortages in both the government and the private sector; and the CIO Council has initiated a broad study of information technology workforce issues in the government and private sector. Agencies currently have a number of aids they can use to help recruit and retain needed personnel. Some have been available for years. Others are new. These aids are summarized below: Recruitment and relocation bonuses: Federal agencies have the authority to make a lump-sum payment of up to 25 percent of basic pay to a newly appointed employee, or to an employee who must relocate in cases in which the agency determines that the position would otherwise be difficult to fill. In return for this lump-sum payment, the employee must sign a service agreement with the agency to complete a specified period of employment. Superior qualifications appointments: Agencies have the authority to set pay for new appointments or reappointments of individuals to General Schedule (GS) positions above step 1 of the grade on the basis of the candidate’s superior qualifications or the agency’s special need. Pay at highest previous rate: Upon reemployment, transfer, reassignment, promotion, or change in type of appointment, agencies can set an employee’s basic pay by taking into account the employee’s previous pay rate while employed in another civilian federal position (with certain exceptions). Temporary and term appointments: Agencies can use temporary appointments in the competitive service for positions not expected to last longer than 1 year, but which can be extended for 1 additional year. Agencies can use term appointments when positions are expected to last longer than 1 year but not more than 4 years. Retention allowances for individual employees: Agencies have the authority to make continuing payments of up to 25 percent of basic pay if the agency determines that (1) the unusually high or unique qualifications of the employee or the agency’s special need for the employee’s services makes it essential to retain the employee and (2) the employee would be likely to leave federal service in the absence of a retention allowance. 
Retention allowances must be paid in accordance with the agency’s previously established retention allowance plan and must be reviewed and certified annually. Performance and incentive awards: Agencies can provide employees a lump-sum cash award on the basis of a fully successful or better rating of record or in recognition of accomplishments that contribute to the improvement of government operations. Awards based on the rating of record can be up to 10 percent of salary, or up to 20 percent for exceptional performance, provided the award does not exceed $10,000 per employee. With OPM review and approval, agencies can grant awards over $10,000, up to $25,000. Any award that would grant over $25,000 to an individual employee must be reviewed by OPM for submission to the President for approval. Quality step increases: Agencies have the authority to increase an employee’s pay by providing an additional step increase to an employee who has received the highest rating of record available in the agency’s performance appraisal program. Training and education costs reimbursement: Agencies can pay or reimburse an employee for all or part of the necessary expenses for training and education, including the costs for college tuition. Agencies may require service agreements for training of long duration or of high cost. Advance payments for new appointees: Agencies may advance a new hire up to two paychecks so that a new employee can meet living and other expenses. Special salary rates: Agencies may request a higher salary rate for an occupation or group of occupations nationwide or in a local area based on a finding that the government’s recruitment or retention efforts are, or would likely become, significantly handicapped without those higher rates. The minimum of a special rate range may exceed the maximum of the corresponding grade by as much as 30 percent. However, no special rate may exceed the rate for Executive Level V (currently $110,700). A special rate request must be submitted to OPM by department headquarters and must be coordinated with other federal agencies with employees in the same occupational group and geographic area. Dual compensation waivers for retirees: On March 30, 1998, OPM issued a memorandum announcing that agencies could reemploy federal retirees (civilian and military) to work specifically on the Year 2000 conversion without the usually required reduction in the retiree’s salary or military annuity. With OPM’s determination that the Year 2000 computer conversion problem is an “unusual circumstance,” agencies can request delegated authority from OPM to rehire former federal personnel (up to a maximum number of individual exceptions approved by OPM) on a temporary basis through March 31, 2000. Premium pay for employees performing emergency work: Agencies have authority under the law and OPM regulations to make exceptions to the biweekly limitation on premium pay (including overtime, night, and holiday pay) when the head of an agency or his or her designee determines that an emergency involving a direct threat to life or property exists. In its March memorandum, OPM also encouraged agency heads to exercise this authority in the case of any employee who performs emergency work to resolve a direct threat to property (including monetary errors or cost) in connection with updating computer systems to prevent malfunction, erroneous computations, or other problems associated with the Year 2000. 
By exercising this authority, agencies will be able to compensate employees who perform significant amounts of overtime work related to the Year 2000 problem, as long as the total of their basic pay and premium pay does not exceed a certain rate. Exclusions from early retirement programs: On June 15, 1998, OPM issued interim regulations allowing agencies, with OPM approval, to limit the scope of voluntary early retirement offers when separating or downgrading employees in a major reorganization, a major reduction in force, or a major transfer of functions. Agencies can limit their retirement offers on the basis of (1) occupational series or level, (2) organizational unit, (3) geographic area, (4) specific window periods, (5) other similar nonpersonal factors, or (6) any combination of these factors that the agency determines appropriate. Using this tool, agencies can exclude critical Year 2000 positions from any voluntary early retirement programs they offer. Retention allowances for groups or categories of employees: On June 23, 1998, OPM issued interim regulations allowing agencies to authorize a retention allowance of up to 10 percent of an employee’s rate of basic pay (or up to 25 percent with OPM approval) for a group or category of employees such as computer programmers and system engineers. Retention allowances authorized for a category of employees must be based on a written determination that (1) the category of employees has unusually high or unique qualifications, or (2) the agency has a special need for the employees’ services that makes it essential to retain the employees in that category, and (3) it is reasonable to presume that there is a high risk that a significant number of employees in the targeted category are likely to leave federal service in the absence of the allowance. The Conversion Council’s Year 2000 Workforce Issues working group began meeting in May 1998 to address some of the Year 2000 workforce issues in both the government and private sector, focusing on three areas: (1) raising awareness of the Year 2000 problem, (2) helping managers assess their particular situations, and (3) connecting managers with solution-providers, including programmers, project managers, and those familiar with embedded systems. In July 1998, the working group released a draft sector action plan that lists key activities the group is undertaking. Specifically, the group established an Internet site to link information technology workers with the companies that need them to solve the Year 2000 problem; is attempting to determine the effect of workforce issues on local communities by surveying community colleges; and is exploring outreach activities, such as having community colleges raise awareness of the Year 2000 problem within their communities and assist in solving the problem. Although some of these initiatives may benefit the government, the working group is clearly adopting a nationwide focus and is not solely targeting the federal workforce issue. In March 1998, the CIO Council tasked its Education and Training Committee with crafting recommendations and legislation by May 1998 to help agencies recruit and retain information technology personnel. However, the committee found that in order to develop recommendations, it first needed more information about the problem. In May, the committee formed five focus teams to study the federal information technology workforce challenge.
The focus teams will address the following areas: (1) national workforce strategies, (2) federal workforce planning, (3) recruitment, (4) retention, and (5) executive development. Each team will present its findings at a forum of CIO Council Members, Human Resource Council members, and OPM personnel currently planned for November 1998. The committee expects to prepare a final conference report to the CIO Council after the November 1998 forum. According to the Co-Chair of the Committee, part of the study will be focused specifically on the Year 2000 personnel issue and determining the extent and scope of the personnel problems that exist for the Year 2000 problem. It is unclear, however, that this committee will produce timely recommendations because the final report is scheduled to be issued after November 1998, which may be too late to address the Year 2000 workforce issues. While the executive councils and OPM have initiatives underway to help resolve Year 2000 workforce issues, it is not yet known if these efforts will effectively address federal agencies’ concerns. OPM has developed new human resources management flexibilities, the Conversion Council working group is identifying solutions that are applicable to private industry, and the CIO Council is studying the problem. No organization, however, is working with individual agencies to determine how significant their personnel concerns are, and if they can be adequately resolved through existing human resources management tools. Given that the potential consequences of having an inadequate workforce to tackle critical Year 2000 conversions are severe, such an endeavor seems worthwhile. A significant number of agencies have reported concerns about the availability of information technology personnel needed to address their Year 2000 problems. Also, the executive councils and OPM have a number of initiatives underway to address perceived personnel shortages. However, it is not yet known whether these efforts will ensure an adequate supply of qualified personnel to solve the government’s Year 2000 problem. Various organizations have responsibilities in this arena. While individual agencies are in the best position to determine if current tools adequately resolve their own Year 2000 personnel issues, OMB is responsible for overseeing federal agencies’ responses to the Year 2000 problem, and OPM has both the knowledge of existing personnel management options and, in some cases, the authority to waive existing rules or develop new approaches. The workforce issue could quickly become more complicated. As awareness of the criticality of the Year 2000 problem grows throughout government and industry, there is a chance that competition for limited skilled personnel will increase. If this more vigorous competition occurs, the government may find it increasingly difficult to obtain and retain the skilled personnel needed to correct its mission critical systems in time. Given the adverse consequences of any staffing shortages, it is critical that agencies be able to quickly determine if mechanisms currently exist to resolve personnel issues or if additional solutions are needed. 
Given the likelihood that critical government operations will cease if key systems are not made Year 2000 compliant, we recommend that the Director of the Office of Management and Budget, as part of the agency’s monitoring responsibilities for the government’s Year 2000 program, determine if recent OPM initiatives have satisfactorily addressed agencies’ reported personnel problems. If these problems have not been addressed by existing OPM tools, the Director of the Office of Management and Budget should designate an OMB official who, together with OPM and the CIO Council, would proactively and quickly help individual agencies resolve their Year 2000 workforce concerns. We also recommend that the Director of the Office of Management and Budget work with the CIO Council to expedite the portions of its ongoing study that are relevant to the Year 2000 problem, with a goal of issuing its Year 2000-related recommendations as soon as possible. The Chairman of the President’s Council on Year 2000 Conversion, as well as officials representing the CIO Council, OMB, and OPM provided oral comments on a draft of this report. These officials concurred with the report and our recommendations. They also offered several technical suggestions which we have incorporated as appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Committee on Appropriations and the House Committee on Government Reform and Oversight; the Ranking Minority Member of the House Committee on Banking and Financial Services; the Co-Chairs of the House Year 2000 Task Force; the Chairman of the President’s Council on Year 2000 Conversion; the Director of the Office of Management and Budget; and the Director of the Office of Personnel Management. Copies will also be made available to others upon request. If you have any questions about this report, please contact us at (202) 512-6253 and (202) 512-8676, respectively. We can also be reached by e-mail at [email protected] and [email protected]. Major contributors to this report are listed in appendix III. To determine the nature and extent of the Year 2000 personnel issues being reported by federal agencies, we reviewed and analyzed the Year 2000 progress reports submitted to OMB by 24 large agencies in February, May, and August 1998, by 40 of the 41 small agencies and entities in April and May 1998, and by 9 of those same small agencies and entities that were requested to report in August 1998. In addition, when personnel issues were not specifically addressed by these agencies in their progress reports or when the progress reports were not submitted to OMB, we conducted telephone interviews with agency officials to determine if the agencies were experiencing personnel problems related to the Year 2000 problem. Further, after reviewing the August 1998 reports, we conducted telephone interviews with agency officials when an agency was newly reporting it had no personnel problems. We did this to determine if prior concerns had been resolved. We did not independently assess the reliability of the information provided by the agencies. To identify what is being done to address personnel shortages related to the Year 2000 problem, we evaluated the Year 2000 personnel efforts of OPM, the Human Resources Technology Council, and the CIO Council’s Education and Training and Year 2000 Committees. We also reviewed the efforts of the Workforce Issues Working Group of the President’s Council on Year 2000 Conversion. 
In addition, we interviewed officials from each of these groups. We conducted our review from May 1998 through August 1998 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Chair of the President’s Council on Year 2000 Conversion, the Chair of the CIO Council, and OPM and OMB management and incorporated their comments as appropriate. Table II.1 summarizes the concerns identified by the various agencies and other entities in their reports to OMB. In cases where the agencies and entities did not specifically report on personnel issues, we interviewed agency officials to determine if the agencies were experiencing personnel problems related to the Year 2000 problem. Also, in cases where agencies newly reported that they had no personnel problems in August 1998, we interviewed agency officials to determine if prior concerns had been resolved. We did not independently assess the reliability of the information provided by the agencies. Table II.2 lists the agencies and entities that identified no concerns with the availability of personnel to address the Year 2000 problem. Large departments and agencies (13): The agency is experiencing increased attrition, and reported concern with the short supply of human resources and the upward pressure on salaries of key personnel. A contractor hired to perform legacy system maintenance and Year 2000 compliance services did not supply the key officials as provided in the contract for 5 weeks. The department is encountering high turnover and is finding it difficult to compete with higher salaries being offered by private industry. Vendors are finding it increasingly difficult to bring on contract employees without substantial increases in contract dollars and have lost contract employees who have left for better paying positions. The department reported that it continues to experience difficulties in finding and hiring qualified information technology personnel. A key contractor was unable to provide qualified staff, causing contract delays. The department reported that one facility has experienced difficulty in retaining and recruiting programming resources for the Year 2000 effort. Efforts to contract for mainframe systems programmers at one facility have not been successful. The agency experienced problems in finding programmers needed to fix key payroll systems. While the agency has located contractor resources, the costs are higher than the agency has historically paid. The department reported concern that unforeseen retirements could affect its Year 2000 efforts. The department is experiencing difficulty acquiring and retaining skilled personnel, particularly COBOL programmers. The department is encountering problems in obtaining contractor support with necessary programming skills. The department is experiencing problems in acquiring and retaining skilled programmers. The department reported concerns with the availability of trained contractor staff and that the turnover rate tends to be high due to current market conditions. The Ames Research Center, located in the Silicon Valley area, is finding it a challenge to hire qualified contract programmers. The department experienced high turnover in systems support personnel, and is now facing severe staffing shortages. The department reported that recruiting to replace contract systems programmers has taken more time than in past years and has resulted in a labor rate increase.
The department reported that one agency’s contractors have experienced problems in finding qualified programmers. The department encountered an increased rate of attrition of its information systems workforce. It reported that skilled programmers, especially those with skills in legacy platforms, are in strong demand with the private sector, which can pay significantly higher salaries than the government. The department reported that recruiting is very intensive for Year 2000 professionals in some geographic areas and expressed the concern that government employees may leave for the private sector because of the lucrative “finders fees” being advertised. Contractors are having difficulty finding and retaining personnel. Small agencies and other entities (10): The agency reported that retention and recruitment could become key issues if key computer programmers and/or network personnel decide to leave the agency. The agency expressed concern about retaining sufficient qualified contractors to carry out needed work as the demand for Year 2000 programmers increases. The agency has encountered problems with contractors who are losing personnel for higher salaries at other contractors. When replacing the contractor staff, they are increasing the hourly rate. The agency reported that it has insufficient personnel resources to accomplish all Year 2000 renovation, testing, and implementation work in-house. The agency experienced delays in scheduling the conversion of its systems due to competing workloads. Also, it has experienced difficulties in hiring senior Year 2000 program officials. The agency has encountered increased competition for skilled Year 2000 personnel, and has had to settle for Year 2000 contractors with fewer skills than needed because the contractors that possessed all the desired skills generally cost too much. Also, the agency reported that its consultants are paying their staffs more to retain them, and these costs are being passed on to the agency. The office reported that the Year 2000 program is severely straining the workload of existing information systems and technology personnel, and that any diversion of personnel to the Year 2000 program creates a potential support problem for ongoing or day-to-day operations. The office reported that finding contractor personnel with the appropriate skill level to analyze legacy systems and to recommend alternatives continues to be a problem. The corps reported its concerns with retaining information resources management staff and filling vacancies. The agency has encountered problems matching information technology skill sets with specific Year 2000 needs and has found that there is a strong employment market for information technology skills. It also reported that salaries have increased for all information technology skills, not just for Year 2000 staff. The service reported that retaining skilled resources needed for remediation and testing continues to be a challenge which is exacerbated by a limited labor pool. The service reported that retaining skilled contractor staff continues to be a challenge. Table II.2 identifies 11 large departments and agencies and 31 small agencies and other entities reporting no such concerns, including the Armed Forces Retirement Home Board, Corporation for National and Community Services, Defense Nuclear Facilities Safety Board, Export-Import Bank of the United States, Federal Home Loan Mortgage Corporation, Federal Retirement Thrift Investment Board, John F.
Kennedy Center for the Performing Arts, U.S. Arms Control and Disarmament Agency, and U.S. Trade Representative, Executive Office of the President. Four large and three small agencies did not state concerns in their most recent reports to OMB, but told us of them in subsequent discussions. These agencies are not included in this list.

Glenda C. Wright, Senior Information Systems Analyst

Pursuant to a congressional request, GAO reviewed workforce issues associated with the year 2000 computing crisis, focusing on: (1) the nature and extent of year 2000 personnel issues being reported by federal agencies; and (2) what is being done by the government to address reported federal personnel shortages related to the year 2000 problem. GAO noted that: (1) about half of the 24 large agencies and a quarter of the 41 small agencies and independent entities reporting to the Office of Management and Budget (OMB) expressed concerns that the personnel needed to resolve the year 2000 problem would not be available; (2) generally, these concerns fall into the categories of difficulty in finding and keeping qualified government personnel, and difficulty in obtaining contractors; (3) while a significant number of agencies are raising these concerns, their comments are largely anecdotal and a comprehensive analytical assessment of the issue has not yet been made; (4) as a result, the full extent and severity of the year 2000 workforce issue across the government is not known; (5) the President's Council on Year 2000 Conversion, the Chief Information Officers (CIO) Council, and the Office of Personnel Management (OPM) have various initiatives under way to address reported year 2000 personnel issues; (6) for example, OPM has recently developed additional human resources management aids to assist agencies in dealing with year 2000 workforce issues; (7) while such initiatives have provided agencies with important options to help address reported year 2000 personnel problems, it is not yet clear that recent actions have enabled agencies to successfully resolve all perceived personnel issues; (8) accordingly, it is essential that OMB, as part of its monitoring responsibilities for the government's year 2000 program, continue to solicit from agencies whether they have any remaining year 2000 personnel problems and to help provide specific assistance to individual agencies; and (9) moreover, OMB should work with the CIO Council to expedite evaluations of the full extent and scope of information technology personnel issues to help formulate effective
solutions.
As a result of the rapid growth in computer technology, the Department of Defense, like the rest of government and the private sector, has become extremely dependent on automated information systems. These systems have also become increasingly interconnected worldwide to form virtual communities in cyberspace. The Department calls its portion of this global community the Defense information infrastructure. To communicate and exchange unclassified information, Defense relies extensively on a host of commercial carriers and common user networks. This network environment offers Defense tremendous opportunities for streamlining operations and improving efficiency, but also greatly increases the risks of unauthorized access to information. As depicted in figure 1.1, the Department of Defense has a vast information infrastructure of computers and networks to protect including over 2.1 million computers, 10,000 local networks, 100 long-distance networks, 200 command centers, and 16 central computer processing facilities or MegaCenters. There are over 2 million Defense computer users and an additional two million non-Defense users that do business with the Department. As discussed in chapter 2, Defense systems contain very valuable and sensitive information including commercial transactions, payrolls, sensitive research data, intelligence, operational plans, procurement sensitive source selection data, health records, personnel records, and weapons systems maintenance records. This unclassified but sensitive information constitutes a majority of the information on Defense computers. The systems are attractive targets for individuals and organizations seeking monetary gain, or dedicated to damaging Defense and its operations. Generally, classified information such as war planning data or top secret research is safer from attack since it is (1) protected on computers isolated from outside networks, (2) encrypted, or (3) only transmitted on dedicated, secure circuits. The Internet is a global network interconnecting thousands of dissimilar computer networks and millions of computers worldwide. Over the past 20 years, it has evolved from its relatively obscure use by scientists and researchers to its significant role today as a popular, user-friendly, and cost-effective means of communication and information exchange. Millions of people conduct business over the Internet, and millions more use it for entertainment. Internet use has been more than doubling annually for the last several years to an estimated 40 million users in nearly every country today. Connections are growing at an ever increasing rate; the Internet is adding a new network about every 30 minutes. Because the Internet strives to be a seamless web of networks, it is virtually impossible today to distinguish where one network ends and another begins. Local, state, and federal government networks, for example, are interconnected with commercial networks, which in turn are interconnected with military networks, financial networks, networks controlling the distribution of electrical power, and so on. Defense itself uses the Internet to exchange electronic-mail, log on to remote computer sites worldwide, and to download and upload files from remote locations. During the conflict in the Persian Gulf, Defense used the Internet to communicate with U.S. allies and gather and disseminate intelligence and counter-intelligence information. 
Many Defense and information technology experts predict that Defense will increase its reliance on the Internet in the future. They believe that public messages originating within regions of conflict will provide warning of significant developments earlier than the more traditional indications and warnings obtained through normal intelligence gathering. They also envision the Internet as a back-up communications medium if other conventional channels are disrupted during conflicts. Though clearly beneficial, the Internet also poses serious computer security concerns for Defense and other government and commercial organizations. Increasingly, attempted break-ins and intrusions into their systems are being detected. Federal law enforcement agencies are likewise initiating more investigations of computer systems intrusions, based on the rising level of Internet-related security breaches and crimes. Similarly, security technologies and products are being developed and used to enhance Internet security. However, as new security tools are developed, hackers quickly learn how to defeat them or exploit other vulnerabilities. A variety of weaknesses can leave computer systems vulnerable to attack. For example, systems are vulnerable when (1) inexperienced or untrained users accidentally violate good security practices by inadvertently publicizing their passwords, (2) users choose weak passwords that can be easily guessed, or (3) identified security weaknesses go uncorrected. Malicious threats can be intentionally designed to unleash computer viruses, trigger future attacks, or install software programs that compromise or damage information and systems. Attackers use a variety of methods to exploit numerous computer system vulnerabilities. According to Defense, the three primary methods described below account for most of the successful attacks. Sendmail is a program commonly used to route electronic mail over the Internet. An attacker can install malicious code in an electronic mail message and mail it to a networked machine. Sendmail scans the message to find its address but, in doing so, also executes the attacker’s code. Because sendmail executes at the system’s root level, it has all system privileges and can, for example, enter a new password into the system’s password file, which gives the attacker total system privileges. Password cracking and theft is a technique in which attackers try to guess or steal passwords to obtain access to computer systems. This technique has been automated by attackers; rather than attackers trying to guess legitimate users’ passwords, computers can very efficiently and systematically do the guessing. For example, if the password is a dictionary word, a computer can quickly look up all possibilities to find a match. Complex passwords composed of alphanumeric characters are more difficult to crack. However, even with complex passwords, powerful computers can use brute force to compare all possible combinations of characters until a match is found. Of course, if attackers can create their own passwords in a system, as in the sendmail example above, they do not need to guess a legitimate one. Packet sniffing is a technique in which attackers surreptitiously insert a software program at remote network switches or host computers. The program monitors information packets as they are sent through networks and sends a copy of the information retrieved to the hacker.
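As a rough illustration of the automated password guessing described above, the hypothetical sketch below checks a stolen password hash against a small word list; the word list, hash choice, and function names are assumptions made for illustration, not features of an actual attack tool, and real tools simply perform the same comparison at much higher speed before falling back to brute force over all character combinations.

```python
# Minimal, illustrative sketch of dictionary-based password guessing.
# The hash function and word list are placeholders, not a real tool.
import hashlib

def hash_password(password):
    # Stand-in for whatever one-way hash the target system stores.
    return hashlib.sha256(password.encode()).hexdigest()

def dictionary_guess(stolen_hash, word_list):
    """Return the dictionary word that matches the stolen hash, if any."""
    for word in word_list:
        if hash_password(word) == stolen_hash:
            return word
    return None  # a brute-force pass over all character combinations would follow

words = ["missile", "eagle", "password", "defense"]
stolen = hash_password("defense")        # e.g., recovered from a compromised password file
print(dictionary_guess(stolen, words))   # prints "defense"
```

The sketch also shows why the complex passwords mentioned above raise an attacker’s cost: when no dictionary word matches, the search space expands from a short word list to every possible combination of characters.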
By picking up the first 125 keystrokes of a connection, attackers can learn passwords and user identifications, which, in turn, they can use to break into systems. Once they have gained access, attackers use the computer systems as though they were legitimate users. They steal information, both from the systems compromised as well as systems connected to them. Attackers also deny service to authorized users, often by flooding the computer system with messages or processes generated to absorb system resources, leaving little available for authorized use. Attackers have varied motives in penetrating systems. Some are merely looking for amusement; they break in to obtain interesting data, for the challenge of using someone else’s computers, or to compete with other attackers. They are curious, but not actively malicious, though at times they inadvertently cause damage. Others—known as computer vandals—are out to cause harm to particular organizations, and in doing so, attempt to ensure that their adversary knows about the attack. Finally, some attackers are professional thieves and spies who aim to break in, copy data, and leave without damage. Often, their attacks, because of the sophistication of the tools they use, go undetected. Defense is an especially attractive target to this type of attacker, because, for example, it develops and works with advanced research data and other information interesting to foreign adversaries or commercial competitors. Attackers use a variety of tools and techniques to identify and exploit system vulnerabilities and to collect information passing through networks, including valid passwords and user names for both local systems as well as remote systems that local users can access. As technology has advanced over the past two decades, so have the tools and techniques of those who attempt to break into systems. Figure 1.2 shows how the technical knowledge required by an attacker decreases as the sophistication of the tools and techniques increases. Some of the computer attack tools, such as SATAN, are now so user-friendly that very little computer experience or knowledge is required to launch automated attacks on systems. Also, informal hacker groups, such as the 2600 club, the Legions of Doom, and Phrackers Inc., openly share information on the Internet about how to break into computer systems. This open sharing of information combined with the availability of user-friendly and powerful attack tools makes it relatively easy for anyone to learn how to attack systems or to refine their attack techniques. The Ranking Minority Member, Senate Committee on Governmental Affairs; the Ranking Minority Member, Permanent Subcommittee on Investigations, Senate Committee on Governmental Affairs; and the Chairman, Subcommittee on National Security, International Affairs and Criminal Justice, House Committee on Government Reform and Oversight requested information on the extent to which Defense computer systems are being attacked, the damage attackers have caused, and the potential for more damage. We were also asked to assess Defense efforts to minimize intrusions into its computer systems. To achieve these objectives, we obtained documentation showing the number of recent attacks and results of tests conducted by Defense personnel to penetrate its own computer systems. We obtained data on actual attacks to show which systems were attacked, and how and when the attack occurred. 
We also obtained information available on the extent of damage caused by the attack and determined if Defense performed damage assessments. We obtained documentation that discusses the harm that outsiders have caused and can potentially cause to computer systems. We also assessed initiatives at Defense designed to defend against computer systems attacks. We reviewed the Department’s information systems security policies to evaluate their effectiveness in helping to prevent and respond to attacks. We discussed with Defense officials their efforts to provide information security awareness and training programs to Defense personnel. We obtained information on technical products and services currently available and planned to protect workstations, systems, and networks. We also obtained and evaluated information on obstacles Defense and others face in attempting to identify, apprehend, and prosecute those who attack computer systems. We interviewed officials and obtained documentation from the Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence, Washington, D.C.; Defense Information Systems Agency, Center for Information Systems Security, Washington, D.C.; Army, Navy, and Air Force Headquarters Offices, Washington, D.C.; National Security Agency, Ft. Meade, Maryland; Air Force Information Warfare Center, Kelly Air Force Base, San Antonio, Navy Fleet Information Warfare Center, Norfolk, Virginia; Air Force Office of Special Investigations, Bolling Air Force Base, Naval Criminal Investigative Service, Navy Yard, Washington, D.C.; Army Criminal Investigation Command, Ft. Belvoir, Virginia; Rome Laboratory, Rome, New York; Naval Research Laboratory, Washington, D.C.; Army Military Traffic Management Command, Falls Church, Virginia; Pentagon Single Agency Manager, Washington, D.C.; Wright-Patterson Air Force Base, Dayton, Ohio; Army Intelligence and Security Command, Ft. Belvoir, Virginia; Army 902d Military Intelligence Group, Ft. Meade, Maryland; Science Applications International Corporation, McLean, Virginia; and Department of Justice, Washington, D.C. We also interviewed officials and obtained data from the Computer Emergency Response Team Coordination Center, Software Engineering Institute, Carnegie-Mellon University, Pittsburgh, Pennsylvania. In response to computer security threats, Defense established the Coordination Center in 1988, to support users of the Internet. The Center works with the Internet community to detect and resolve computer security incidents and to prevent future incidents. Our review was conducted from September 1995 to April 1996 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Department of Defense for comment. On May 15, 1996, we discussed the facts, conclusions, and recommendations with cognizant Defense officials. Their comments are presented and evaluated in chapter 4 and have been incorporated where appropriate. To operate more effectively in a technologically sophisticated world, Defense is moving from a computing environment of stand-alone information systems that perform specific functions to a globally integrated information structure. In doing so, it has linked thousands of computers to the Internet as well as other networks and increased its dependence on computer and network technology to carry out important military functions worldwide. 
As a result, some operations would now be crippled if (1) the supporting technology failed or (2) information was stolen or destroyed. For example: Defense cannot locate or deliver supplies promptly without properly functioning inventory and logistics systems; Defense relies heavily on computer technology—especially a network of simulators that emulate complex battle situations—to train staff; it is impossible to pay, assign, move, or track people without globally networked information systems; Defense cannot control costs, pay vendors, let or track contracts, allocate or release funds, or report on activities without automation; and Defense systems handle billions of dollars in financial transactions for pay, contract reimbursement, and economic commerce. Defense systems are enticing targets for attackers for several reasons. Attackers seeking financial gain may want to access financial systems to direct fraudulent payments, transfer money between accounts, submit fictitious claims, direct orders for unneeded products, or wipe out an entire organization’s budget. Companies doing business with Defense may want to strengthen their competitive position by accessing systems that contain valuable information about billions of dollars worth of sophisticated research and development data and information on contracts and evaluation criteria. Enemies may want to better position themselves against our military by stealing information on force locations and plans for military campaigns and use this data to locate, target, or misdirect forces. Although no one knows the exact number, DISA estimates show that Defense may have experienced about 250,000 attacks last year, and that the number of attacks is increasing. Establishing an exact count of attacks is difficult since some attackers take measures to avoid detection. In addition, the Department does not detect or react to most attacks, according to DISA, and does not report the majority of attacks it does detect. Estimates of the number of computer attacks are based on DISA’s Vulnerability Analysis and Assessment Program. Under this program, DISA personnel attempt to penetrate computer systems at various military service and Defense agency sites via the Internet. Since the program’s inception in 1992, DISA has conducted 38,000 attacks on Defense computer systems to test how well they were protected. DISA successfully gained access 65 percent of the time (see figure 2.1). Of these successful attacks, only 988 or about 4 percent were detected by the target organizations. Of those detected, only 267 attacks or roughly 27 percent were reported to DISA. Therefore, only about 1 in 150 successful attacks drew an active defensive response from the organizations being tested. Reasons for Defense’s poor detection rates are discussed in chapter 3. The Air Force conducts similar vulnerability assessments. Its data shows better success in detecting and reacting to attacks than DISA’s data. However, Defense officials generally acknowledge that, because the Air Force’s computer emergency response team resources are larger and more experienced, they have had better success in detecting and reacting to attacks than either the Navy or Army. DISA also maintains data on officially reported attacks. Defense installations reported 53 attacks in 1992, 115 in 1993, 255 in 1994, and 559 in 1995. Figure 2.2 shows this historical data on the number of officially reported attacks and projections for future attack activity. 
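The detection and reporting rates cited above follow from simple arithmetic on DISA’s reported figures; the short sketch below merely reproduces that calculation and introduces no data beyond the numbers already given.

```python
# Recomputes the approximate rates cited for DISA's Vulnerability Analysis
# and Assessment Program (illustrative arithmetic only).
test_attacks = 38000
success_rate = 0.65      # share of test attacks that gained access
detected = 988           # successful test attacks noticed by the target sites
reported = 267           # detected attacks that were reported to DISA

successful = test_attacks * success_rate
print(f"successful attacks: about {successful:,.0f}")           # about 24,700
print(f"detected: {detected / successful:.0%} of successful")   # about 4%
print(f"reported: {reported / detected:.0%} of detected")       # about 27%
```

Multiplying the detection rate by the reporting rate is what yields the very small share of successful attacks that ever draw a defensive response.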
According to Defense officials, attacks on Department computer systems have been costly and considerably damaging. Attackers have stolen, modified, and destroyed both data and software. They have installed unwanted files and “back doors” that circumvent normal system protection and allow attackers unauthorized access in the future. They have shut down entire systems and networks, thereby denying service to users who depend on automated systems to help meet critical missions. Numerous Defense functions have been adversely affected, including weapons and supercomputer research, logistics, finance, procurement, personnel management, military health, and payroll. Following are examples of attacks to date. The first attack we highlight, on Rome Laboratory, New York, was well documented by Defense and of particular concern to the committees requesting this report because it shows how a small group of hackers can easily and quickly take control of Defense networks. Rome Laboratory, New York, is the Air Force’s premier command and control research facility. The facility’s research projects include artificial intelligence systems, radar guidance systems, and target detection and tracking systems. The laboratory works cooperatively with academic institutions, commercial research facilities, and Defense contractors in conducting its research and relies heavily on the Internet in doing so. During March and April 1994, more than 150 Internet intrusions were made on the Laboratory by a British hacker and an unidentified hacker. The attackers used Trojan horses and sniffers to access and control Rome’s operational network. As depicted in figure 2.3, they also took measures to prevent a complete trace of their attack. Instead of accessing Rome Laboratory computers directly, they wove their way through various phone switches in South America, through commercial sites on the east and west coasts, and then to the Rome Laboratory. The attackers were able to seize control of Rome’s support systems for several days and establish links to foreign Internet sites. During this time, they copied and downloaded critical information such as air tasking order systems data. By masquerading as a trusted user at Rome Laboratory, they were also able to successfully attack systems at other government facilities, including the National Aeronautics and Space Administration’s (NASA) Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations. Figure 2.3 illustrates the route the hackers took to get to the Rome Laboratory computers and the computer sites they successfully attacked from Rome. Because the Air Force did not know for at least 3 days that it had been attacked, vast damage to Rome Laboratory systems and the information in those systems could potentially have occurred. As stated in the Air Force report on the incident, “We have only the intruders to thank for the fact that no lasting damage occurred. Had they decided, as a skilled attacker most certainly will, to bring down the network immediately after the initial intrusion, we would have been powerless to stop them.” However, the Air Force does not actually know whether any lasting damage occurred. Furthermore, because one of the attackers was never caught, investigators do not know what was done with the copied data. The Air Force Information Warfare Center (AFIWC) estimated that the attacks cost the government over $500,000 at the Rome Laboratory alone.
Their estimate included the time spent taking systems off the networks, verifying systems integrity, installing security patches, and restoring service, and costs incurred by the Air Force’s Office of Special Investigations and Information Warfare Center. It also included estimates for time and money lost because the Laboratory’s research staff were not able to use their computer systems. However, the Air Force did not include the cost of the damage at other facilities attacked from the Rome Laboratory or the value of the research data that was compromised, copied, and downloaded by the attacker. For example, Rome Laboratory officials said that over 3 years of research and $4 million were invested in the air tasking order research project compromised by the attackers, and that it would have cost that much to replace it if they had been unable to recover from damage caused by the attackers. Similarly, Rome Laboratory officials told us that all of their research data is valuable but that they do not know how to estimate this value. There also may have been some national security risks associated with the Rome incident. Air Force officials told us that at least one of the hackers may have been working for a foreign country interested in obtaining military research data or information on areas in which the Air Force was conducting advanced research. In addition, Air Force Information Warfare Center officials told us that the hackers may have intended to install malicious code in software that could be activated years later, possibly jeopardizing a weapons system’s ability to perform safely and as intended, and even threatening the lives of the soldiers or pilots operating the system. The U.S. Naval Academy’s computer systems were penetrated by unknown attackers in December 1994. The intrusions originated from Great Britain, Finland, Canada, the University of Kansas, and the University of Alabama. During the attack, 24 servers were accessed and sniffer programs were installed on 8 of these. A main router was compromised, and a system’s name and address were changed, making the system inaccessible to authorized users. In addition, one system back-up file and files from four other systems were deleted. Six other systems were corrupted, two encrypted password files were compromised, and over 12,000 passwords were changed. The Navy did not determine how much the attack cost, and Navy investigators were unable to identify the attacker(s). At a minimum, however, the attack caused considerable disruption to the Academy’s ability to process and store sensitive information. Between April 1990 and May 1991, hackers from the Netherlands penetrated computer systems at 34 Defense sites. The hackers browsed directories and modified systems to obtain full privileges allowing them future access. They read e-mail, in some cases searching the messages for key words such as nuclear, weapons, missile, Desert Shield, and Desert Storm. In several instances, the hackers copied and stored military data on systems at major U.S. universities. After the attacks, the hackers modified system logs to avoid detection and to remove traces of their activities. We testified on these attacks before the Subcommittee on Government Information and Regulation, Senate Committee on Governmental Affairs, on November 20, 1991. In 1995 and 1996, an attacker from Argentina used the Internet to access a U.S.
university system, and from there broke into computer networks at the Naval Research Laboratory, other Defense installations, NASA, and Los Alamos National Laboratory. The systems at these sites contained sensitive research information, such as aircraft design, radar technology, and satellite engineering, that is ultimately used in weapons and command and control systems. The Navy could not determine what information was compromised and did not attempt to determine the cost of the incident. Unknown person(s) accessed two unclassified computer systems at the Army Missile Research Laboratory, White Sands Missile Range and installed a sniffer program. The intruder was detected entering the systems a second and third time, but the sniffer program was removed before the intruder could be identified. The missile range’s computer systems contain sensitive data, including test results on the accuracy and reliability of sophisticated weaponry. As with the case above, the Army could not determine what data was compromised. However, such data could prove very valuable to foreign adversaries. While these are specific examples, Defense officials say they reflect the thousands of attacks experienced every year. Although no one has attempted to determine the total cost of responding to these attacks, Defense officials agreed the cost of these incidents is significant and probably totals tens or even hundreds of millions of dollars per year. Such costs should include (1) detecting and reacting to attacks, repairing systems, and checking to ensure the integrity of information, (2) lost productivity due to computer shutdowns, (3) tracking, catching, and prosecuting attackers, and (4) the cost and value of information compromised. Because so few incidents are actually detected and reported, no one knows the full extent of damage caused by computer attacks. However, according to many Defense and private sector experts, the potential for catastrophic damage is great given (1) the known vulnerabilities of the Department’s command and control, military research, logistics, and other systems, (2) weaknesses in national information infrastructure systems, such as public networks which Defense depends upon, and (3) the threat of terrorists or foreign nationals using sophisticated offensive information warfare techniques. They believe that attackers could disrupt military operations and threaten national security by successfully compromising Defense information and systems or denying service from vital commercial communications backbones or power systems. The National Security Agency (NSA) has acknowledged that potential adversaries are developing a body of knowledge about the Defense’s and other U.S. systems, and about methods to attack these systems. According to NSA, these methods, which include sophisticated computer viruses and automated attack routines, allow adversaries to launch untraceable attacks from anywhere in the world. In some extreme scenarios, experts state that terrorists or other adversaries could seize control of Defense information systems and seriously degrade the nation’s ability to deploy and sustain military forces. The Department of Energy and NSA estimate that more than 120 countries have established computer attack capabilities. In addition, most countries are believed to be planning some degree of information warfare as part of their overall security strategy. 
At the request of the Office of the Secretary of Defense for Command, Control, Communications and Intelligence, the Rand Corporation conducted exercises known as "The Day After . . ." between January and June 1995 to simulate an information warfare attack. Senior members of the national security community and representatives from national security-related telecommunications and information systems industries participated in evaluating and responding to a hypothetical conflict between an adversary and the United States and its allies in the year 2000. In the scenario, an adversary attacks computer systems throughout the United States and allied countries, causing accidents, crashing systems, blocking communications, and inciting panic. For example, in the scenario, automatic tellers at two of Georgia's largest banks are attacked. The attacks create confusion and panic when the automatic tellers wrongfully add and debit thousands of dollars from customers' accounts. A freight train is misrouted when a logic bomb is inserted into a railroad computer system, causing a major accident involving a high-speed passenger train in Maryland. Meanwhile, telephone service is sabotaged in Washington; a major airplane crash is caused in Great Britain; and Cairo, Egypt, loses all power service. An all-out attack is launched on computers at most military installations, slowing down, disconnecting, or crashing the systems. Weapons systems designed to pinpoint enemy tanks and troop formations begin to malfunction due to electronic infections. The exercises were designed to assess the plausibility of information warfare scenarios and help define key issues to be addressed in this area. The exercises highlighted some defining features of information warfare, including the fact that attack mechanisms and techniques can be acquired with relatively modest investment. The exercises also revealed that no adequate tactical warning system exists for distinguishing between information warfare attacks and accidents. Perhaps most importantly, the study demonstrated that because the U.S. economy, society, and military rely increasingly on a high-performance networked information infrastructure, this infrastructure presents a set of attractive strategic targets for opponents who possess information warfare capabilities. A separate report on the threat warned that "there is mounting evidence that there is a threat that goes beyond hackers and criminal elements. This threat arises from terrorist groups or nation states, and is far more subtle and difficult to counter than the more unstructured but growing problem caused by hackers. The threat causes concern over the specter of military readiness problems caused by attacks on Defense computer systems, but it goes well beyond the Department. Every aspect of modern life is tied to a computer system at some point, and most of these systems are relatively unprotected. This is especially so for those tied to the NII (National Information Infrastructure)." The report added that a large structured attack with strategic intent against the United States could be prepared and exercised under the guise of unstructured activities and that such an attack could "cripple U.S. operational readiness and military effectiveness." These studies demonstrate the growing potential threat to national security posed by computer attacks. Information warfare will increasingly become an inexpensive but highly effective tactic for disrupting military operations.
As discussed in chapter 3, successfully protecting information and detecting and reacting to computer attacks present Defense and our nation with significant challenges. The task of precluding unauthorized users from compromising the confidentiality, integrity, or availability of information is increasingly difficult given the complexity of Defense's information infrastructure, the growth of and reliance on outside networks, including the Internet, and the increasing sophistication of the attackers and their tools. Absolute protection of Defense information is neither practical nor affordable. Instead, Defense must turn to risk management to ensure computer security. In doing so, however, it must make tradeoffs that consider the magnitude of the threat, the value and sensitivity of the information to be protected, and the cost of protecting it. In our review of key studies and security documents and in discussions with Defense security experts, certain core elements emerged as critical to effective information system security. A good computer security program begins with top management's understanding of the risks associated with networked computers, and a commitment that computer security will be given a high priority. At Defense, management attention to computer security has been uneven. The Defense information infrastructure has evolved into a set of individual computer systems and interconnected networks, many of which were developed without sufficient attention to the entire infrastructure. While some local area networks and Defense installations have excellent security programs, others do not. However, the overall infrastructure is only as secure as the weakest link. Therefore, all components of the Defense infrastructure must be considered when making investment decisions. In addition, policies and procedures must also reflect this philosophy and guide implementation of the Department's overall security program as well as the security plans for individual Defense installations. The policies should set minimum standards and requirements for key security activities and clearly assign responsibility and accountability for ensuring that they are carried out. Further, sufficient personnel, training, and resources must be provided to implement these policies. While not intended to be a comprehensive list, the following are security activities that all of the security studies and experts agreed were important: (1) clear and consistent information security policies and procedures, (2) vulnerability assessments to identify security weaknesses at individual Defense installations, (3) mandatory correction of identified network/system security weaknesses, (4) mandatory reporting of attacks to help better identify and communicate vulnerabilities and needed corrective actions, (5) damage assessments to reestablish the integrity of the information compromised by an attacker, (6) awareness training to ensure that computer users understand the security risks associated with networked computers and practice good security, (7) assurance that network managers and system administrators have sufficient time and training to do their jobs, (8) prudent use of firewalls, smart cards, and other technical solutions, and (9) an incident response capability to aggressively detect and react to attacks and track and prosecute attackers. Defense itself has acknowledged these risks: "The vulnerability to . . . systems and networks is increasing . . .
The ability of individuals to penetrate computer networks and deny, damage, or destroy data has been demonstrated on many occasions. . . As our warfighters become more and more dependent on our information systems, the potential for disaster is obvious." In addition, as part of its Federal Managers' Financial Integrity Act requirements, the Department identified information systems security as a system weakness in its Fiscal Year 1995 Annual Statement of Assurance, a report documenting high-risk areas requiring management attention. In its statement, Defense acknowledged a significant increase in attacks on its information systems and its dependence on these systems. Also, Defense has implemented a formal defensive information warfare program. This program was started in December 1992 through Defense Directive 3600.1. The directive broadly states that measures will be taken as part of this program to "protect friendly information systems by preserving the availability, integrity, and confidentiality of the systems and the information contained within those systems." DISA, in cooperation with the military services and defense agencies, is responsible for implementing the program. The Department's December 1995 Defensive Information Warfare Management Plan defines a three-pronged approach to protect against, detect, and react to threats to the Defense information infrastructure. The plan states that Defense must monitor and detect intrusions or hostile actions as they occur, react quickly to isolate the systems under attack, correct the security breaches, restore service to authorized users, and improve security. DISA has also taken a number of actions to implement its plan, the most significant being the establishment of its Global Control Center at DISA headquarters. The center provides the facilities, equipment, and personnel for directing the defensive information warfare program, including detecting and responding to computer attacks. DISA has also established its Automated Systems Security Incident Support Team (ASSIST) to provide a centrally coordinated around-the-clock Defense response to attacks. DISA also performs other services to help secure Defense's information infrastructure, including conducting assessments of Defense organizations' vulnerability to computer attacks. AFIWC has developed a computer emergency response capability and performs functions similar to DISA's. The Navy and Army have just established similar capabilities through the Fleet Information Warfare Center (FIWC) and Land Information Warfare Activity (LIWA), respectively. Defense is incorporating some of the elements we describe above as necessary for strengthening information systems security and countering computer attacks, but there are still areas where improvement is needed. Even though the technology environment has changed dramatically in recent years, and the risk of attacks has increased, top management at many organizations does not consider computer security to be a priority. As a result, when resources are allocated, funding for important protective measures, such as training or the purchase of protection technology, takes a back seat. As discussed in the remainder of this chapter, Defense needs to establish a more comprehensive information systems security program, one that ensures that sufficient resources are directed at protecting information systems.
Specifically, (1) Defense’s policies for protecting, detecting, and reacting to computer attacks are outdated and incomplete, (2) computer users are often unaware of system vulnerabilities and weak security practices, (3) system and network administrators are not adequately trained and do not have sufficient time to perform their duties, (4) technical solutions to security problems show promise, but these alone cannot guarantee protection, and (5) while Defense’s incident response capability is improving, it is not sufficient to handle the increasing threat. The military services and Defense agencies have issued a number of information security policies, but they are dated, inconsistent, and incomplete. At least 45 separate Defense policy documents address various computer and information security issues. The most significant Defense policy documents include Defense Directive 3600.1, discussed above, and Defense Directive 5200.28, entitled Security Requirements for Automated Information Systems, dated March 21, 1988, which provides mandatory minimum information systems security requirements. In addition, Defense Directive 8000.1, entitled Defense Information Management Program, dated October 27, 1992, requires DISA and the military services to provide technology and services to ensure the availability, reliability, maintainability, integrity, and security of Defense information. However, these and other policies relating to computer attacks are outdated and inconsistent. They do not set standards, mandate specific actions, or clearly assign accountability for important security activities such as vulnerability assessments, internal reporting of attacks, correction of vulnerabilities, or damage assessments. Shortcomings in Defense’s computer security policy have been reported previously. The Joint Security Commission found similar problems in 1994, and noted that Defense’s policies in this area were developed when computers were physically and electronically isolated. Consequently, the Commission reported that Defense information security policies were not suitable for today’s highly networked environment. The Commission also found that Defense policy was based on a philosophy of complete risk avoidance, rather than a more realistic and balanced approach of risk reduction. In addition, the Commission found a profusion of policy formulation authorities within Defense. This has led to policies being developed which create inefficiencies and implementation problems when organizations attempt to coordinate and interconnect their computer systems. Defense policies do not specifically require the following important security activities. Vulnerability Assessments: DISA established a Vulnerability Analysis and Assessment Program in 1992 to identify vulnerabilities in Defense information systems. The Air Force and Navy have similar programs, and the Army plans to begin assessing its systems next year. Under its program, DISA attempts to penetrate selected Defense information systems using various techniques, all of which are widely available on the Internet. DISA personnel attack vulnerabilities which have been widely publicized in their alerts to the military services and defense agencies. Assessment is performed at the request of the targeted Defense installation, and, upon completion, systems and security personnel are given a detailed briefing. 
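The report does not describe the specific techniques DISA's assessment teams use. Purely as an illustration of the kind of widely available probing such assessments involve, the following Python sketch checks whether a handful of common network services answer on a host and records any banner text they return; the host name and port list are hypothetical examples, not part of any Defense assessment.

    # Purely illustrative sketch, not DISA's actual assessment tooling: check whether
    # a few common network services answer on a host and record any banner text they
    # volunteer, the kind of information an assessment team reviews for outdated or
    # misconfigured software. The host name and port list are hypothetical.
    import socket

    COMMON_PORTS = {21: "ftp", 23: "telnet", 25: "smtp", 80: "http", 110: "pop3"}

    def probe_host(host, timeout=3.0):
        """Return (port, service, banner) tuples for ports that accept a connection."""
        findings = []
        for port, service in COMMON_PORTS.items():
            try:
                with socket.create_connection((host, port), timeout=timeout) as conn:
                    conn.settimeout(timeout)
                    try:
                        banner = conn.recv(256).decode(errors="replace").strip()
                    except socket.timeout:
                        banner = ""  # service accepted the connection but sent nothing
                    findings.append((port, service, banner))
            except OSError:
                continue  # port closed, filtered, or host unreachable
        return findings

    if __name__ == "__main__":
        # "testhost.example.mil" is a placeholder, not a real system.
        for port, service, banner in probe_host("testhost.example.mil"):
            print(f"open {service}/{port}: {banner or '(no banner)'}")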
Typically, DISA and the installation develop a plan to strengthen the site's defenses, more effectively detect intrusions, and determine whether systems administrators and security personnel are adequately experienced and trained. Air Force and Navy on-line assessments are similar to DISA vulnerability assessments. However, there is no specific Defensewide policy requiring vulnerability assessments or criteria for prioritizing who should be targeted first. This has led to uneven application of this valuable risk assessment mechanism. Some installations have been tested multiple times while others have never been tested. As of March 1996, vulnerability assessments had been performed on less than 1 percent of the thousands of defense systems around the world. DISA and the military services recognize this shortcoming, but state that they do not have sufficient resources to do more. This is a concern because vulnerabilities in one part of Defense's information infrastructure make the entire infrastructure vulnerable. Correction of Vulnerabilities: Defense does not have any policy requirement for correcting identified deficiencies and vulnerabilities. Defense's computer emergency response teams—ASSIST, AFIWC, FIWC, and LIWA—as well as the national computer emergency response team at the Software Engineering Institute routinely identify and broadcast to Defense network administrators system vulnerabilities and suggested fixes. However, the lack of specific requirements for correcting known vulnerabilities has led to no action or inconsistent action on the part of some Defense organizations and installations. Reporting Attacks: The Department also has no policy requiring internal reporting of attacks or guidance on how to respond to attacks. System and network administrators need to know when and to whom attacks should be reported and what response is appropriate for reacting to attacks and ensuring systems availability, confidentiality, and integrity. Reporting attacks is important for Defense to identify and understand the threat, i.e., the size, scale, and type of attack, as well as to measure the magnitude of the problem for appropriate corrective action and resource allocation. Further, since a computer attack on a federal facility is a crime, it should be reported. Damage Assessments: There is no policy for Defense organizations to assess damage to their systems once an attack has been detected. As a result, these assessments are not usually done. For example, Air Force officials told us that the Rome Laboratory incident was the exception rather than the rule. They said that system and network administrators, due to lack of time and money, often simply "patch" their systems, restore service, and hope for the best. However, these assessments are essential to ensure the integrity of the data in those systems and to make sure that no malicious code was inserted that could cause severe problems later. The Software Engineering Institute's Computer Emergency Response Team estimates that at least 80 percent of the security problems it addresses involve passwords that computer users have chosen or protected poorly. According to the Institute, many computer users do not understand the technology they are using, the vulnerabilities in the network environment they are working in, and the responsibilities they have for protecting critical information. They also often do not understand the importance of knowing and implementing good security policies, procedures, and techniques.
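To illustrate the kind of basic screening that addresses the password weaknesses cited above, the following Python sketch rejects a proposed password for a few common reasons. The word list and rules are hypothetical examples for illustration only; they are not drawn from Defense or Software Engineering Institute guidance.

    # Purely illustrative sketch of a minimal screen for the kinds of weak password
    # choices cited above. The word list and rules are hypothetical examples,
    # not Defense or Software Engineering Institute policy.
    COMMON_WORDS = {"password", "letmein", "welcome", "admin", "guest"}

    def password_weaknesses(password, username=""):
        """Return a list of reasons a proposed password should be rejected."""
        problems = []
        if len(password) < 8:
            problems.append("shorter than 8 characters")
        if password.lower() in COMMON_WORDS:
            problems.append("appears on a list of commonly used passwords")
        if username and username.lower() in password.lower():
            problems.append("contains the user's account name")
        if password.isalpha() or password.isdigit():
            problems.append("uses only letters or only digits")
        return problems

    # Example: a short, common, letters-only password is rejected for several reasons.
    print(password_weaknesses("guest", "jsmith"))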
Defense officials generally agreed that user awareness training was needed, but stated that installation commanders do not always understand computer security risk and, thus, do not always devote sufficient resources to the problem. The officials told us they are trying to overcome the lack of resources by low cost alternatives such as banners that warn individuals of their security responsibilities when they turn on their computers. In addition, network and system administrators often do not know what their responsibilities are for protecting their systems, and for detecting and reacting to intrusions. Critical computer security responsibilities are often assigned to personnel as additional or ancillary duties. We interviewed 24 individuals responsible for managing and securing systems at four military installations. Sixteen stated that they did not have enough time, experience, or training to do their jobs properly. In addition, eight stated that system administration was not their full-time job, but rather an ancillary duty. Our findings were confirmed by an Air Force survey of system administrators. It found that 325 of 709 respondents were unaware of procedures for reporting vulnerabilities and incidents, 249 of 515 respondents had not received any network security training, and 377 of 706 respondents reported that their security responsibilities were ancillary duties. In addition, Defense officials stated that it is not uncommon for installations to lack a full-time, trained, experienced information systems security officer. Security officers generally develop and update the site’s security plan, enforce security statutes and policy, aggregate and report all security incidents and changes in the site’s security status, and evaluate security threats and vulnerabilities. They also coordinate computer security with physical and personnel security, develop back-up and contingency plans, manage access to all information systems with sound password and user identification procedures, ensure that audit trails of log-ins to systems are maintained and analyzed, and perform a host of other duties necessary to secure the location’s computer systems. Without a full-time security official, these important security activities are usually done in an ad hoc manner or not done at all. Defense officials again cited the low priority installation commanders give security duties as the reason for the lack of full-time, trained, experienced security officers. Defense has developed training courses and curricula which focus on the secure operation of computer systems and the need to protect information. For example, DISA’s Center for Information Systems Security offers courses on the vulnerability of networks and computer systems security. Each of the military services also provides training in this area. While we did not assess the quality of the training, it is clear that not enough training is done. Defense officials cite resource constraints as the reason for this limitation. To illustrate, in its August 1995 Command and Control Protect Program Management Plan, the Army noted that it had approximately 4000 systems administrators, but few of these had received formal security training. The plan stated that the systems administrators have not been taught security basics such as how to detect and monitor an active intrusion, establish countermeasures, or respond to an intrusion. 
The plan added that a single course is being developed to train systems administrators, but that no funds are available to conduct the training. This again demonstrates the low priority top Defense management officials often give security. One review of the Department's security practices reached a similar conclusion: "Because of a lack of qualified personnel and a failure to provide adequate resources, many information systems security tasks are not performed adequately. Too often critical security responsibilities are assigned as additional or ancillary duties." The report added that the Department lacks comprehensive, consistent training for information systems security officers, and that Defense's current information systems security training efforts produce inconsistent training quality and, in some cases, a duplication of effort. The report concluded that, despite the importance of security awareness, training, and education programs, these programs tend to be frequent and ready targets for budget cuts. According to Defense officials, installation commanders may not understand the risks associated with networked computers, and thus may not have devoted sufficient priority or resources to address these problems. These officials also cite the lack of a professional job series for information security officials as a contributing factor to poor security practices at Defense installations. Until systems security is supported by the personnel system, including the potential for advancement, financial reward, and professional training, it will not be a full-time duty. As a result, security will continue to be the purview of part-time, inadequately trained personnel. As described below, Defense and the private sector are developing a variety of technical solutions which should assist the Department in preventing, detecting, and reacting to attacks on its computer systems. However, knowledgeable attackers with the right tools can defeat these technologies. Therefore, these should not be an entity's sole means of defense. Rather, they should be prudently used in conjunction with other security measures discussed in this chapter. Investment in these technologies should also be based on a comprehensive assessment of the value and sensitivity of the information to be protected. One important technology is a smart card called Fortezza. The card and its supporting equipment, including card readers and software, were developed by the NSA. The card is based on the Personal Computer Memory Card International Association industry standard and is a credit-card-size electronic module which stores digital information that can be recognized by a network or system. The card will be used by Defense and other government agencies to provide data encryption and authentication services. Defense plans to use the card in its Defense Message System and other systems around the world. Another technology that Defense is implementing is firewalls. Firewalls are hardware and software components that protect one set of system resources from attack by outside network users by blocking and checking all incoming network traffic. Firewalls permit authorized users to access and transmit privileged information and deny access to unauthorized users. Several large commercial vendors have developed firewall applications which Defense is using and tailoring for specific organizations' computing and communications needs and environments. Like any technology, firewalls are not perfect; hackers have successfully circumvented them in the past.
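As a simplified illustration of the packet-filtering approach just described, the following Python sketch checks each incoming connection attempt against an ordered rule list and applies the first rule that matches, denying traffic by default. The networks, ports, and rules shown are hypothetical and are not drawn from any actual Defense firewall configuration.

    # Simplified illustration of packet filtering: each incoming connection attempt is
    # checked against an ordered rule list, and the first matching rule decides whether
    # the traffic is allowed or blocked. Networks, ports, and rules are hypothetical.
    import ipaddress

    # Each rule: (permitted source network, destination port or None for "any", action)
    RULES = [
        (ipaddress.ip_network("192.0.2.0/24"), 25, "allow"),   # mail from a trusted network
        (ipaddress.ip_network("192.0.2.0/24"), 80, "allow"),   # web traffic from the same network
        (ipaddress.ip_network("0.0.0.0/0"), None, "deny"),     # default: block everything else
    ]

    def filter_packet(source_ip, dest_port):
        """Return 'allow' or 'deny' for an incoming connection attempt."""
        addr = ipaddress.ip_address(source_ip)
        for network, port, action in RULES:
            if addr in network and (port is None or port == dest_port):
                return action
        return "deny"  # fail safe if no rule matches

    print(filter_packet("192.0.2.14", 80))    # allow
    print(filter_packet("203.0.113.9", 23))   # deny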
They should not be an installation's sole means of defense, but should be used in conjunction with the other technical, physical, and administrative solutions discussed in this chapter. Many other technologies exist or are being developed that DISA, NSA, and the military services are using or considering for future use. These include automated biometrics systems which examine an individual's physiological or behavioral traits and use that information to identify an individual. Biometrics systems that examine fingerprints, retina patterns, voice patterns, signatures, and keystroke patterns are available today and are being refined for future applications. In addition, a technology in development called location-based authentication may help thwart attackers by pinpointing their location. This technology determines the actual geographic location of a user attempting to access a system. For example, if developed and implemented as planned, it could prevent a hacker in a foreign country, pretending to come from a military installation in the United States, from logging into a Defense system. These technical products show promise in protecting Defense systems and information from unauthorized users. However, they are expensive: firewalls can cost from $5,000 to $40,000 for each Internet access point, and Fortezza cards and related support could cost about $300 for each computer. They also require consistent and departmentwide implementation to be successful; continued development to enhance their utility; and personnel with the requisite skills and training to use them appropriately. Once again, no single technical solution is foolproof and, thus, combinations of protective mechanisms should be used. Decisions on which mechanisms to use should be based on an assessment of the threat, the sensitivity of the information to be protected, and the cost of protection. Because absolute security is not possible and some attacks will succeed, an aggressive incident response capability is a key element of a good security program. Defense has several organizations whose primary mission is incident response, that is, quickly detecting and reacting to computer attacks. These organizations (DISA's Center for Information Systems Security, ASSIST, and the military service teams), discussed previously in this chapter, provide network monitoring and incident response services to military installations. The AFIWC, with its Computer Emergency Response Team and Countermeasures Engineering Team, was established in 1993 and has considerably greater experience and capability than the other military services. Recognizing the need for more incident response capability, the Navy established the FIWC in 1995, and the Army established its LIWA this year. However, these organizations are not all fully staffed and do not have the capability to respond to all reported incidents, much less the incidents not reported. For example, when the FIWC was established last year, 30 personnel slots were requested, but only 3 were granted. Similarly, the LIWA is just beginning to build its capability. Rapid detection and reaction capabilities are essential to effective incident response. Defense is installing devices at numerous military sites to automatically monitor attacks on its computer systems. For example, the Air Force has a project underway called Automated Security Incident Measurement (ASIM) which is designed to measure the level of unauthorized activity against its systems.
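The report describes ASIM only in general terms. Purely as an illustration of one kind of check such a monitoring tool might perform, the following Python sketch flags connection records whose source addresses fall outside the networks normally expected to reach an installation; the log format, field names, and address ranges are hypothetical and do not reflect ASIM's actual design.

    # Purely illustrative sketch of one check an automated monitoring tool might make:
    # flag connection records whose source addresses fall outside the networks normally
    # expected to reach an installation. The log format, field names, and address
    # ranges are hypothetical.
    import ipaddress

    EXPECTED_NETWORKS = [
        ipaddress.ip_network("192.0.2.0/24"),     # the installation's own network (example)
        ipaddress.ip_network("198.51.100.0/24"),  # a partner site (example)
    ]

    def unexpected_connections(log_records):
        """Yield (timestamp, source, destination) for records from unexpected sources."""
        for timestamp, source, destination in log_records:
            addr = ipaddress.ip_address(source)
            if not any(addr in net for net in EXPECTED_NETWORKS):
                yield (timestamp, source, destination)

    sample_log = [
        ("02:14", "192.0.2.45", "mailhost"),            # expected source
        ("03:02", "203.0.113.77", "research-server"),   # outside the expected ranges
    ]
    for record in unexpected_connections(sample_log):
        print("review:", record)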
Under this project, several automated tools are used to examine network activity and detect and identify unusual network events, for example, Internet addresses not normally expected to access Defense computers. These tools have been installed at only 36 of the 108 Air Force installations around the world. Selection of these installations was based on the sensitivity of the information, known system vulnerabilities, and past hacker activity. Data from the ASIM is analyzed by personnel responsible for securing the installation's network. Data is also centrally analyzed at the AFIWC in San Antonio, Texas. Air Force officials at AFIWC and at Rome Laboratory told us that ASIM has been extremely useful in detecting attacks on Air Force systems. They added, however, that as currently configured, ASIM information is only accumulated and automatically analyzed nightly. As a result, a delay occurs between the time an incident occurs and the time when ASIM provides information on the incident. They also stated that ASIM is currently configured for selected operating systems and, therefore, cannot detect activity on all Air Force computer systems. They added that they plan to continue refining the ASIM to broaden its use for other Air Force operating systems and enhance its ability to provide data on unauthorized activity more quickly. AFIWC officials believe that a well-publicized detection and reaction capability can be a successful deterrent to would-be attackers. The Army and Navy are also developing similar devices, but they have been implemented in only a few locations. The Army's system, known as the Automated Intrusion Monitoring System (AIMS), has been in development since June 1995, and is intended to provide both local and theater-level monitoring of computer attacks. Currently, AIMS is installed at the Army's 5th Signal Command in Worms, Germany, and will be used to monitor Army computers scattered throughout Europe. DISA officials told us that although the services' automated detection devices are good tools, they need to be refined to allow Defense to detect unauthorized activity as it is occurring. DISA's Defensive Information Warfare Management Plan provides information on new or improved technology and programs planned for the next 1 to 5 years. These efforts include a more powerful intrusion detection and monitoring program, a malicious code detection and eradication program, and a program for protecting Defense's vast information infrastructure. These programs, if developed and implemented as planned, should enhance Defense's ability to protect against and react to attacks on its computer systems. Networked computer systems offer tremendous potential for streamlining and improving the efficiency of Defense operations. However, they also greatly increase the risks that information systems supporting critical Defense functions will be attacked. The hundreds of thousands of attacks that Defense has already experienced demonstrate that (1) attackers can inflict significant damage and (2) attacks pose serious risks to national security. They also show that top management attention at all levels and clearly assigned accountability are needed to ensure that computer systems are better protected. The need for such attention and accountability is supported by the Joint Security Commission, which considers the security of information systems and networks to be the major security challenge of this decade and possibly the next century.
The Commission itself believes there is insufficient awareness of the grave risks Defense faces in this arena. We recognize that no organization can anticipate all potential vulnerabilities, and even if one could, it may not be cost-effective to implement every measure available to ensure protection. However, Defense can take some basic steps to vastly improve its position against attackers. These steps include strengthening (1) computer security policies and procedures, (2) security training and staffing, and (3) detection and reaction programs. Since the level of protection varies from installation-to-installation, the need for corrective measures should be assessed on a case-by-case basis by comparing the value and sensitivity of information with the cost of protecting it and by considering the entire infrastructure. To better focus management attention on the Department’s increasing computer security threat and to ensure that a higher priority and sufficient resources are devoted to addressing this problem, we recommend that at a minimum the Secretary of Defense strengthen the Department’s information systems security program by developing departmentwide policies for preventing, detecting, and responding to attacks on Defense information systems, including mandating that (1) all security incidents be reported within the Department, (2) risk assessments be performed routinely to determine vulnerability to attacks and intrusions, (3) vulnerabilities and deficiencies be expeditiously corrected as they are identified, and (4) damage from intrusions be expeditiously assessed to ensure the integrity of data and systems compromised; requiring the military services and Defense agencies to use training and other mechanisms to increase awareness and accountability among installation commanders and all personnel as to the security risks of computer systems connected to the Internet and their responsibility for securing their systems; requiring information system security officers at all installations and setting specific standards for ensuring that these as well as system and network managers are given sufficient time and training to perform their duties appropriately; continually developing and cost-effectively using departmentwide network monitoring and protection technologies; and evaluating the incident response capabilities within DISA, the military services, and the Defense agencies to ensure that they are sufficient to handle the projected threat. The Secretary should also assign clear responsibility and accountability within the Office of the Secretary of Defense, the military services, and Defense agencies for ensuring the successful implementation of this computer security program. On May 15, 1996, we discussed a draft of this report with officials from the Office of the Secretary of Defense, DISA, Army, Navy, and Air Force who are responsible for information systems security. In general, these officials agreed with the report’s findings, conclusions, and recommendations. They stated that the report fairly represents the increasing threat of Internet attacks on the Department’s computers and networks and acknowledges the actions Defense is taking to address that threat. In concurring with our conclusions and recommendations, Defense officials acknowledged that with increased emphasis and additional resources, more could be done to better protect their systems from attack and to effectively detect and aggressively respond to attacks. 
They stressed that accountability throughout the Department for implementing policy was as important as the policy itself and that cost-effective technology solutions should be encouraged, particularly in light of the increasing sophistication of the future threat. Defense officials believe that a large part of the Department's security problems result from poorly designed systems or the use of commercial off-the-shelf computer hardware and software products that have little or no inherent security. We agree that this is a serious problem. They also cited some of the more recent actions being taken to improve security, such as DISA's information systems security implementation plan and the Joint Chiefs of Staff instruction on defensive information warfare. These are positive steps that will help focus attention on the importance of information security. In this context, it is important that our recommendations be effectively implemented to ensure that sufficient management commitment, accountability, priority, and resources are devoted to addressing Defense's serious information security problems. We have incorporated the Department's comments and other points of clarification throughout the report where appropriate.
The use of federal procurement to promote environmental goals has gained increasing emphasis since the 1976 RCRA legislation. Under RCRA section 6002, each procuring agency purchasing more than $10,000 of an item (in a fiscal year) that EPA has designated as available with recycled content must have an affirmative procurement program in place. This program is to ensure that the agency purchases recycled-content products to the maximum extent practicable. This requirement applies both to purchases made directly by the agency and to purchases made indirectly by its contractors and grantees. To comply with RCRA and the executive order, an agency's affirmative procurement program must consist of four elements: a preference program that requires the agency to institute practices and procedures favoring the specification and procurement of recycled-content products; internal and external programs to actively promote the purchase program for recycled-content products; procedures for obtaining pre-award estimates and post-award certifications of recovered materials content in the products to be supplied under any contracts over $100,000 and, where appropriate, reasonably verifying those estimates and certifications; and procedures for monitoring and annually reviewing the effectiveness of the affirmative procurement program to ensure the use of the highest practicable percentage of recycled-content materials available. The 1998 executive order strengthened the RCRA requirements for an effective affirmative procurement program for recycled-content products and added two new product types—environmentally preferable products and biobased products. The 1998 executive order further clarified some previous requirements and defined more clearly the duties of the Federal Environmental Executive and the responsibilities of agency environmental executives in implementing certain initiatives and actions to further encourage the "greening" of the government through federal procurement. The order did not require agencies to purchase environmentally preferable and biobased products, but encouraged them to do so. A recent change to the Federal Acquisition Regulations (FAR) formalized the 1998 executive order by making it a requirement for all executive agencies and contracting officers to follow when buying products, including supplies that are furnished under a service contract. The changes to the FAR also emphasized executive branch policies to purchase products containing recycled content material and other environmentally preferable products and services when feasible. The Office of the Federal Environmental Executive has overarching responsibilities to advocate, coordinate, and assist federal agencies in acquiring recycled-content, environmentally preferable, and biobased products and services. In 1999, the White House Task Force on Greening the Government, chaired by the Federal Environmental Executive, issued a strategic plan that calls upon all executive agencies to demonstrate significant increases in the procurement of recycled-content products from each preceding year through 2005. Each agency's environmental executive is responsible for overseeing the implementation of the agency's affirmative procurement program and for setting goals to increase purchases of recycled-content products in accordance with the White House Task Force's strategic plan.
Although all procuring agencies are required to have an affirmative procurement program and to track their purchases of recycled-content products, the Office of Federal Procurement Policy and the Office of the Federal Environmental Executive limit their annual reporting requirement to the top six procuring agencies. These six agencies are the departments of Defense, Energy, Transportation, and Veterans Affairs; GSA; and NASA. Two of these agencies, Defense and GSA, have a dual role—first, as procuring agencies subject to RCRA and the executive order and second, as major suppliers of goods and services to other federal agencies. As such, both use recycled-content products and supply other federal agencies with recycled-content products. The Office of Federal Procurement Policy and the Office of the Federal Environmental Executive issue a joint report to the Congress every 2 years on these agencies’ progress in purchasing the EPA-designated products. Federal agencies must also comply with acquisition reform legislation enacted during the 1990s. In response to concerns about the government’s ability to take advantage of the opportunities offered by the commercial marketplace, these reforms streamlined the way that the federal government buys its goods and services. For example, the reforms introduced governmentwide commercial purchase cards, similar to corporate credit cards, to acquire and pay for goods and services of $2,500 or less. The cards, known as federal purchase cards, are issued to a broad range of personnel. EPA accelerated its efforts in the 1990s to identify and issue guidance on procuring products with recycled content, but the extent to which the major federal procuring agencies, with the exception of Energy, have purchased these products cannot be determined because they do not have data systems that clearly identify purchases of recycled-content products. In addition, these agencies do not receive complete data from their headquarters and field offices or their contractors and grantees. As a result, they generally provide estimates, not actual purchase data, to the Office of Federal Procurement Policy and the Office of the Federal Environmental Executive. According to three of the major procuring agencies—including Defense, which accounts for over 65 percent of federal government procurements—even these estimates are not reliable. In addition, agencies’ efforts to promote awareness of purchase requirements for recycled-content products have had limited success, and their efforts to monitor progress have principally relied on the estimated data they report. A White House task force has made a number of recommendations to improve data collection, particularly from federal purchase card users and contractors. In the early 1980s, the Congress directed EPA to issue guidance for five products with recycled content, three of which the Congress designated: cement and concrete containing fly ash, recycled paper and paper products, and retread tires. Between 1983 and 1989, EPA issued guidance for these three products and also issued guidance for re-refined lubricating oil and building insulation. EPA did not issue guidance for any more products until 1995. Between 1995 and 2000, EPA increased the total number of designated products to 54 and issued comprehensive procurement guidance to use in purchasing these products. Figure 1 shows the increases in the number of designated products with recycled content. EPA has identified eight categories of recycled-content products. 
These are listed below, with examples of products in each category.

Construction products: building insulation containing recycled paper or fiberglass; carpeting containing recycled rubber or synthetic fibers; floor tiles made with recycled rubber or plastic.

Landscaping products: landscaping timbers and posts containing a mix of plastic and sawdust or made of fiberglass; hydraulic mulch containing paper; compost made from yard trimmings and/or food waste.

Nonpaper office products: trash bags containing recycled plastic; waste receptacles containing recycled plastic or steel; and binders containing recycled plastic or pressboard.

Paper and paper products: copier paper, newsprint, file folders, and paper towels and napkins, all of which have recycled fiber content.

Park and recreation products: picnic tables and park benches containing recycled plastics or aluminum; playground equipment containing recycled plastic or steel; fencing using recycled plastic.

Transportation products: parking stops containing recycled plastic or rubber; traffic barricades containing steel or recycled fiberglass; traffic cones containing recycled PVC or rubber.

Vehicular products: engine coolants (antifreeze), re-refined motor oil, and retread tires, all of which contain recycled content materials.

Miscellaneous products: awards and plaques containing glass, wood, or paper; drums containing steel or plastic; signs and sign posts containing plastic, steel, or aluminum.

EPA officials have also identified 10 additional recycled-content products for designation and expect to issue purchasing guidelines for them in 2001. They also plan to designate more products as they become available. According to EPA officials, the list of possible products continues to evolve because new products are always being developed and existing products may be changed, adding more recycled material. RCRA outlines criteria for determining which items to designate as recycled-content products. EPA's guidance expands on these criteria, which include the following: the availability of the item, including whether it is obtainable from an adequate number of sources to ensure competition; the effect of the procurement on the amount of solid waste diverted from landfills; the capability of the item to meet the agency's needs and the item's cost in relation to products that do not have recycled content; and the determination of whether the item will have a negative impact on (1) other recycled-content products by displacing one recovered material for another recovered material, resulting in no net reduction in materials requiring disposal; (2) the supply of recovered materials due to the diversion of recovered materials from one product to another, resulting in shortages of materials for one or both products; and (3) the availability of supplies to manufacture the product, resulting in insufficient supplies over time. In reviewing EPA's files for all products designated since 1995, we found that EPA had considered these criteria. Furthermore, EPA had not failed to list any major product containing recycled materials that was likely to be purchased by federal agencies, according to the four major procuring agencies and the National Recycling Coalition, an organization that represents recycling groups, large and small businesses, and federal, state, and local governments.
However, the four major procuring agencies said that the list contains more items than they can feasibly track the purchases of and that targeting their tracking efforts on the major items they purchase would be a better use of their resources. For example, NASA officials told us that they annually purchase only about 100 traffic cones—one of the designated items—but have to bear the burden of tracking these purchases to prove that they do not exceed the $10,000 threshold, which would trigger the annual reporting requirement. The four agencies also told us that it is costly and burdensome to update their tracking programs each time EPA adds new items and to document whether or not their purchases of these products meet the $10,000 threshold. Defense and GSA officials added that instead of continuing to add products to the designated list, EPA should work with the agencies to assist them in buying products already identified. Although EPA has a Web site that provides some information regarding a product’s availability, agency officials indicated that the information is not easily accessible or kept up to date. For example, Defense and GSA officials said that (1) EPA should provide more information on the availability of the individual products, since listed products may not be available in all regions of the country, and (2) EPA should identify the manufacturers and costs of the recycled-content products and take the lead in promoting them, thus making it easier for federal agencies to buy these products. Officials at the Office of the Federal Environmental Executive agreed with Defense’s and GSA’s assessment regarding purchasing difficulties. Three of the four major procuring agencies do not provide credible and complete information on their purchases of recycled-content products because (1) they do not have automated tracking systems for these products, and (2) the information they do collect and report does not include a significant portion of their procurements, such as those made by contractors. As a result, they estimate the extent of their purchases in reporting to the Office of Federal Procurement Policy and the Office of the Federal Environmental Executive. However, agency officials acknowledge that these estimates are not reliable. Defense, GSA, and NASA reported that they cannot use their automated procurement systems to track recycled-content products purchased by officials in their headquarters and field offices and by their contractors and grantees. As a result, they collect information manually, a process they find costly and time-consuming. This is particularly the case for agencies with large field structures. For example, Defense said that to satisfy the Office of Federal Procurement Policy and the Office of the Federal Environmental Executive reporting requirements, it must collect information manually from the thousands of installations managed by the Army, Navy, Air Force, and Defense Logistics Agency. Defense requests the necessary information from these units, but does little if they do not provide the data. Similarly, GSA reported that it manually collects purchase data on recycled-content products from its headquarters and field offices. However, Defense and GSA reported that they can electronically track actual purchases of recycled-content products made through their automated central supply systems, which also records purchases made by other agencies, if the products are included in Defense and GSA stock inventories. 
The systems do not track items purchased from vendor lists. According to Defense and GSA officials, recent improvements to these central supply systems include electronic catalogues of environmentally friendly products linked to an automated shopping system, which will allow the agencies to better track and report on other agencies' purchases of recycled-content products. NASA and Energy offices also manually collect purchase data on recycled-content products but enter the information into automated systems for tracking and reporting. However, they have not integrated these automated systems with their agencywide procurement systems. Despite this lack of integration, Energy officials indicated that, with their current tracking system, they are able to determine the extent to which most of their offices and contractors are purchasing recycled-content products. NASA officials reported that their system provides more limited data on some contractors. Defense and GSA officials acknowledged that their data collection would improve if they had on-line electronic systems for recycled-content products linked to agencywide procurement systems. However, the additional cost of developing such an integrated system would not be worthwhile, according to these officials. For example, Defense believes that the cost of developing and maintaining a reliable system to produce the data needed to comply with current reporting requirements would far exceed the value of the information produced. The data the agencies collect and report to the offices of Federal Procurement Policy and of the Federal Environmental Executive generally exclude several sources of information. One source is federal purchase card acquisitions, which are increasing and now account for about 5 percent of all federal purchases. The four procuring agencies reported that they cannot track federal card purchases of recycled-content products made in the private sector, such as desk accessories, tires, and lubricating oil, unless they establish an internal system that relies on the card users to keep records. Defense and GSA reported that they do not have such systems. Defense officials noted that requiring purchase card users to keep logs is in conflict with acquisition reforms intended to simplify the procurement process for purchases below $2,500 (micropurchases). Energy and NASA officials stated they do track and report purchases of recycled-content products through federal purchase cards and have established processes for staff to keep records for entry into their database for the recycled content program. The agencies' data are also incomplete because they may exclude information on purchases made by some of their component organizations. For example, Defense reported that the military services provide mostly estimated data, which they do not verify to determine accuracy and completeness. Furthermore, these estimates do not include all of the services. For example, the Army provided no information for Defense's report to the Office of Federal Procurement Policy and the Office of the Federal Environmental Executive for fiscal years 1998 and 1999, and the Air Force and Navy provided limited purchase data. The lack of reliable data from Defense is of particular concern in evaluating the effectiveness of the RCRA program because Defense's procurements account for over 65 percent of total federal procurements reported for fiscal year 1999.
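As a purely illustrative sketch of the record keeping these tracking and reporting requirements imply, the following Python fragment groups purchases by EPA-designated item, sums the dollar amounts, and flags any item whose yearly total exceeds the $10,000 threshold discussed earlier. The record layout, field names, and dollar figures are hypothetical and do not represent any agency's actual system.

    # Purely illustrative sketch of the record keeping the reporting requirement implies:
    # group purchases by EPA-designated item, sum the dollar amounts, and flag any item
    # whose yearly total exceeds the $10,000 threshold. Records, field names, and dollar
    # figures are hypothetical and do not represent any agency's actual system.
    from collections import defaultdict

    THRESHOLD = 10_000  # annual purchases above this amount trigger the requirements

    def items_over_threshold(purchases):
        """purchases: iterable of (designated_item, dollar_amount, has_recycled_content)."""
        totals = defaultdict(float)
        recycled = defaultdict(float)
        for item, amount, has_recycled_content in purchases:
            totals[item] += amount
            if has_recycled_content:
                recycled[item] += amount
        return {item: {"total": total, "recycled_content": recycled[item]}
                for item, total in totals.items() if total > THRESHOLD}

    sample = [
        ("copier paper", 12_500.00, True),
        ("traffic cones", 1_800.00, False),
        ("re-refined lubricating oil", 9_400.00, True),
        ("copier paper", 3_100.00, True),
    ]
    print(items_over_threshold(sample))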
Defense reported that it purchased recycled-content products worth about $157 million out of total fiscal year 1999 procurements of about $130 billion. (The total fiscal year procurement figure of $130 billion includes $20 billion for research and development and $50 billion for major weapons systems, which are unlikely to involve the procurement of recycled-content products. In addition, $53 billion for service contracts may or may not involve the purchase of recycled-content products. Defense officials indicated that some of these figures may overlap.) Finally, the agencies lack complete data on purchases made by contractors and grantees. This data gap is potentially significant because contracts over $25,000 account for almost 90 percent of all federal procurements. The agencies reported the following: Defense has no information on contractors' purchases. GSA has limited information on some contractors' purchases. Energy, which spends about 94 percent of its appropriations on contractors, collects purchase information from about 86 percent of its contractors. NASA collects purchase data from on-site contractors but receives little or no data from off-site contractors. RCRA requires federal contractors to estimate the percentage of recycled-content material used to fulfill their contracts (not the specific products) and to certify that they have met the minimum requirements for recycled content. The Federal Acquisition Streamlining Act established that the estimation requirement under RCRA applies only to contracts exceeding $100,000. However, for individual purchases by federal agencies that exceed $10,000, the Office of Federal Procurement Policy requires the agencies to track and report the total dollar amount by product and, in some cases, to report the volume of recycled-content products. The agencies reported that it is difficult, if not impossible, for them to separate information on products with recycled content from information on other products without such content (virgin materials). For example, according to GSA, when it lets a contract for remodeling offices, the contract does not necessarily distinguish between the cost of carpeting containing recycled content and of virgin-content carpet. It may provide information on only the total cost of carpeting. The contractor might have to purchase virgin-content carpeting for certain areas (e.g., high-traffic hallways) and might be able to use carpeting with recycled content in other areas (e.g., staff offices). In such a case, GSA would provide only an estimate to the Office of Federal Procurement Policy of the value of carpet with recycled material. GSA officials also pointed out that performance-based contracts do not include detailed product estimates. For example, a contract to construct a building may not indicate either the amount or cost of the recycled-content concrete used. Finally, the agencies lack data on grantee purchases. State and local agencies receiving federal grants may be "procuring agencies" under RCRA. If they meet the $10,000 threshold—that is, if they spend more than $10,000 on a designated item—they are subject to the affirmative procurement program requirement and to buying the recycled-content products on EPA's list. However, grantees are not required to report their purchases of EPA-designated products with recycled content. Also, executive orders do not apply to grantees.
Because of overall federal efforts to reduce the paperwork (reporting) burden on grantees, federal agencies stated that they cannot request information from grantees without OMB approval. Consequently, six of the agencies we reviewed, including the major grant-making agencies—DOT and HUD—reported that they do not obtain any information on grantees' purchases. A White House task force workgroup on streamlining and improving reporting and tracking, cochaired by the Federal Environmental Executive and OMB's Office of Federal Procurement Policy, has made a number of recommendations to improve data collection from federal purchase card users and contractors. It recommended starting a pilot project with banks and willing vendors to identify and report recycled-content product purchases made with federal purchase cards. We believe that this effort would provide useful additional information regarding purchase card users' compliance with the RCRA requirements. With respect to contractors, the task force workgroup and various agencies recommended revisions to the Federal Procurement Data System—a system that collects information on procurements on a governmentwide basis for contracts over $25,000. The revised data system would require the procuring official to indicate (1) whether the contract includes recycled-content products, along with the reasons for granting any waivers, and (2) whether it contains appropriate language from the Federal Acquisition Regulations to ensure that the contractor is notified of the requirements for purchasing recycled-content products. These proposed revisions are currently being circulated to the agencies for comment. If these changes are implemented, the agencies will no longer have to manually collect and report on their individual purchases of recycled-content products. Although the revised system will not provide information on the products themselves or the dollar amounts associated with them, it would allow agencies for the first time to identify contracts subject to purchases of recycled-content products and to measure their annual progress in increasing the percentage of contracts containing affirmative procurement clauses. The four major procuring agencies have ongoing efforts, and are developing strategies, to promote awareness of the requirement to purchase recycled-content products, but several studies indicate that the success of these efforts to date has been limited. In addition, although RCRA requires federal agencies to review and monitor the effectiveness of their RCRA program efforts, only Energy has taken any steps beyond the data collection efforts discussed earlier. Studies of the agencies' affirmative procurement programs report that the agencies are not effectively educating procurement officials about the requirement to buy EPA-designated recycled-content products. This lack of awareness is a major or contributing factor in the inaccurate data and in noncompliance with the requirement to implement affirmative procurement programs, according to our survey of the agencies, as well as the reports by the GSA and NASA inspectors general, the Air Force's Internal Audit Agency, and a fiscal year 2000 EPA survey of 72 federal facilities. Efforts to promote the purchase of recycled-content products by government agencies, their contractors, and grantees can occur government- or agencywide.
Governmentwide efforts include those conducted by the Office of the Federal Environmental Executive, which actively promotes, coordinates, and assists federal agencies' efforts to purchase EPA-designated items. For example, the Office of the Federal Environmental Executive has helped increase agency purchases of EPA-designated products by encouraging GSA, the Defense Logistics Agency, and the Government Printing Office to automatically substitute recycled-content products in filling orders for copier paper (begun in 1992) and lubricating oil (begun in 1999). This effort has increased sales of recycled-content copier paper from 39 percent to 98 percent at GSA and the Government Printing Office, according to the Office of the Federal Environmental Executive. GSA now carries only recycled-content copier paper. The Defense Logistics Agency reported that its sales of re-refined lubricating oil increased over 50 percent from fiscal year 1999 to fiscal year 2000. Given the success of the automatic substitution program for these products, the Office of the Federal Environmental Executive is strongly encouraging agencies to identify other recycled-content products for which automatic substitution policies might be appropriate. However, this program does not apply to purchases made outside of the federal supply centers. GSA and Defense have also placed symbols in their printed and electronic catalogues and in their electronic shopping systems to identify recycled-content products. Using the electronic catalogue, agencies can then go directly into the electronic shopping system to order these products. They will also be able to track and report these purchases, including those made with purchase cards. Defense and GSA are also working jointly to modify the Federal Logistics Information System to add environmental attribute codes to the products listed in that system to more easily identify environmentally friendly products. The modification's usefulness may be limited, however, because this system does not automatically link the user to a system for purchasing the products identified, according to agency officials. In addition to governmentwide promotion efforts, agencies reported using a variety of techniques to make their decentralized organizations aware of the RCRA requirements. The agencies provide classroom and computer-assisted training on purchasing recycled-content products and on incorporating the RCRA purchasing requirements into program manuals. The four major procuring agencies also reported that they promoted the procurement of EPA-designated products through such mechanisms as their Web sites, telephone and videoconferences, videotapes, electronic newsletters, workshops, and conferences. As indicated by the inspectors general's reports, agency studies, and our own analysis, even though the agencies have used many techniques to inform their staff of these requirements, staff awareness, particularly in field offices, remains a problem. Agencies generally must rely on methods less direct than providing classroom training, or having workshops or conferences, for making their contractors aware of the requirement to purchase recycled-content products. Accordingly, Energy, GSA, and Defense's Air Force, Navy, and Army Corps of Engineers have initiated alternative efforts to inform contractors of these requirements. Energy makes its major facility management contractors part of its affirmative procurement program team to help implement the program.
Moreover, in May 2000, Energy established "green acquisition advocates" at each of its major contracting facilities. Among their duties, these advocates are to promote the RCRA program to the contractors. GSA and the three Defense components have developed "green" construction and/or lease programs that promote the use of EPA-designated products. In addition, all the agencies we reviewed have incorporated the Federal Acquisition Regulation clauses pertaining to the RCRA program into their contracts. GSA also reported that it plans to modify its acquisition manual to include a review of the list of EPA-designated products with contractors in post-award conferences. In addition, GSA's regional offices have begun evaluating the effectiveness of their affirmative procurement programs. The agencies we examined have generally not developed agency-specific mechanisms for advising grantees of their responsibility to purchase recycled-content products. Instead, they rely on OMB Circular A-102. This circular refers to RCRA and contains a general statement on the requirement for grantees to give preference in their purchases to the EPA-designated products. It does not inform them of the specific requirements they need to follow, such as developing affirmative procurement programs. Three of the four procuring agencies and the two major grant-awarding agencies that we reviewed—the departments of Housing and Urban Development and Transportation—rely on either this circular or a similar general statement to inform grantees of the RCRA requirements. Only Energy, in its financial assistance regulations, requires its grant-awarding program offices to inform grantees of the RCRA requirement. According to officials at the Office of the Federal Environmental Executive, grantees could obtain specific information about RCRA requirements if OMB included that information in the "common rule" under Circular A-102. The common rule, directed by a March 1987 presidential memo and adopted in individual agency regulations, provides supplemental information, generally in the form of more detailed instructions on processes grantees should follow in implementing the circular's requirements. However, it does not mention RCRA's requirements for an affirmative procurement program. Officials at the departments of Transportation and Housing and Urban Development did not know whether their grantees had an affirmative procurement program or whether grantees were aware of the requirements to purchase recycled-content products. Officials said that unless they were specifically directed by OMB, seeking this information could be interpreted as an additional burden on grantees and an unfunded mandate. In April 2000, the Federal Environmental Executive recommended to the President that OMB revise the Circular A-102 common rule to "require recipients of federal assistance monies to comply with the RCRA buy-recycled requirements." She added that federal agencies administering grant programs should educate state and local government recipients about the requirements. RCRA requires federal agencies to review and monitor the effectiveness of their recycled-content programs; however, it does not define what this review and monitoring should consist of.
With the exception of Energy, which has established purchasing goals that its contractors must meet, the major procuring agencies limit their required annual review and monitoring functions to compiling data on their purchases of recycled-content products in order to report to the Office of the Federal Environmental Executive and the Office of Federal Procurement Policy. But as the agencies admit, these data are unreliable and incomplete. Consequently, these data do not allow the agencies to assess their progress in purchasing recycled-content products or review the effectiveness of their recycled-content purchase programs. However, Defense procurement officials believe that legislation like RCRA, because of its review and monitoring requirements, is in conflict with the streamlining reforms that are intended to ease the administrative burden associated with government purchases. An indication of the agencies' lack of monitoring is the scarcity of information on exemptions or waivers. Agencies may waive the RCRA requirement to purchase recycled-content products if the recycled product is too costly, does not meet appropriate performance standards, or is not available. The number of waivers approved, when compared with purchases of products both with and without recycled content, would tell the agencies how far they are from meeting the goal of purchasing only recycled-content products designated by EPA. In addition, a review of these waivers would allow the agencies to identify the reasons for not purchasing these products and identify potential problems. Although the four major procuring agencies said that they do require such waivers to be justified, only Energy has analyzed the waivers to determine reasons for not purchasing recycled-content products and how close it is to meeting the goal of purchasing only recycled-content products, where appropriate. Energy has concluded that it is making steady progress in its purchases of recycled-content products. However, as EPA adds new items to the list, Energy officials told us that progress tends to level off until staff become familiar with the new products. The procuring agencies reported little progress in purchasing environmentally preferable products, in part because both EPA and USDA have taken longer to issue guidance than provided for by the executive orders. Moreover, while EPA has issued final guidance to help agencies identify environmentally preferable products, it is not required to develop a list of these products. According to the agencies, implementing the EPA guidance for determining what constitutes an environmentally preferable product is difficult and time-consuming. In addition, USDA has not published a list of biobased products for procuring agencies' consideration, as required by the executive order. USDA plans to have a list available by fiscal year 2002, but this effort is only one of a number of projects competing for the same resources. Although the purchase of these products is not required by statute, the agencies we reviewed plan to modify their procurement programs to encourage the purchase of such products after the list is published. EPA published final guidance for federal procuring agencies to use in purchasing environmentally preferable products in 1999, 5 years later than directed by the 1993 executive order. EPA's Office of Pollution Prevention and Toxics, which is charged with issuing the guidance, stated that it delayed issuance until it had results from some agencies' pilot projects.
These projects tested concepts and principles for their applicability to actual purchasing decisions. EPA noted that because an environmentally preferable product can have multiple attributes—such as having recycled content, conserving water and/or energy, and/or emitting a low level of volatile organic compounds—defining environmental preferability depends on the product’s use. The guidance is not intended to be a step-by-step plan or “how to” guide for agencies’ use in deciding to purchase specific products; nor is it intended to provide a list of products. Instead, it is intended to help executive agencies systematically integrate the purchasing of these products into their buying decisions. Under the process outlined, the agencies are to use the guidance in choosing which products are environmentally preferable and meet their needs. This process involves assessing a product’s life-cycle, which may include a comprehensive examination of a product’s environmental and economic aspects and potential impacts throughout its lifetime, including raw material transportation, manufacturing, use, and disposal. EPA has also focused on identifying approaches for purchasing environmentally preferable products by encouraging agencies to participate in pilot projects. In selecting pilot projects, agencies are encouraged to use a list that EPA has developed of the top 20 product and service categories, which represent a large volume of federal procurement or have significant environmental impacts. For example, a U.S. Army installation conducted a pilot on paint products, which are on the list and are known to contain significant quantities of volatile organic compounds. Accordingly, the intent of the pilot was to identify paints that had a lesser adverse environmental impact on air quality, which is a particular concern for this installation. In addition, EPA has enlisted the assistance of two standard-setting organizations—Underwriters Laboratory and NSF International—to develop environmental standards that may be used in federal purchasing. Underwriters Laboratory is helping to identify consensus-based industry standards for a more environmentally friendly stretch wrap for packing and shipping, while NSF International is working with EPA to develop standards for institutional cleaners and carpeting. Federal agencies’ development and implementation of programs to encourage purchases of environmentally preferable products has not resulted in significant changes to agencies’ procurement practices. The procuring agencies in our review participated in pilot projects and, with the exception of Defense, have changed, or are in the process of changing, their affirmative procurement programs to include the executive order requirements regarding environmentally preferable products. However, agency officials said it is difficult to incorporate EPA’s guidance into procurement activities. This difficulty occurs in part because implementing the guidance is a time-consuming process that procuring officials are unlikely to undertake because they lack knowledge in this area. In addition, EPA has not provided a clear definition or list of environmentally preferable products. For example, in purchasing environmentally friendly paint, a number of products are available—one may be a paint with recycled content, resulting in a reduced environmental impact on the waste stream, while another may have lower volatile organic compounds and thus lessen the adverse impact on air quality. 
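To illustrate, in a deliberately simplified way, why such decisions depend on the product's intended use, the following sketch compares two hypothetical paints across several environmental attributes. The attributes, scores, and weights are invented for this illustration; they are not drawn from EPA's guidance or from any agency's actual evaluation.

```python
# Hypothetical illustration only: a simple weighted comparison of two paints
# across environmental attributes. The scores (0-10, higher is better for the
# environment) and the weights are invented; EPA's guidance does not prescribe
# this or any other specific scoring method.

def weighted_score(scores, weights):
    """Return the weighted average of a product's attribute scores."""
    total_weight = sum(weights.values())
    return sum(scores[attr] * weight for attr, weight in weights.items()) / total_weight

paints = {
    "recycled-content paint": {"recycled_content": 9, "voc_emissions": 4, "durability": 6},
    "low-VOC paint": {"recycled_content": 2, "voc_emissions": 9, "durability": 7},
}

# An installation concerned mainly with air quality (like the Army pilot
# described above) might weight VOC emissions most heavily.
air_quality_weights = {"recycled_content": 1, "voc_emissions": 3, "durability": 1}

# Prints about 5.4 for the recycled-content paint and 7.2 for the low-VOC paint,
# so the low-VOC paint comes out ahead under this particular weighting.
for name, scores in paints.items():
    print(f"{name}: {weighted_score(scores, air_quality_weights):.1f}")
```

Under a different weighting, for example one that emphasized impacts on the waste stream rather than on air quality, the recycled-content paint could come out ahead, which is the use-dependence that EPA's guidance describes.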
Agencies are not statutorily required to purchase environmentally preferable products and the difficulties associated with this process are a disincentive. EPA officials acknowledge that there is a scarcity of information about the environmental performance of products and services, particularly regarding the various life-cycle stages of manufacturing, distribution, use and disposal related to a product. They also acknowledge that progress is somewhat slow in getting federal procurement agencies to adopt the “environmentally preferable” process as part of their procurement practices. However, over time and with the dissemination of more information and tools for agencies to use, considering environmentally preferable products in purchasing decisions will become more routine, EPA officials believe. EPA has developed a Web site to provide some of the support and tools that procuring agencies need. The Web site provides information on agencies’ pilot projects; environmental standards; product information, such as the results of assessments of life-cycle and case studies; and lessons learned by agencies that have purchased environmentally preferable products and services. EPA has also developed an interactive training module and a guide with examples of specific contract language that purchasing agencies have used. USDA has not published a list of biobased products for agencies to consider in their purchasing practices, as directed by the 1998 executive order. It has, however, published a notice for comment (in the August 1999 Federal Register) on suggested criteria for listing biobased products. Agency officials explained that the delay in publishing the list is due, in part, to the lack of funding for this effort and that the work to develop the list must compete with other projects for the same resources. A list should be available in fiscal year 2002, according to USDA officials. All four procuring agencies said that they will include the published list of biobased products in their affirmative procurement programs. However, although a list of biobased products will make it easier to identify these products for purchasing, officials at Defense, GSA, and NASA indicated that the list will not necessarily ensure that staff will purchase them. Officials noted that these purchases are not mandatory under the executive order, which only calls for agencies to modify their affirmative procurement programs to give consideration to the biobased products. The officials added that the lack of knowledge and education about biobased products is a major barrier to ensuring that staff will consider and purchase these products. As a result, they have not made any significant changes in their procurement practices. However, Energy, GSA, and NASA have taken steps to amend their regulations to include the updated Federal Acquisition Regulations published in June 2000, which, among other things, encourages the purchasing of biobased products. Also, both GSA and Energy have included information on these products in their training programs to make staff aware of biobased products and the upcoming list. USDA officials told us that they would like the biobased program to be statutorily based, like the recycled-content program. Agencies would then be mandated to purchase these products. 
In addition, they believe that the program will be much more effective if there is an assessment of a product's life-cycle and such products are required to meet performance standards set by independent standard-setting or testing organizations. The officials believe that the absence of life-cycle and performance information will be a major barrier to the agencies' purchasing biobased products unless they have the information to show the long-term benefits of the items. However, because of resource constraints, USDA is instead relying on manufacturers to self-certify their products. The officials added that biobased products are considered risky purchases by federal agencies because of the lack of information available on their performance and are generally purchased by agencies only after they hear about the product anecdotally—an inefficient way to bring products to the market. Twenty-five years after RCRA was to launch a revolution in federal purchases of recycled-content products, the success of this effort is largely uncertain. For many years, until the 1990s, little action was taken to promote such purchases on a governmentwide or agencywide basis. Even today, many procuring officials and other federal purchasers either do not know about or do not implement the RCRA requirements for establishing affirmative procurement programs, particularly promotion and review and monitoring. Although some progress in implementing the RCRA requirements has occurred, such as EPA's accelerating its designation of recycled-content products, procuring agencies report that EPA's designation of these products, by itself, is not sufficient to ensure that they are purchased. The agencies told us that staff often are either not aware of these products or not able to locate them in their area. Furthermore, the agencies have made little effort to ensure that grantees are aware of their obligations to purchase recycled-content products. Even if recycled-content products were more widely available and promoted more effectively, most agencies—with the exception of Energy—cannot determine the success of their efforts to increase the purchases of such products. They have not developed systems to track their purchases of such products, relying instead on inadequate estimates. Nor have they put programs in place to review and monitor progress. Moreover, most agencies lack data about purchases of recycled-content products by contractors and grantees. These agencies do not have any reliable means of even identifying contracts that call for the use of recycled-content products. In this regard, we support the White House task force recommendation to revise the Federal Procurement Data System to identify such contracts. While this revision will not provide agencies with information on specific purchases, it will enable them to periodically review and monitor their contractors for compliance with the RCRA requirements. Demonstrating that an agency is meeting the RCRA requirements can be administratively difficult. The major procuring agencies noted that it is costly and burdensome to update their purchase tracking programs each time EPA designates recycled-content products; each relies on costly and time-consuming manual data collection. Defense, the largest procuring agency, believes efforts to monitor and report on recycled-content product purchases conflict with the streamlining goals of procurement reform. We recognize that review and monitoring of recycled-content products entails administrative costs.
Nonetheless, RCRA requires such information. To help agencies purchase recycled-content products, we recommend that the Federal Environmental Executive and the Administrator of EPA work with officials at the major procuring agencies to develop a process to provide the procuring agencies with current information on the availability of the designated recycled-content products. In addition, these officials should determine how these products can be more effectively promoted. To help agencies implement the RCRA requirement to annually monitor and review the effectiveness of affirmative procurement programs, we recommend that the Director, OMB, instruct the Director of the Office of Federal Procurement Policy to provide procuring agencies with more specific guidance on fulfilling the RCRA review and monitoring requirements and, in conjunction with the Federal Environmental Executive, use the results of the agencies' efforts in their reports to the Congress and the President. If the White House task force recommendation to revise the Federal Procurement Data System (or its replacement) is implemented, then the Director, OMB, should instruct the Director of the Office of Federal Procurement Policy to provide guidance to the federal procuring agencies on using the information added to the system to periodically review contractors' compliance with the RCRA requirements for purchasing recycled-content products. To ensure that grantees purchase recycled-content products as required by RCRA, the Director of the Office of Federal Financial Management, OMB, in coordination with federal agencies, should amend the common rule so that it incorporates the RCRA requirements, as recommended in the Federal Environmental Executive's Report to the President entitled Greening the Government. We provided a draft of this report to the following agencies for their review and comment: EPA, OMB, the Office of the Federal Environmental Executive, GSA, NASA, and the departments of Agriculture, Defense, Energy, Housing and Urban Development, and Transportation. EPA and the departments of Agriculture, Housing and Urban Development, and Transportation declined to formally comment on the report. However, some of these agencies provided technical suggestions that we incorporated into the report as appropriate. The remaining agencies generally agreed with the facts presented in the report. Specifically, with the exception of Energy, they agreed that the current data systems do not identify the extent of their purchases of recycled-content products. OMB, to which three of the four recommendations are directed, expressed general agreement with them. However, Energy pointed out that in several instances in the report, we did not make clear that it does have a data system that identifies actual purchases of recycled-content products made by the agency or its contractors. We have clarified the report where appropriate. The comments of OMB, the Office of the Federal Environmental Executive, GSA, NASA, and the departments of Energy and Defense, along with our responses, are in appendixes II-VII. We conducted our review between June 2000 and April 2001 in accordance with generally accepted government auditing standards. A detailed discussion of our objectives, scope, and methodology is presented in appendix I. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it for 30 days.
We will then send copies of this report to appropriate congressional committees and other interested Members of Congress; the Administrator of EPA; the Director of OMB; the Office of the Federal Environmental Executive; the Secretaries of Agriculture, Defense, Energy, HUD, and Transportation; and the Administrators of NASA and GSA. We will also make copies available to others upon request. If you or your staff have any further questions about this report, please call me at (202) 512-6878. Key contributors to this report were James Donaghy, Patricia Gleason, Maureen Driscoll, William Roach Jr., Paul Schearf, and Carol Herrnstadt Shulman. In January 2000, Senators Max Baucus, Tom Harkin, James Jeffords, and Carl Levin sent us a series of questions related to federal government agencies' purchases of recycled-content products, environmentally preferable products, and biobased products. After discussing these issues with their offices, we agreed to address the status of, and barriers to, federal agencies' efforts to (1) implement the Resource Conservation and Recovery Act's requirements for procuring products with recycled content, and (2) procure environmentally preferable and biobased products. In conducting this review, we focused on four agencies that account for about 85 percent of all federal procurements—the departments of Defense and Energy, the General Services Administration, and the National Aeronautics and Space Administration. In addition to interviewing appropriate officials at these agencies, we conducted a written survey in which we asked them a series of questions related to their purchases of products with recycled content, environmentally preferable products, and/or biobased products. We also surveyed two major grant-awarding agencies—the departments of Transportation and of Housing and Urban Development—to determine the extent of such purchases made by their grantees. In addition, we contacted the Environmental Protection Agency (EPA), the Office of Federal Procurement Policy within the Office of Management and Budget, and the White House Office of the Federal Environmental Executive to determine their effectiveness in managing and overseeing the agencies' implementation of the programs for procuring products with recycled content and environmentally preferable and biobased products. We also reviewed EPA's progress in designating products with recycled content and in issuing guidance on environmentally preferable products to other agencies, as well as the Department of Agriculture's progress in identifying and publishing information on biobased products. We obtained and analyzed information on the procurement of products with recycled content, as designated by EPA, and barriers to full implementation of the affirmative procurement programs of the four largest federal procuring agencies. We also contacted the Office of the Inspector General for each of the federal agencies in our review to identify whether the offices had conducted any formal reviews or audits of the agencies' affirmative procurement programs. Finally, we contacted industry and environmental groups to obtain their perspectives and to identify whether any additional data existed related to the procurement of the EPA-listed products with recycled content. The following is GAO's comment on the Office of Management and Budget's letter dated May 17, 2001. 1. This section of the report has been revised to include the latest information on the changes to the Federal Procurement Data System (FPDS).
The following are GAO's comments on GSA's letter dated May 15, 2001. 1. On p. 11 of the report, we state that GSA can electronically track purchases of recycled-content products made through its automated central supply system. 2. This information has been included in the report on p. 17. 3. The report has been clarified to identify the difficulties that procurement officials face in assessing a product's life cycle. The following are GAO's comments on the Department of Energy's letter dated May 21, 2001. 1. The report has been revised as appropriate to clarify that Energy tracks and reports actual purchases of recycled-content products for the agency and its contractors. 2. The word "toxic" has been added as suggested.

The federal government buys about $200 billion worth of goods and services each year. Through its purchasing decisions, the federal government can signal its commitment to preventing pollution, reducing solid waste, increasing recycling, and stimulating markets for environmentally friendly products. The Resource Conservation and Recovery Act of 1976 (RCRA) directs the Environmental Protection Agency (EPA) to identify products made with recycled waste materials or solid waste by-products and to develop guidance for purchasing these products. The act also requires procuring agencies to establish programs for purchasing them. This report examines federal agencies' efforts to (1) implement RCRA requirements for procuring products with recycled content and (2) purchase environmentally preferable and biobased products. EPA accelerated its efforts in the 1990s to identify recycled-content products, but the status of agencies' efforts to implement the RCRA purchasing requirements for these products is uncertain. The four major procuring agencies report that, for many reasons, their procurement practices have not changed to increase their purchases of environmentally preferable and biobased products. One reason for the lack of change is that EPA and the U.S. Department of Agriculture have been slow to develop and implement the programs.
As you know, Mr. Chairman, for over two decades, we have reported on problems with DOD's personnel security clearance program as well as the financial costs and risks to national security resulting from these problems (see Related GAO Reports at the end of this statement). For example, at the turn of the century, we documented problems such as incomplete investigations, inconsistency in determining eligibility for clearances, and a backlog of overdue clearance reinvestigations that exceeded 500,000 cases. More recently, in 2004, we identified continuing and new impediments hampering DOD's clearance program and made recommendations for increasing the effectiveness and efficiency of the program. Also in 2004, we testified before this committee on clearance-related problems faced by industry personnel. A critical step in the federal government's efforts to protect national security is to determine whether an individual is eligible for a personnel security clearance. Specifically, an individual whose job requires access to classified information must undergo a background investigation and adjudication (determination of eligibility) in order to obtain a clearance. As with federal government workers, the demand for personnel security clearances for industry personnel has increased during recent years. Additional awareness of threats to our national security since September 11, 2001, and efforts to privatize federal jobs during the last decade are but two of the reasons for the greater number of industry personnel needing clearances today. As of September 30, 2003, industry personnel held about one-third of the approximately 2 million DOD-issued clearances. DOD's Office of the Under Secretary of Defense for Intelligence has overall responsibility for DOD clearances, and its responsibilities also extend beyond DOD. Specifically, that office's responsibilities include obtaining background investigations and adjudicating clearance eligibility for industry personnel in more than 20 other federal agencies, as well as the clearances of staff in the federal government's legislative branch. Problems in the clearance program can negatively affect national security. For example, delays in reviewing security clearances for personnel who are already doing classified work can lead to a heightened risk of disclosure of classified information. In contrast, delays in providing initial security clearances for previously noncleared personnel can result in other negative consequences, such as additional costs and delays in completing national security-related contracts, lost-opportunity costs, and problems retaining the best-qualified personnel. Longstanding delays in completing hundreds of thousands of clearance requests for servicemembers, federal employees, and industry personnel, as well as numerous impediments that hinder DOD's ability to accurately estimate and eliminate its clearance backlog, led us to declare the program a high-risk area in January 2005. The 25 areas on our high-risk list at that time received their designation because they are major programs and operations that need urgent attention and transformation in order to ensure that our national government functions in the most economical, efficient, and effective manner possible. Shortly after we placed DOD's clearance program on our high-risk list, a major change in DOD's program occurred. In February 2005, DOD transferred its personnel security investigations functions and about 1,800 investigative positions to OPM.
Now DOD obtains nearly all of its clearance investigations from OPM, which is currently responsible for 90 percent of the personnel security clearance investigations in the federal government. DOD retained responsibility for adjudication of clearance eligibility for military personnel, DOD civilians, and industry personnel. Other recent significant events affecting DOD's clearance program have been the passage of the Intelligence Reform and Terrorism Prevention Act of 2004 and the issuance of the June 2005 Executive Order No. 13381, Strengthening Processes Relating to Determining Eligibility for Access to Classified National Security Information. The act included milestones for reducing the time to complete clearances, general specifications for a database on security clearances, and requirements for greater reciprocity of clearances (the acceptance of a clearance and access granted by another department, agency, or military service). Among other things, the executive order resulted in the Office of Management and Budget (OMB) taking a lead role in preparing a strategic plan to improve personnel security clearance processes governmentwide. Using this context for understanding the interplay between DOD and OPM in DOD's personnel security clearance processes, my statement addresses two objectives: (1) key points of a billing dispute between DOD and OPM and (2) some of the major impediments affecting clearances for industry personnel. As requested by this committee, we have an ongoing examination of the timeliness and completeness of the processes used to determine the eligibility of industry personnel to receive top secret clearances. We expect to present the results of this work in the fall. My statement today, however, is based primarily on our completed work and our institutional knowledge from our prior reviews of the steps in the clearance processes used by DOD and, to a lesser extent, other agencies. In addition, we used information from the Intelligence Reform and Terrorism Prevention Act of 2004; executive orders; and other documents, such as a memorandum of agreement between DOD and OPM. We conducted our work in accordance with generally accepted government auditing standards in May 2006. DOD stopped processing applications for clearance investigations for industry personnel on April 28, 2006, despite an already sizeable backlog. DOD attributed its actions to an overwhelming volume of requests for industry personnel security investigations and funding constraints. We will address the issue of workload projections later when we discuss impediments that affect industry personnel as well as servicemembers and federal employees, but first we would like to talk about the issue of funding. An important consideration in understanding the funding constraints that contributed to the stoppage is a DOD-OPM billing dispute, which has resulted in the Under Secretary of Defense for Intelligence requesting OMB mediation. The dispute stems from the February 2005 transfer of DOD's personnel security investigations function to OPM. The memorandum of agreement signed by the OPM Director and the DOD Deputy Secretary prior to the transfer lists many types of costs that DOD may incur for up to 3 years after the transfer of the investigations function to OPM. One cost, an adjustment to the rates charged to agencies for clearance investigations, provides that "OPM may charge DOD for investigations at DOD's current rates plus annual price adjustments plus a 25 percent premium to offset potential operating losses.
OPM will be able to adjust, at any point of time during the first three year period after the start of transfer, the premium as necessary to cover estimated future costs or operating losses, if any, or offset gains, if any.” The Under Secretary’s memorandum says that OPM has collected approximately $50 million in premiums in addition to approximately $144 million for other costs associated with the transfer. The OPM Associate Director subsequently listed costs that OPM has incurred. To help resolve this billing matter, DOD requested mediation from OMB, in accordance with the memorandum of agreement between DOD and OPM. Information from the two agencies indicates that in response to DOD’s request, OMB has directed them to continue to work together to resolve the matter. The DOD and OPM offices of inspector general are currently investigating all of the issues raised in the Under Secretary’s and Associate Director’s correspondences and have indicated that they intend to issue reports on their reviews this summer. Some impediments, if not effectively addressed, could hinder the timely determination of clearance eligibility for servicemembers, civilian government employees, and industry personnel; whereas other impediments would mainly affect industry personnel. The inability to accurately estimate the number of future clearance requests and the expiration of the previously mentioned executive order that resulted in high-level involvement by OMB could adversely affect the timeliness of eligibility determinations for all types of employee groups. In contrast, an increased demand for top secret clearances for industry personnel and the lack of reciprocity would primarily affect industry personnel. A major impediment to providing timely clearances is the inaccurate projections of the number of requests for security clearances DOD-wide and for industry personnel specifically. As we noted in our May 2004 testimony before this committee, DOD’s longstanding inability to accurately project its security clearance workload makes it difficult to determine clearance-related budgets and staffing requirements. In fiscal year 2001, DOD received 18 percent (about 150,000) fewer requests than it expected, and in fiscal years 2002 and 2003, it received 19 and 13 percent (about 135,000 and 90,000) more requests than projected, respectively. In 2005, DOD was again uncertain about the number and level of clearances that it required, but the department reported plans and efforts to identify clearance requirements for servicemembers, civilian employees, and contractors. For example, in response to our May 2004 recommendation to improve the projection of clearance requests for industry personnel, DOD indicated that it was developing a plan and computer software that would enable the government’s contracting officers to (1) authorize a certain number of industry personnel clearance investigations for any given contract, depending on the number of clearances required to perform the classified work on that contract, and (2) link the clearance investigations to the contract number. Another potential impediment that could slow improvements in personnel security clearance processes in DOD—as well as governmentwide—is the July 1, 2006, expiration of Executive Order No. 13381. Among other things, this executive order delegated responsibility for improving the clearance process to the Director of OMB for about 1 year. 
We have been encouraged by the high level of commitment that OMB has demonstrated in the development of a governmentwide plan to address clearance-related problems. Also, the OMB Deputy Director met with GAO officials to discuss OMB’s general strategy for addressing the problems that led to our high-risk designation for DOD’s clearance program. Demonstrating strong management commitment and top leadership support to address a known risk is one of the requirements for removing DOD’s clearance program from GAO’s high-risk list. Because there has been no indication that the executive order will be extended, we are concerned about whether such progress will continue without OMB’s high-level management involvement. While OPM has provided some leadership in assisting OMB with the development of the governmentwide plan, OPM may not be in a position to assume additional high-level commitment for a variety of reasons. These reasons include (1) the governmentwide plan lists many management challenges facing OPM and the Associate Director of its investigations unit, such as establishing a presence to conduct overseas investigations and adjusting its investigative workforce to the increasing demand for clearances; (2) adjudication of personnel security clearances and determination of which organizational positions require such clearances are outside the current emphases for OPM; and (3) agencies’ disputes with OPM—such as the current one regarding billing—may require a high-level third party to mediate a resolution that is perceived to be impartial. As we have previously identified, an increase in the demand for top secret clearances could have workload and budgetary implications for DOD and OPM if such requests continue to occur. In our 2004 report, we noted that the proportion of requests for top secret clearances for industry personnel increased from 17 to 27 percent from fiscal years 1995 through 2003. This increase has workload implications because top secret clearances (1) must be renewed every 5 years, compared to every 10 years for secret clearances, and (2) require more information about the applicant than secret clearances do. Our 2004 analyses further showed that the 10-year cost to the government was 13 times higher for a person with a top secret clearance ($4,231) relative to a person with a secret clearance ($328). Thus, if clearance requirements for organizational positions are set higher than needed, the government’s capacity to decrease the clearance backlog is reduced while the cost of the clearance program is increased. When the reciprocity of clearances or access is not fully utilized, industry personnel are prevented from working. In addition to having a negative effect on the employee and the employer, the lack of reciprocity has adverse effects for the government, including an increased workload for the already overburdened staff who investigate and adjudicate security clearances. Problems with reciprocity of clearances or access, particularly for industry personnel, have continued to occur despite the establishment in 1997 of governmentwide investigative standards and adjudicative guidelines. 
The Reciprocity Working Group, which helped to prepare information for the governmentwide plan to improve the security clearance process, noted that "a lack of reciprocity often arises due to reluctance of the gaining activity to inherit accountability for what may be an unacceptable risk due to poor quality investigations and/or adjudications." Congress enacted reciprocity requirements in the Intelligence Reform and Terrorism Prevention Act of December 2004, and OMB promulgated criteria in December 2005 for federal agencies to follow in determining whether to accept security clearances from other government agencies. Because of how recently these changes were made, their impact is unknown. We will continue to assess and monitor DOD's personnel security clearance program at your request. We are conducting work on the timeliness and completeness of investigations and adjudications for top secret clearances for industry personnel, and we will report that information to this committee this fall. Also, our standard steps of monitoring programs on our high-risk list require that we evaluate the progress that agencies make toward being removed from the list. Lastly, we monitor our recommendations to agencies to determine whether steps are being taken to overcome program deficiencies. For further information regarding this testimony, please contact me at (202) 512-5559 or [email protected]. Individuals making key contributions to this testimony include Jack E. Edwards, Assistant Director; Jerome Brown; Kurt A. Burgeson; Susan C. Ditto; David Epstein; Sara Hackley; James Klein; and Kenneth E. Patton.

Related GAO Reports

Managing Sensitive Information: Departments of Energy and Defense Policies and Oversight Could Be Improved. GAO-06-369. Washington, D.C.: March 7, 2006. Managing Sensitive Information: DOE and DOD Could Improve Their Policies and Oversight. GAO-06-531T. Washington, D.C.: March 14, 2006. GAO's High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006. Questions for the Record Related to DOD's Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD's Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. Defense Management: Better Review Needed of Program Protection Issues Associated with Manufacturing Presidential Helicopters. GAO-06-71SU. Washington, D.C.: November 4, 2005. DOD's High-Risk Areas: High-Level Commitment and Oversight Needed for DOD Supply Chain Plan to Succeed. GAO-06-113T. Washington, D.C.: October 6, 2005. Questions for the Record Related to DOD's Personnel Security Clearance Program. GAO-05-988R. Washington, D.C.: August 19, 2005. Industrial Security: DOD Cannot Ensure Its Oversight of Contractors under Foreign Influence Is Sufficient. GAO-05-681. Washington, D.C.: July 15, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO's High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. Defense Management: Key Elements Needed to Successfully Transform DOD Business Operations. GAO-05-629T. Washington, D.C.: April 28, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005.
DOD's High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005. GAO's 2005 High-Risk Update. GAO-05-350T. Washington, D.C.: February 17, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission's Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004. DOD Personnel Clearances: Additional Steps Can Be Taken to Reduce Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-632. Washington, D.C.: May 26, 2004. DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004. Security Clearances: FBI Has Enhanced Its Process for State and Local Law Enforcement Officials. GAO-04-596. Washington, D.C.: April 30, 2004. Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004. DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004. Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed. GAO-04-242. Washington, D.C.: November 19, 2003. Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003. Defense Acquisitions: Steps Needed to Ensure Interoperability of Systems That Process Intelligence Data. GAO-03-329. Washington, D.C.: March 31, 2003. Managing for Results: Agency Progress in Linking Performance Plans With Budgets and Financial Statements. GAO-02-236. Washington, D.C.: January 4, 2002. Central Intelligence Agency: Observations on GAO Access to Information on CIA Programs and Activities. GAO-01-975T. Washington, D.C.: July 18, 2001. Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000. DOD Personnel: More Consistency Needed in Determining Eligibility for Top Secret Clearances. GAO-01-465. Washington, D.C.: April 18, 2001. DOD Personnel: More Accurate Estimate of Overdue Security Clearance Reinvestigations Is Needed. GAO/T-NSIAD-00-246. Washington, D.C.: September 20, 2000. DOD Personnel: More Actions Needed to Address Backlog of Security Clearance Reinvestigations. GAO/NSIAD-00-215. Washington, D.C.: August 24, 2000. Security Protection: Standardization Issues Regarding Protection of Executive Branch Officials. GAO/T-GGD/OSI-00-177. Washington, D.C.: July 27, 2000. Security Protection: Standardization Issues Regarding Protection of Executive Branch Officials. GAO/GGD/OSI-00-139. Washington, D.C.: July 11, 2000. Computer Security: FAA Is Addressing Personnel Weaknesses, But Further Action Is Required. GAO/AIMD-00-169. Washington, D.C.: May 31, 2000. DOD Personnel: Weaknesses in Security Investigation Program Are Being Addressed. GAO/T-NSIAD-00-148. Washington, D.C.: April 6, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/T-NSIAD-00-65. Washington, D.C.: February 16, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/NSIAD-00-12.
Washington, D.C.: October 27, 1999. Background Investigations: Program Deficiencies May Lead DEA to Relinquish Its Authority to OPM. GAO/GGD-99-173. Washington, D.C.: September 7, 1999. Department of Energy: Key Factors Underlying Security Problems at DOE Facilities. GAO/T-RCED-99-159. Washington, D.C.: April 20, 1999. Performance Budgeting: Initial Experiences Under the Results Act in Linking Plans With Budgets. GAO/AIMD/GGD-99-67. Washington, D.C.: April 12, 1999. Military Recruiting: New Initiatives Could Improve Criminal History Screening. GAO/NSIAD-99-53. Washington, D.C.: February 23, 1999. Executive Office of the President: Procedures for Acquiring Access to and Safeguarding Intelligence Information. GAO/NSIAD-98-245. Washington, D.C.: September 30, 1998. Inspectors General: Joint Investigation of Personnel Actions Regarding a Former Defense Employee. GAO/AIMD/OSI-97-81R. Washington, D.C.: July 10, 1997. Privatization of OPM's Investigations Service. GAO/GGD-96-97R. Washington, D.C.: August 22, 1996. Cost Analysis: Privatizing OPM Investigations. GAO/GGD-96-121R. Washington, D.C.: July 5, 1996. Personnel Security: Pass and Security Clearance Data for the Executive Office of the President. GAO/NSIAD-96-20. Washington, D.C.: October 19, 1995. Privatizing OPM Investigations: Implementation Issues. GAO/T-GGD-95-186. Washington, D.C.: June 15, 1995. Privatizing OPM Investigations: Perspectives on OPM's Role in Background Investigations. GAO/T-GGD-95-185. Washington, D.C.: June 14, 1995. Security Clearances: Consideration of Sexual Orientation in the Clearance Process. GAO/NSIAD-95-21. Washington, D.C.: March 24, 1995. Background Investigations: Impediments to Consolidating Investigations and Adjudicative Functions. GAO/NSIAD-95-101. Washington, D.C.: March 24, 1995. Managing DOE: Further Review Needed of Suspensions of Security Clearances for Minority Employees. GAO/RCED-95-15. Washington, D.C.: December 8, 1994. Personnel Security Investigations. GAO/NSIAD-94-135R. Washington, D.C.: March 4, 1994. Classified Information: Costs of Protection Are Integrated With Other Security Costs. GAO/NSIAD-94-55. Washington, D.C.: October 20, 1993. Nuclear Security: DOE's Progress on Reducing Its Security Clearance Work Load. GAO/RCED-93-183. Washington, D.C.: August 12, 1993. Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations. GAO/RCED-93-23. Washington, D.C.: May 10, 1993. Administrative Due Process: Denials and Revocations of Security Clearances and Access to Special Programs. GAO/T-NSIAD-93-14. Washington, D.C.: May 5, 1993. DOD Special Access Programs: Administrative Due Process Not Provided When Access Is Denied or Revoked. GAO/NSIAD-93-162. Washington, D.C.: May 5, 1993. Security Clearances: Due Process for Denials and Revocations by Defense, Energy, and State. GAO/NSIAD-92-99. Washington, D.C.: May 6, 1992. Due Process: Procedures for Unfavorable Suitability and Security Clearance Actions. GAO/NSIAD-90-97FS. Washington, D.C.: April 23, 1990. Weaknesses in NRC's Security Clearance Program. GAO/T-RCED-89-14. Washington, D.C.: March 15, 1989. Nuclear Regulation: NRC's Security Clearance Program Can Be Strengthened. GAO/RCED-89-41. Washington, D.C.: December 20, 1988. Nuclear Security: DOE Actions to Improve the Personnel Clearance Program. GAO/RCED-89-34. Washington, D.C.: November 9, 1988. Nuclear Security: DOE Needs a More Accurate and Efficient Security Clearance Program. GAO/RCED-88-28. Washington, D.C.: December 29, 1987.
National Security: DOD Clearance Reduction and Related Issues. GAO/NSIAD-87-170BR. Washington, D.C.: September 18, 1987.
Oil Reserves: Proposed DOE Legislation for Firearm and Arrest Authority Has Merit. GAO/RCED-87-178. Washington, D.C.: August 11, 1987.
Embassy Blueprints: Controlling Blueprints and Selecting Contractors for Construction Abroad. GAO/NSIAD-87-83. Washington, D.C.: April 14, 1987.
Security Clearance Reinvestigations of Employees Has Not Been Timely at the Department of Energy. GAO/T-RCED-87-14. Washington, D.C.: April 9, 1987.
Improvements Needed in the Government’s Personnel Security Clearance Program. Washington, D.C.: April 16, 1985.
Need for Central Adjudication Facility for Security Clearances for Navy Personnel. GAO/GGD-83-66. Washington, D.C.: May 18, 1983.
Effect of National Security Decision Directive 84, Safeguarding National Security Information. GAO/NSIAD-84-26. Washington, D.C.: October 18, 1983.
Faster Processing of DOD Personnel Security Clearances Could Avoid Millions in Losses. GAO/GGD-81-105. Washington, D.C.: September 15, 1981.
| The Department of Defense (DOD) is responsible for about 2 million active personnel security clearances. About one-third of the clearances are for industry personnel working on contracts for DOD and more than 20 other executive agencies. Delays in determining eligibility for a clearance can heighten the risk that classified information will be disclosed to unauthorized sources and increase contract costs and problems attracting and retaining qualified personnel. On April 28, 2006, DOD announced it had stopped processing security clearance applications for industry personnel because of an overwhelming volume of requests and funding constraints. GAO has reported problems with DOD's security clearance processes since 1981. In January 2005, GAO designated DOD's program a high-risk area because of longstanding delays in completing clearance requests and an inability to accurately estimate and eliminate its clearance backlog. For this statement GAO addresses: (1) key points in the billing dispute between DOD and OPM and (2) some of the major impediments affecting clearances for industry personnel. The costs underlying a billing dispute between DOD and OPM are contributing to further delays in the processing of new security clearance requests for industry personnel. The dispute stems from the February 2005 transfer of DOD's personnel security investigations function to OPM and associated costs for which DOD agreed to reimburse OPM. Among other things, the two agencies' memorandum of agreement for the transfer allows OPM to charge DOD annual price adjustments plus a 25 percent premium, in addition to the rates OPM charges to other federal government agencies. A January 20, 2006, memorandum from the Under Secretary of Defense for Intelligence to the Office of Management and Budget (OMB) questioned the continued need for the premiums and requested mediation from OMB. According to DOD and OPM, OMB has directed the two agencies to continue to work together to resolve the matter.
The inspectors general for both DOD and OPM are expected to report on the results of their investigations into the dispute this summer. Other impediments, if not effectively addressed, could negatively affect the timeliness of clearance-eligibility determinations for one or more of the following employee groups: industry personnel, servicemembers, and civilian government employees. All three groups are affected by DOD's longstanding inability to accurately estimate the size of its security clearance workload. Inaccurate estimates of the volume of clearances needed make it difficult to determine clearance-related budgets and staffing requirements. Similarly, the July 1, 2006, expiration of Executive Order 13381, which delegated responsibility for improving the clearance process to OMB, could potentially slow improvements in personnel security clearance processes DOD-wide as well as governmentwide. GAO has been encouraged by OMB's high level of commitment to activities such as the development of a government plan to improve personnel security clearance processes governmentwide but is concerned about whether such progress will continue after the executive order expires. In contrast, demand for top secret clearances for industry personnel and the lack of reciprocity (the acceptance of a clearance and access granted by another department, agency, or military service) are impediments that mainly affect industry personnel. A previously identified increase in the demand for top secret clearances for industry personnel has workload and budgetary implications for DOD and OPM if such requests continue to occur. Finally, the lack of reciprocity has a negative effect on employees and employers, and increases the workload for already overburdened investigative and adjudicative staff. Reciprocity problems have occurred despite the issuance of governmentwide investigative standards and adjudicative guidelines in 1997. |
Since 1955, federal agencies have been encouraged to obtain commercially available goods and services from the private sector if doing so is cost-effective. In 1966, OMB issued Circular A-76, which established federal policy for the government’s performance of commercial activities and set forth the procedures for studying them for potential contracting. In 1979, OMB issued a supplemental handbook to the circular that included cost comparison procedures for determining whether commercial activities should be performed in-house, by another federal agency through an interservice support agreement, or by the private sector. OMB updated this handbook in 1983 and again in March 1996. The March 1996 Revised Supplemental Handbook clarified numerous areas, including the application of the A-76 cost comparison requirements. The handbook’s introduction describes a wide range of options government officials must consider as they contemplate reinventing government operations. They include “the consolidation, restructuring or reengineering of activities, privatization options, make or buy decisions, the adoption of better business management practices, the development of joint ventures with the private sector, asset sales, the possible devolution of activities to state and local governments and the termination of obsolete services or programs.” The introduction also explains that “in the context of this larger reinvention effort, the scope of the Supplemental Handbook is limited to conversion of recurring commercial activities to or from in-house, contract or interservice support agreement performance.” Where A-76 cost comparison procedures apply, the initial step is to develop a performance work statement describing what is needed to perform the activity. That statement is used as the technical performance section of a solicitation for private-sector offers. The government also develops a management plan that describes the most efficient organization for in-house performance of the activity described in the performance work statement. The cost of performance by the government in accordance with the most efficient organization is compared to the cost proposed by the private-sector source selected pursuant to the solicitation. The activity will be converted to performance by the private sector if the private sector’s offer represents a reduction of at least 10 percent of direct personnel costs or $10 million over the performance period. Further information about the A-76 process is included in appendix I. In addition to A-76, the Department of Defense (DOD) must consider the effect of 10 U.S.C. 2461 when it plans changes to an industrial or commercial type function performed by its civilian employees. Section 2461, as amended by the Strom Thurmond National Defense Authorization Act for Fiscal Year 1999, Public Law 105-261, requires an analysis of the activity, including a comparison of the cost of performance by DOD civilian employees and by a contractor, to determine whether contractor performance could result in a savings to the government. It also requires DOD to notify Congress of the analysis and to provide other information prior to instituting a change in performance. The 38th EIW is an active component Air Force unit with a wartime support mission that has been greatly diminished since the end of the Cold War. Deactivation of the 38th EIW will involve multiple actions to realign the wartime mission and reassign other peacetime roles. 
The 38th EIW provides engineering and installation (E&I) services in support of the Air Force’s communications needs. It supports flight facilities, intrusion detection, ground radio, wideband/satellite systems, local area networks, cable/fiber optic distribution systems, switching systems, and other communications systems. The 38th EIW is an Air Force Materiel Command unit headquartered at Tinker AFB, Oklahoma, with squadrons at Keesler AFB, Mississippi; Kelly AFB, Texas; and McClellan AFB, California. In addition, an active duty military advisor is stationed at each of the 19 ANG units—units that also provide engineering and installation services. Currently, the 38th EIW consists of 2,343 personnel (1,358 military and 985 civilian) at these bases and various ANG locations. Table 1 shows the active component military and civilian personnel authorized for the 38th EIW at each location. The squadrons at Keesler, Kelly, and McClellan AFBs are composed primarily of military personnel. About a third of the total EIW authorized personnel (726 military and 40 civilian) perform installation services. The remainder of the military and civilian personnel perform engineering; logistics; and other support functions. The 19 ANG units noted above have 2,314 authorized guard personnel: they perform peacetime installation services as part of their training. Further, the Air Force relies on the private sector to provide E&I services using approximately 40 different indefinite delivery/indefinite quantity contracts. The 38th EIW’s structure was premised on its cold war mission of reconstituting damaged fixed communications systems (radars, phone lines, cables, etc.) at overseas bases. However, under the new Air Expeditionary Force concept, existing military forces will go into bare bases and use tactical, or mobile, communications gear. Consequently, the need to repair these fixed communications is reduced and there is greater reliance on tactical communications. Based on the reassessment of its wartime mission requirements and the Quadrennial Defense Review process (which recommended DOD improve the efficiency and performance of support activities by reengineering), the Air Force decided that the wartime E&I mission could be transferred to the ANG. At the same time, the Air Force would retain a minimal active-duty capability, provided by a new rapid response squadron at Keesler AFB. Since there will no longer be a need for the 38th EIW to supply the Air Force’s peacetime E&I needs in order to maintain wartime skills, the Air Force no longer has a requirement to maintain the large E&I infrastructure of the 38th EIW. As currently proposed, the deactivation of the 38th EIW would eliminate 1,200 of its 1,358 military positions and 552 of its 985 civilian positions. After the wing is deactivated, the remaining 158 military personnel and 433 civilian personnel will be reassigned to existing or new organizations, located principally at Tinker and Keesler AFBs. With the deactivation of the 38th EIW and transfer of the wartime mission to the ANG, other actions will also occur: The Kelly and McClellan squadrons will be disestablished concurrent with the realignment and closure actions being implemented as part of the 1995 base realignment and closure decision. All 19 active-duty authorizations at the ANG units will be eliminated. 
The squadron at Keesler AFB will become a rapid response squadron whose mission would be wartime deployment; it would also provide a quick reaction E&I capability for emergency needs and specialized engineering. A portion of the positions formerly with the 38th EIW will be reassigned to a new organizational unit at Tinker AFB that will become a base communication and information infrastructure planning and program management office. Fifty civilian authorizations that are being eliminated at Tinker will be transferred to the Air Force Communications Agency at Scott AFB, Illinois, to more closely align the telecommunications sustainment workload with the Air Force unit responsible for telecommunications policy. The wartime E&I mission will be substantially transferred to the existing ANG E&I units without an increase in authorized positions. Figure 1 portrays the planned actions. Viewed another way, over 88 percent of the 1,358 authorized military positions and 56 percent of the 985 authorized civilian positions would be eliminated, while the remainder would be shifted to other organizations. Table 2 shows the number of 38th EIW military and civilian positions that would be reduced at affected bases and the numbers reassigned to other organizations. As a result of the deactivation and restructuring, 1,752, or 75 percent, of the unit’s 2,343 positions would be eliminated, while 591 would be reassigned elsewhere. Following the deactivation of the 38th EIW, the responsibility for obtaining peacetime E&I services will be transferred to the individual major commands. These commands may acquire such services from (1) contracts, (2) the ANG E&I units, or (3) the rapid response squadron at Keesler, based on availability. The military units will need to perform some of this peacetime work to maintain their wartime skills. OMB Circular A-76 and the cost comparison requirements of its accompanying handbook apply to the conversion of the performance of a commercial activity from government civilian employees to the private sector. According to the Air Force, A-76 does not apply to its plan because the deactivation of the 38th EIW does not constitute a conversion of the performance of an activity by civilian DOD employees as envisioned under the circular. The Air Force’s changed wartime requirements have caused it to propose a realignment of the responsibilities and missions of the 38th EIW. Consequently, the original function of the 38th EIW has been fundamentally altered and the need for civilian employee support is significantly reduced. We find the Air Force’s conclusion that A-76 does not apply to be reasonable. The Air Force made a reasonable judgment in deciding that its deactivation of the 38th EIW and the restructuring of the delivery of E&I services is not subject to the requirements of 10 U.S.C. 2461 since the plan does not constitute a change from performance of a particular workload by DOD civilian employees to private sector performance. The handbook does not provide detailed guidance as to what constitutes a conversion of a commercial activity for purposes of A-76. Between 1979 and 1994, DOD conducted over 2,000 competitions using the A-76 process. Most of these involved activities, such as groundskeeping, laundry, and food service, where the conversions proposed were straightforward exchanges of a government employee workforce for a contractor workforce to perform a particular service.
An agency must base its judgment about whether A-76 applies on the individual facts of each initiative. As each case usually involves a unique situation, an agency has the discretion to determine the applicability of A-76 to its particular initiative as long as the agency has exercised its judgment reasonably. The handbook introduction explains that a commercial activity is a process resulting in a product or service that may be obtained from the private sector and that some management initiatives, such as “reengineering,” “privatization,” or “restructuring,” involving such activities are beyond conversions and are not subject to the cost-comparison requirements of A-76. Therefore, it is reasonable to interpret the guidance to mean that A-76 conversions are not intended to encompass every initiative that results in the loss of civilian government jobs. Further, the handbook provides that it is not to apply to the conversion of activities performed by uniformed military personnel. The Air Force plan to deactivate the 38th EIW and transfer its E&I activities to other organizations within the Air Force or to the ANG is a comprehensive change to the missions and responsibilities of the 38th EIW. The Air Force has decided that the 38th EIW’s wartime mission should be transferred to the ANG. As a result, it appears that the Air Force no longer has a requirement to maintain a large, centralized E&I infrastructure to train personnel to meet this mission. The peacetime E&I work was performed by the 38th EIW, in large part, to maintain its skills and capabilities to perform its wartime mission. This included a large civilian workforce performing peacetime E&I work to support the wartime mission of the uniformed military personnel. Now that this wartime mission has been transferred to the ANG, which does not need this civilian support, there is no longer a requirement to maintain the infrastructure. The type of E&I work being impacted by the Air Force plan would generally fit within the definition of commercial activity for A-76 purposes. However, the Air Force plan is not simply a changeover of this commercial activity from performance by civilian employees to private sector workers. In fact, the majority of positions affected are uniformed military personnel, which are not subject to A-76. Of 1,358 military personnel assigned to the 38th EIW, only 158 will remain. The civilians in the 38th EIW were primarily performing commercial E&I activities to provide continuity during contingencies and support for military personnel to enhance their wartime skills. Absent the military requirement, the E&I services could have been supplied by contract with the private sector. Under the restructuring, civilian positions will be lost and the different Air Force units could meet some of their new responsibilities by obtaining E&I services through contractors. However, this is an incidental result of a plan that primarily involves the reassignment of uniformed military personnel and the transfer of their responsibilities to other organizations. The civilian performance of the commercial E&I activity was essentially an adjunct of the military mission. The civilians who remain will be reassigned to different organizations and locations. Thus, we find reasonable the Air Force decision that its plan to change the wartime mission of the 38th EIW is not the type of management initiative that is subject to A-76. 
We believe that the Air Force made a reasonable judgment in deciding that its deactivation of the 38th EIW, and the restructuring of the delivery of E&I services that the deactivation necessitates, are not subject to the requirements of section 2461 since the plan does not constitute a change from performance of a particular workload by DOD civilian employees to private-sector performance. Section 2461 requires that before any commercial or industrial-type function is changed from performance by DOD civilian employees to private-sector performance, DOD must report to Congress and perform an analysis showing that private sector performance will result in a savings to the government over the life of the contract. As under A-76, the cost of performance of the function by the government employees is to be based on an estimate of the most cost-effective manner of performing the function. Section 2461 applies to initiatives that result in functions performed by DOD civilian employees being changed to performance by private-sector employees. As discussed earlier, the Air Force proposal is more than just a change of the 38th EIW function from DOD civilian employees to contractors. Rather, it is a transfer of the E&I wartime mission to the ANG units, which primarily affects uniformed military personnel who are not subject to 10 U.S.C. 2461. Once this occurs, there will no longer be a need for the 38th EIW, which was designed to support the military personnel and their wartime mission. The action being taken in this case is not a change of the kind contemplated by section 2461. While neither an A-76 cost comparison nor a section 2461 cost study was required, the Air Force nevertheless did complete a business case analysis to estimate the cost-effectiveness of restructuring the 38th EIW. That analysis showed an estimated annual recurring savings of approximately $28 million, based on reported fiscal year 1997 costs and projected costs (including contract costs) of the restructured organizations. The estimated contract costs were based on existing negotiated contract rates for an equivalent level of effort. The analysis showed that most of the recurring savings would result from engineer and installer manpower cuts and unit operations and maintenance reductions. Also, the business case analysis found that the Air Force will realize an estimated one-time savings of $33 million, of which $28 million is due to the cancellation of planned Base Realignment and Closure construction projects associated with the future realignment of Kelly AFB and the closure of McClellan AFB. (These construction projects were planned at other bases in order to accommodate the E&I workload being transferred from the squadrons at McClellan and Kelly AFBs as the result of 1995 base realignment and closure decisions.) The study also found that another $5 million will be saved due to the cancellation of building construction projects at Tinker AFB. The Air Force Audit Agency performed a management advisory review of the 38th EIW business case analysis. The Audit Agency sampled two of the five wing functions, representing 78 percent of the wing’s total functions. It concluded that the methodology the Air Force had used for its analysis was sound and that the analysis was materially correct and well documented. It also concluded that the estimate of expected savings was conservative because the Air Force used the most conservative rates in place.
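The reported savings estimates can be rolled up in a short sketch. Only the dollar amounts below come from the business case figures summarized above; the variable names, labels, and structure are illustrative additions, not anything drawn from the Air Force analysis itself.

```python
# Roll-up of the savings estimates reported in the business case analysis.
# Only the dollar amounts reflect the reported figures; the labels are
# descriptive shorthand added here for illustration.
one_time_savings = {
    "cancelled BRAC-related construction (Kelly realignment, McClellan closure)": 28_000_000,
    "cancelled building construction at Tinker AFB": 5_000_000,
}
annual_recurring_savings = 28_000_000  # manpower cuts and unit O&M reductions

print(f"Estimated one-time savings: ${sum(one_time_savings.values()):,}")   # $33,000,000
print(f"Estimated annual recurring savings: ${annual_recurring_savings:,}")
```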
We also found the analysis to be reasonable based on the cost factors and type of methodology the Air Force used. The Air Force’s proposal concerning the 38th EIW is a comprehensive change to the missions and responsibilities performed by the 38th EIW and does not constitute a conversion of civilian to contractor personnel as envisioned under A-76. Thus, the Air Force was reasonable in concluding that it did not have to undergo the A-76 process in this instance. Similarly, the planned action is not a change of the kind contemplated by 10 U.S.C. 2461. Accordingly, the Air Force was not required to perform the cost study and provide congressional notification under that provision. At the same time, the Air Force’s business case analysis supports the cost-effectiveness of the proposed action, with the reduction of a significant number of personnel. We requested comments on a draft of this report from the Secretary of Defense or his designee. On February 11, 1999, DOD officials concurred with the report findings. They also provided technical comments, which have been incorporated as appropriate. To determine whether the planned action was subject to the requirements of OMB Circular A-76, we reviewed the Air Force’s programming and implementation plans and reviewed and analyzed Circular A-76. Also, we interviewed senior officials at Air Force Headquarters, Washington, D.C.; the 38th EIW, Tinker AFB, Oklahoma; and the Office of Management and Budget, Washington, D.C. We also reviewed our prior work on A-76 issues. To determine whether the Air Force action was subject to the requirements of 10 U.S.C. 2461, we identified and reviewed relevant legislation and discussed the applicability of section 2461 with senior officials of the Office of Management and Budget and the Air Force’s Office of General Counsel. To determine whether the Air Force analyzed the cost-effectiveness of the proposed action, we reviewed its business case analysis and discussed it with the Air Force Audit Agency. We also reviewed the Air Force’s rates and cost methodology. We conducted our review from May 1998 to January 1999 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Ranking Minority Member of the Subcommittee on Readiness and Management Support, Senate Armed Services Committee; Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations; the Secretaries of Defense and the Air Force; and the Director of OMB. We will make copies available to others upon request. Please contact me at 202-512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II. In general, the A-76 process consists of six key activities. They are: (1) developing a performance work statement and quality assurance surveillance plan; (2) conducting a management study to determine the government’s most efficient organization (MEO); (3) developing an in-house government cost estimate for the MEO; (4) issuing a Request for Proposals (RFP) or Invitation for Bid (IFB); (5) evaluating the proposals or bids and comparing the in-house estimate with a private sector offer or interservice support agreement and selecting the winner of the cost comparison; and (6) addressing any appeals submitted under the administrative appeals process, which is designed to ensure that all costs are fair, accurate, and calculated in the manner prescribed by the A-76 handbook.
Figure I.1 shows an overview of the process. The solid lines indicate the process used when the government issues an IFB, requesting firm bids on the cost of performing a commercial activity. This process is normally used for more routine commercial activities, such as grass-cutting or cafeteria operations, where the work process and requirements are well defined. The dotted lines indicate the additional steps that take place when the government wants to pursue a negotiated, "best value" procurement. While it may not be appropriate for use in all cases, this process is often used when the commercial activity involves high levels of complexity, expertise, and risk. (Figure I.1 legend: Most Efficient Organization (MEO) activities; additional steps required for request for proposals (RFP).) The circular requires the government to develop a performance work statement. This statement, which is incorporated into either the IFB or RFP, serves as the basis for both government estimates and private sector offers. If the IFB process is used, each private sector company develops and submits a bid, giving its firm price for performing the commercial activity. While this process is taking place, the government activity performs a management study to determine the most efficient and effective way of performing the activity with in-house staff. Based on this "most efficient organization," the government develops a cost estimate and submits it to the selecting authority. The selecting authority concurrently opens the government's estimate along with the bids of all private sector firms. According to OMB's A-76 guidance, the government's in-house estimate wins the competition unless the private sector's offer meets a threshold of savings that is at least 10 percent of direct personnel costs or $10 million over the performance period. This minimum cost differential was established by OMB to ensure that the government would not contract out for marginal estimated savings. If the RFP—best value process—is used, the Federal Procurement Regulations and the A-76 Supplemental Handbook require several additional steps. The private sector offerors submit proposals that often include a technical performance proposal and a price. The government prepares an in-house management plan and cost estimate based strictly on the performance work statement. On the other hand, private sector proposals can offer a higher level of performance or service. The government's selection authority reviews the private sector proposals to determine which one represents the best overall value to the government based on such considerations as (1) higher performance levels, (2) lower proposal risk, (3) better past performance, and (4) cost to do the work. After the completion of this analysis, the selection authority prepares a written justification supporting its decision. This includes the basis for selecting a contractor other than the one that offered the lowest price to the government. Next, the authority evaluates the government's offer and determines whether it can achieve the same level of performance and quality as the selected private sector proposal. If not, the government must then make changes to meet the performance standards accepted by the authority. This ensures that the in-house cost estimate is based upon the same scope of work and performance levels as the best value private sector offer. After determining that the offers are based on the same level of performance, the cost estimates are compared.
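The decision rule applied at this point can be pictured with a minimal sketch in Python. The 10-percent and $10 million thresholds are the ones described above from the A-76 handbook; the function name, parameter names, and the dollar figures in the usage example are illustrative assumptions, not anything prescribed by A-76.

```python
def a76_cost_comparison(in_house_cost, private_offer, direct_personnel_cost):
    """Illustrative sketch of the A-76 minimum cost differential test.

    All amounts are totals over the full performance period. The work
    converts to private-sector performance only if the private offer
    undercuts the in-house (MEO) estimate by at least 10 percent of
    direct personnel costs or by $10 million.
    """
    savings = in_house_cost - private_offer
    meets_percent_test = savings >= 0.10 * direct_personnel_cost
    meets_dollar_test = savings >= 10_000_000
    if meets_percent_test or meets_dollar_test:
        return "convert to private-sector performance"
    return "remain in-house"

# Hypothetical example: a $7 million saving against $40 million in direct
# personnel costs clears the 10-percent test, so the work would convert.
print(a76_cost_comparison(in_house_cost=55_000_000,
                          private_offer=48_000_000,
                          direct_personnel_cost=40_000_000))
```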
As with the IFB process, the work will remain in-house unless the private offer is (1) 10 percent less in direct personnel costs or (2) $10 million less over the performance period. Participants in the process—for either the IFB or RFP process—may appeal the selection authority’s decision if they believe the costs submitted by one or more of the participants were not fair, accurate, or calculated in the manner prescribed by the A-76 handbook. Appeals must be submitted in writing and within 20 days after the date that all supporting documentation is made publicly available. The appeal period may be extended to 30 days if the cost comparison is particularly complex. Appeals are supposed to be adjudicated within 30 days after they are received. The A-76 Supplemental Handbook provides that, under certain circumstances, agencies may authorize cost comparison waivers and direct conversions to or from in-house, contract, or interservice support agreements. A waiver may be granted where (1) the conversion will result in a significant financial or service quality improvement and a finding that the conversion will not serve to reduce significantly the level or quality of competition in the future award or performance of work; or (2) the waiver will establish why in-house or contract offers have no reasonable expectation of winning a competition conducted under the cost comparison procedures of the Handbook. Additionally, the supplemental handbook provides that under certain circumstances, such as situations involving 65 or fewer full-time equivalent personnel, streamlined cost comparisons may be permitted. Kimberly Seay, Site Senior; Bonnie Carter, Senior Evaluator. | Pursuant to a legislative requirement, GAO provided information on whether the Air Force complied with relevant policy and congressional notification requirements in reaching a decision to deactivate the 38th Engineering Installation Wing (EIW) at Tinker Air Force Base (AFB), Oklahoma, focusing on: (1) the scope of the Air Force's planned action; (2) whether it is subject to the requirements of Office of Management and Budget (OMB) Circular A-76 and 10 U.S.C. 2461; and (3) whether an analysis was completed to examine the cost-effectiveness of the planned action.
GAO noted that: (1) the Air Force plans to deactivate the 38th EIW, headquartered at Tinker AFB, Oklahoma, and transfer its wartime mission to the Air National Guard without increasing the Guard's authorized end-strength; (2) about 75 percent, or 1,752, of the unit's authorized positions will be eliminated, while 591 positions will be reassigned to existing or new organizations to assume responsibilities previously assigned to the 38th EIW; (3) these changes are expected to result in one-time savings of $33 million and annual recurring savings of $28 million; (4) the proposed action is a comprehensive restructuring of an active component unit, largely transferring its wartime mission to the National Guard; (5) it is not the type of action historically associated with OMB Circular A-76 and is not a conversion as envisioned under the circular; (6) accordingly, a cost comparison under that circular is not required; (7) likewise, the planned action is not a change in the performance from civilian personnel to contractor employees of the kind subject to the requirements of 10 U.S.C. 2461; (8) accordingly, the Air Force was not required to perform the cost study and provide congressional notification under that provision; and (9) at the same time, the Air Force's business case analysis supports the cost-effectiveness of the proposed action, with the reduction of a significant number of personnel. |
As the nation’s largest health care payer, Medicare provides health insurance coverage to over 36 million elderly and disabled Americans. Medicare part A covers inpatient care in a hospital or skilled nursing facility and home health or hospice care. The care in skilled nursing facilities that part A covers—for which Medicare paid an estimated $6.6 billion in 1995—is limited to relatively short stays for patients who need daily skilled care following hospitalization. Most elderly people in nursing facilities do not qualify for part A coverage. Medicare part B covers physician services, outpatient hospital services, durable medical equipment, and various other health services. Although the vast majority of Medicare patients in nursing facilities do not require skilled nursing care, they are entitled to the full range of services and supplies covered by the Medicare part B program when part A does not pay for the nursing facility services themselves. This care is usually billed directly to Medicare by the providers who serve these patients. In 1995, Medicare paid an estimated $5.5 billion for services and supplies furnished to patients in nursing facilities. HCFA contracts with insurers, such as Blue Cross and Blue Shield plans, Aetna, and The Travelers Insurance Company, to process and pay claims submitted by providers. These contractors—referred to as carriers under Medicare part B—are responsible for the monitoring and analysis of claims both before and after payment to ensure that Medicare dollars are used to pay only reasonable and necessary claims. Carriers’ automated claims processing systems include computerized controls, or screens, that screen claims for diagnosis coding errors, billing abuses, and incorrect or incomplete documentation. Some controls deny or adjust claims payments automatically without human intervention. Others flag claims for further review by carrier personnel if the claims do not conform to payment criteria or are submitted by a provider under carrier scrutiny. Nursing facility patients—many of whom are cognitively impaired—rely on the facility to manage their care needs. Under the nursing home reform provisions established by the Omnibus Budget Reconciliation Act of 1987, nursing facilities have a major role in designing plans of care for each of their patients. The act imposed requirements that nursing facilities provide those services and activities necessary to attain or maintain the highest practicable physical, mental, and psychosocial well-being of each resident. Upon a patient’s admission to a facility, a registered nurse must conduct or coordinate a comprehensive assessment of medical, nursing, mental, and psychosocial needs. The facility must then develop a comprehensive plan of care for the patient that includes measurable objectives and timetables to meet the patient’s needs identified in the comprehensive assessment. Each assessment must be reviewed at least every 3 months; any revisions must be reflected in the corresponding plan of care. Fraudulent and abusive practices by some providers who furnish services and supplies to nursing facility patients entail billing Medicare for unnecessary or undelivered services and supplies or misrepresenting a service or supply item to obtain Medicare reimbursement as the following examples show: A company billed Medicare for heart monitoring services provided to nursing facility patients. 
The diagnoses entered on the Medicare claims forms by the laboratory were false and did not reflect the patients’ condition at the time services were ordered and rendered. Medicare overpaid this company an estimated $4.3 million. An optometrist filed Medicare claims for services either not medically necessary or not provided to nursing facility patients from 1991 through 1993. Medicare paid this practitioner over $190,000 for these services. A supplier billed Medicare for ostomy, enteral, and surgical dressing supplies that it had not delivered and forged the attending physicians’ signatures on the certificates of medical necessity using samples of signed orders found in patients’ files. This case involved about 4,000 fraudulent claims totaling about $1.5 million. These fraudulent and abusive activities involve a wide array of provider types, including ambulance service companies, dentists, medical equipment suppliers, general practitioners, internists, laboratories, optometrists, podiatrists, psychiatrists, and psychologists. The actual extent of the problem cannot be quantified, but the evidence suggests that it is widespread. Our review of ongoing and completed investigations by Medicare carriers and HHS OIG included cases encompassing at least 41 states, the District of Columbia, and Puerto Rico. Thirty of the providers in these cases operated in multiple states as these examples illustrate: A supplier currently under investigation for allegedly billing Medicare for surgical dressing kits and components that were not provided or medically necessary operates in at least 20 states. Another supplier with offices in at least seven states billed Medicare for incontinence supplies that were inflated in price, not provided, or not medically necessary. Another company under investigation for inappropriately billing Medicare for heart monitoring services operates in at least 11 states. Because data on fraud and abuse have not been accumulated based on place of service, investigators cannot quantify the extent of Medicare fraud and abuse involving the provision of services and supplies to nursing facility patients. Medicare carriers and OIG officials believe, however, that fraud involving services provided in nursing facilities is significant. In 1994, the Senate Special Committee on Aging reported a considerable number of cases involving the targeting of nursing facility patients by industries that supply products and services directly to them. Also in 1994, OIG reported on a completed investigation in which Medicare had paid over $7.4 million to a billing company (representing 70 nursing facilities in 7 states) that had billed for surgical dressings supplied to nursing facility patients who had had no surgery. And in February 1995 testimony before the House Committee on Ways and Means, Subcommittee on Health, the Inspector General reported that about half of the $230 million Medicare approved in 1993 nationally for incontinence supplies was questionable. The Inspector General noted that “the potential for great profit provides an incentive for fraudulent marketing and billing schemes which target the entire nursing home population of Medicare beneficiaries.” The nursing facility setting can be an inviting target of opportunity for the unscrupulous provider of part B services and supplies. First, a vulnerable population grouped together at a single location offers the opportunity for volume billing. 
Second, HCFA’s provisions for reimbursing providers of these part B services and supplies furnish little early warning of egregious overutilization or rapid increases in billings. Federal requirements call for nursing facilities to perform numerous tasks to monitor and meet patient care needs, but there are no similar requirements to monitor claims submitted directly to Medicare for services or supplies provided to nursing facility patients. HCFA’s reimbursement system for part B services and supplies allows providers to bill Medicare without adequate confirmation that the care or items were necessary or were delivered as claimed. As a result, the program is highly vulnerable to exploitation. Also, despite the emphasis on patient care, the cases we reviewed demonstrate that nursing facilities often do not control provider access to records closely enough and opportunists are permitted to exploit Medicare. Nursing facilities generally do not have the in-house capability to provide all the services and supplies that patients need. Accordingly, outside providers market their services and supplies to nursing facilities to meet the needs of the facilities’ patients. OIG has reported that provider representatives typically enter nursing facilities and offer to handle the entire transaction—from reviewing medical records to identify those patients their products or services can help, to billing Medicare—with no involvement by nursing facility staff. Some facilities allow providers or their representatives to review patient medical records despite federal regulations that prohibit such unauthorized review. These representatives gain access to records not because they have any responsibility for the direct care of those patients, but because they want to market their services or supplies. We found that unscrupulous providers can obtain all the information necessary to order, bill, and be reimbursed by Medicare for services and supplies that are in many instances not necessary or even provided. The following are two examples of this practice: A supplier obtained a list of Medicare patients at three nursing facilities together with their Medicare numbers from another supplier who had access to specific Medicare billing information for certain patients at these facilities. The first supplier billed Medicare for large quantities of supplies that were never provided to these patients and then paid the second supplier half of the approximately $814,000 it received in reimbursement. A group optometrical practice performed routine eye examinations on nursing home patients, a service not reimbursable by Medicare. The optometrist was always preceded by a sales person who targeted the nursing facility’s director of nursing or its social worker and claimed the group was offering eye examinations at no cost to the facility or the patient. The nursing facility gave the sales person access to patients’ records, and this person then obtained the information necessary to file claims. Nursing staff would obtain physicians’ orders for the “free” examinations, and an optometrist would later arrive to conduct the examinations. The billings to Medicare, however, were for services other than eye examinations—services that were never furnished or were unnecessary. The OIG and HHS attorneys with whom we spoke agreed that granting providers of services and supplies unauthorized access to medical records contributes to fraud and billing abuse. 
They stated that except in specific cases in which a resident’s safety is jeopardized as a result, HHS lacks clear authority to impose monetary or other penalties on nursing facilities solely for providing such access. Although carriers employ a number of effective automated controls to prevent or remedy some inappropriate payments, such as suspending for further review claims that do not meet certain conditions for payment, our work shows that outlandish charges or very large reimbursements routinely escape the controls and typically go unquestioned. The carriers we reviewed had not put any "triggers" in place that would halt payments when cumulative claims exceed reasonable thresholds. Our analysis showed that as a result, Medicare has reimbursed providers, who were subsequently found guilty of fraud or billing abuses, large sums of money over a short period without the carrier becoming suspicious. The following examples highlight the problem: A supplier submitted claims to a Medicare carrier for surgical dressings furnished to nursing facility patients. In the fourth quarter of 1992, the carrier paid the supplier $211,900 for surgical dressing claims. For the same quarter a year later, the carrier paid this supplier more than $6 million without becoming suspicious despite a 2,800-percent increase in the amount paid. A carrier’s payments for a supplier’s body jacket claims averaged about $2,300 per quarter for five consecutive quarters, then jumped to $32,000, $95,000, $235,000, and $889,000 over the next four quarters, with no questions raised by the carrier. In other instances, we found that providers subsequently investigated for wrongdoing billed and were paid for quantities of services or supplies that could not possibly have been furnished or necessary, as these examples illustrate: A carrier reimbursed a clinical psychology group practice for individual psychotherapy visits lasting 45 to 50 minutes when the top three billing psychologists in the group were allegedly seeing from 17 to 42 nursing facility patients per day. On many days the leading biller of this group would have had to work more than 24 uninterrupted hours to provide the services he claimed. A carrier paid a podiatrist $143,580 for performing surgical procedures on at least 4,400 nursing facility patients during a 6-month period. For these services to be legitimate, the podiatrist would have had to serve on average at least 34 patients per day, 5 days per week. The Medicare carriers in these two cases did not become suspicious until they received complaints from family members, beneficiaries, or competing providers. In other cases, the carriers initiated their investigations because of their analysis of paid claims (a practice referred to as postpayment medical review), which focuses on those providers that appear to be billing more than their peers for specific procedures. One carrier, for instance, reimbursed a laboratory $2.7 million in 1991 for heart monitoring services allegedly provided to nursing facility patients and $8.2 million in 1992. The carrier was first alerted in January 1993 through its postpayment review efforts when it noted that this laboratory’s claims for monitoring services exceeded the norm for its peers. In all these cases, the large increases in reimbursements over a short period or the improbable cumulative services claimed for a single day should have alerted carriers to the possibility that something unusual was happening and prompted an earlier review.
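A prepayment trigger of the kind these carriers lacked could be sketched roughly as follows. The dollar ceiling, growth factor, and field names are purely illustrative assumptions for this sketch, not parameters drawn from HCFA or any carrier's claims processing system.

```python
from collections import defaultdict

# Hypothetical prepayment screen: suspend a provider's claims for manual
# review when cumulative quarterly payments for a procedure exceed a dollar
# ceiling or grow sharply from the prior quarter. Threshold values are
# illustrative only.
DOLLAR_CEILING = 250_000   # per provider, procedure code, and quarter
GROWTH_TRIGGER = 3.0       # flag if payments triple quarter over quarter

paid = defaultdict(float)  # (provider_id, procedure_code, quarter) -> dollars paid

def screen_claim(provider_id, procedure_code, quarter, prior_quarter, amount):
    paid[(provider_id, procedure_code, quarter)] += amount
    cumulative = paid[(provider_id, procedure_code, quarter)]
    prior = paid[(provider_id, procedure_code, prior_quarter)]
    if cumulative > DOLLAR_CEILING:
        return "suspend for review: quarterly payment ceiling exceeded"
    if prior and cumulative > GROWTH_TRIGGER * prior:
        return "suspend for review: abnormal quarter-over-quarter growth"
    return "pay"
```

Reasonable ceilings and growth triggers would have to reflect what is plausible for a given procedure, provider type, beneficiary, and place of service.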
For example, people do not usually work 20-hour days, and billings by a provider for a single procedure do not typically jump 13-fold from one quarter to the next or progressively double every quarter. Although Medicare carrier postpayment reviews do lead to fraud and billing abuse investigations, from the perspective of curtailing fraudulent or abusive billing activities, we found that these reviews often happen too late. In all the cases cited previously, the carriers did investigate the providers, but by the time the investigations began, Medicare had already made millions of dollars in unwarranted payments. The risk to the Medicare program, as evidenced by the fraud cases we reviewed, is that relatively little of the money inappropriately paid out is recovered as the following examples show: The owners of a company that had received over $4.3 million from Medicare based on fraudulent claims for heart monitoring argued before the court that $250,000 was the most they could repay. In the settlement agreement, they agreed to reimburse Medicare $250,000, plus interest, and be excluded from the Medicare program for 5 years. The optometrist mentioned previously who billed for services never rendered and for undocumented consultations agreed, in a civil settlement, to refund $30,000 to Medicare over a 35-month period and to be excluded from the program for 3 years. Although Medicare had reimbursed this provider over $450,000 on the basis of false or misleading claims, the case was settled for less because the provider had no assets from which the overpayments could be recovered. Several companies under investigation since early 1993 for the submission of false claims for heart monitoring services have billed Medicare large sums over a 6- to 9-month period and then have gone out of business when Medicare began making billing inquiries. For example, within a month of Medicare’s contacting one company for medical records, the company—which had already been paid at least $1.4 million—closed its operations making it unlikely that Medicare can recover inappropriate payments. Our analysis showed that a major contributing factor to fraudulent and abusive billings for services and supplies provided nursing facility patients is that no one is ensuring that what is billed for is necessary or actually provided. Because nursing facilities already have a significant role in planning and providing patient care, they are the likely entity to scrutinize providers’ reimbursement claims for services administered to the facilities’ patients. Several options for making nursing facilities accountable for costs incurred on behalf of their patients were suggested during our discussions with federal officials and such nursing facility industry representatives as the American Health Care Association. Each option would require a basic change in the way Medicare reimburses part B services and supplies provided to nursing facility patients: Unified billing by the nursing facility: Under this approach, the nursing facility would bill Medicare for all services it is authorized to furnish to patients, whether payment is sought from part A or B. This would be the case whether the facility provided the care itself or contracted for the services or supplies to be provided by someone else. Outside providers would be prohibited from billing Medicare directly and would, in effect, have to have agreements with nursing homes. 
Absent an agreement, the nursing facility could not bill Medicare because it would not be financially liable or medically responsible for the care. By contrast, under the current system, outside providers can bill Medicare directly, without scrutiny by anyone where the care is delivered. Unified billing by the nursing facility would make it easier for Medicare to identify all the services furnished to residents, which in turn would make it easier to control payments for those services. Capping payments: An alternative to paying on a fee-for-service basis as Medicare now does is to pay a fixed amount per beneficiary. This approach mirrors the payment method Medicare uses to reimburse most health maintenance organizations (HMOs) serving Medicare beneficiaries. As with HMOs, Medicare would pay the nursing facility a fixed amount per month for all part B services and supplies provided to each resident beneficiary. A variant, one that would apply only to skilled nursing facilities, would mirror the way Medicare pays hospitals for inpatient care. As with hospitals, Medicare would pay nursing facilities a predetermined fixed amount per patient based on the type of case or diagnosis-related group into which the patient is classified. Both variants of the capped payment approach would, by definition, limit Medicare outlays for these services, eliminating providers’ incentive to provide too many services. By the same token, the incentive for nursing facilities to profit from accepting fixed payments while providing fewer services than necessary would need to be addressed. A further difficulty with these approaches is establishing reasonable rates, given the wide range of services provided to individual nursing facility patients. Under Medicare, neither the nursing facility nor the physician ensures that the services and supplies outside providers claim to have furnished to beneficiaries in nursing facilities are in fact necessary and actually provided. As a result, services provided to Medicare beneficiaries in nursing facilities offer a target of opportunity for the fraudulent schemes and billing abuses of the dishonest provider. To address the root cause of this accountability issue, Medicare needs to change the way it reimburses for services and supplies furnished patients in nursing facilities. Options range from those that can be implemented in the short term, such as allowing only nursing facilities to bill for all services and supplies provided to their patients, to the more difficult long-term solutions, such as capitation or prospective payment systems. All these options address the accountability issue, but more study is needed to assess the comparable costs and benefits of these options. Certain immediate actions would help stem losses. First, carriers could establish computerized payment controls that, before payment, would detect and automatically suspend for further review claims that exceeded established thresholds for charges and utilization. The thresholds need not be based on statistical averages, but they should be based on an assumption about what is reasonable activity for specific procedures, provider types, beneficiaries, and places of service. Such controls could provide timely warnings and trigger investigations of potentially fraudulent and abusive billings before large sums were paid out. Second, nursing facilities should be held accountable for the unauthorized disclosure of patient medical records.
Giving providers or their representatives inappropriate access to patient medical records was a major contributing cause to the fraud and abuse cases we reviewed. Although such disclosure is contrary to program regulations, HCFA or HHS generally cannot levy penalties, monetary or otherwise, against a nursing facility for that unauthorized disclosure. To curtail the practice of giving providers unauthorized access to beneficiary medical records, the Congress should authorize HHS OIG to establish monetary penalties that could be assessed against nursing facilities that disclose information from patients’ medical records not in accord with existing federal regulation. We recommend that the Secretary of HHS direct the Administrator of HCFA to establish, for procedure billing codes by provider or beneficiary, thresholds for unreasonable cumulative levels or rates of increase in services and charges, and to require Medicare carriers to implement automated screens that would suspend for further review claims exceeding those thresholds; and undertake demonstration projects designed to assess the relative costs and benefits of alternative ways to reimburse nursing facilities for part B services and supplies; these alternatives should include such options as unified billing by the nursing facility and some form of capped payment. We provided HHS an opportunity to comment on our draft report, but it did not do so in time for comments to be included in the final report. We did discuss the report’s contents with HCFA officials who generally agreed with our findings. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter. At that time, we will send copies of this report to the Secretary of HHS, the Administrator of HCFA, interested congressional committees, officials who assisted our investigation, and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-7119 or Donald B. Hunter or Roland A. Poirier, Jr. at (617) 565-7500 if you have any questions. To develop the information in this report, we interviewed officials at HCFA’s central office and the HHS Office of the Inspector General in Baltimore; three Medicare carriers responsible for processing part B claims in 9 geographic areas; three regional OIG Offices of Investigations with combined investigative jurisdiction over 17 states; one durable medical equipment regional carrier responsible for processing durable medical equipment, prosthetic, orthotic, and supply claims in 14 states and 2 U.S. territories; and trade associations representing both the nursing home industry and providers of services and supplies. Because information we obtained from the three Medicare carriers and OIG offices involves cases under active investigation, we did not disclose the locations of these offices nor did we reveal identifying details about these cases, such as the providers’ names and the states in which they operate. 
To examine the nature and extent of inappropriate and abusive billing for services and supplies to nursing facility patients, we (1) asked the carriers’ fraud units to identify cases involving services and supplies provided to patients in nursing facilities—both ongoing cases and those referred to OIG in the previous 24 months; (2) performed a detailed file review of each identified case and obtained and reviewed copies of relevant documents; (3) discussed cases with carrier officials, as needed, and met with OIG officials to discuss the status of cases that carriers had referred to them for investigation; and (4) asked OIG officials to identify any additional investigations completed within the previous 12 months and ongoing investigations dealing with services and supplies furnished to Medicare patients in nursing facilities. In all, we reviewed 70 ongoing or completed investigations. Nearly all of these investigations were initiated by either the carrier or OIG offices during the 1992 to 1994 time frame. We interviewed carrier and OIG officials. We also reviewed several independent studies related to the provision of services and supplies to nursing facility patients done by OIG and the Senate Special Committee on Aging. To examine the reasons inappropriate and abusive billings for services and supplies occur in the nursing facility setting, we reviewed carrier case files for each investigation, obtained court documents when applicable, and interviewed carrier and OIG investigators working on those individual cases. We analyzed this information to try to identify the specific programmatic factors that make services for beneficiaries in nursing facilities vulnerable to fraud and abuse. Finally, to identify options for removing opportunities for fraud and abuse, we reviewed the literature and held discussions with carrier, OIG, and trade association officials to solicit their views on ways to minimize fraud and abusive billing practices in the nursing facility environment. We conducted our review between April 1994 and August 1995 in accordance with generally accepted government auditing standards. Following are summaries of fraud cases that were closed by the time we completed our review. The defendants pled guilty or were convicted of the charges brought against them. In August 1991, a carrier received the first of three complaints from Medicare beneficiaries alleging that an optometrist was billing Medicare for more services than he actually provided. After reviewing the supporting medical records, however, the carrier determined that for these instances the services were documented as billed. In December 1991, another beneficiary called the carrier and stated that she had not received the office services claimed by this optometrist and in fact had not seen any physician during the month he claimed services were provided. Another complaint was received from the daughter of a beneficiary residing in a nursing home. The daughter stated that she had only authorized an eye examination, but the charges submitted to Medicare were for multiple services provided for both eyes—although her father had only one eye. In January 1992, the carrier’s claims processing department alerted the carrier’s program integrity unit of 98 suspicious claims submitted by this provider for services allegedly rendered in December to nursing home patients. All the claims cited exactly the same procedure codes and diagnoses; the only differences were dates of service and beneficiary identification information. 
From this universe, the carrier randomly sampled 10 beneficiary records from two nursing homes and found that the nursing homes had no records documenting the services claimed by this optometrist. On May 21, 1992, the carrier summarized this information and referred the case to OIG. By that time, Medicare had paid this provider $117,534 since 1990. OIG notified the carrier in June 1992 that it would not pursue this case criminally but that it would retain jurisdiction for civil prosecution. At OIG’s request, the carrier placed this provider on 100-percent prepayment review by the end of July, which meant that every claim the provider submitted was reviewed before payment was issued. Additionally, the carrier completed a postpayment review on this provider’s billings and reported in January 1993 that medical necessity appeared questionable for many claims and that nursing home visits and consultation requests were not documented. The optometrist requested a hearing before the Medicare hearing officer to review the carrier’s continuing reimbursement denials. The hearing officer ruled in November 1993 that all 157 claims submitted for diagnostic eye services for 154 nursing home patients between August and October 1992 were questionable. The carrier had denied all but three of these claims, but the hearing officer ruled that these too were overpaid. According to carrier files, the optometrist was still on prepayment review in March 1994. Although the optometrist’s billing volume had significantly decreased, the carrier noted that he continued to bill for the same type of procedures previously denied. As late as October 1994, the carrier was still receiving complaints about this provider. On January 16, 1995, the optometrist signed a settlement agreement in U.S. District Court whereby he admitted to knowingly filing false, misleading, or otherwise fraudulent Medicare claims for services either not medically necessary or not provided from 1991 through 1993 for which Medicare had paid him over $191,276. Under this agreement he would repay $35,000 and be excluded from the Medicare program for at least 3 years. The exclusion would extend for as long as the $35,000 assessed against him was not repaid. A beneficiary’s husband complained to the Medicare carrier that a supplier was billing Medicare for more supplies than it actually provided for his wife. The carrier referred this case to OIG, which opened its investigation on December 7, 1990. The company in question focused on publicly funded facilities in several states, such as state veterans homes or state-funded chronic disease or long-term care hospitals. The owner would approach facility administrators and offer to provide supplies for Medicare patients at no cost to the facility. This was an attractive proposition to these facilities, which typically have limited financial resources. The facility administrators gave the provider complete access to patient medical records. The suppliers reviewed all records, identified Medicare beneficiaries, obtained their Medicare numbers, developed lists of supplies based on diagnoses, identified attending physicians, and made copies of signed physician orders in the files. The supplier then billed Medicare for ostomy, enteral, or surgical dressing supplies that it delivered after September 1990, but it also billed for supplies it had never delivered to those beneficiaries over the previous 2 years. 
The supplier provided certificates of medical necessity by forging the attending physicians’ signatures on the certificates using samples of signed orders found in the patients’ files. The case involved about 4,000 fraudulent claims totaling about $1.5 million. The defendant was indicted on April 6, 1994, and pled guilty on May 5, 1994, to mail fraud, engaging in monetary transactions in criminally derived property, money laundering, and Medicare fraud. The defendant agreed to forfeit all property resulting from these fraudulent transactions, including about $328,000 in several bank accounts. The defendant was sentenced on October 19, 1994. This judgment was amended on November 21, 1994, to finalize the overpayment amounts. He was sentenced to 54 months in federal prison. Upon release from imprisonment, the defendant would be on supervised release for a term of 3 years provided he had made restitution as ordered by the court of $971,000 to Medicare and $60,000 to Medicaid. A Medicare carrier’s fraud and abuse unit received over 30 complaints from beneficiaries, family members, and other informants that a provider was billing for supplies that had not been furnished or were not needed. In August 1991, the carrier referred the case to OIG for investigation. OIG found that from April through June 1991, the provider filed claims totaling $666,560 on behalf of 1,096 beneficiaries for 65,833 liquid skin barriers. In addition, the provider filed claims totaling $11,406 on behalf of 319 beneficiaries for 1,210 adhesive removers or solvents. Medicare had paid the provider a total of $263,222 for both items, and most of the claims were for beneficiaries who were residing in nursing homes in another state. In addition to the payments made, the carrier was withholding payment on 9,500 claims totaling about $3.5 million. OIG visited the provider and found that the provider rented a small office and apparently did not furnish any supplies from this location. The provider, in effect, was a storefront operation or shell company. OIG believed the supplies originated from the provider’s parent company located in another state that also owns supply companies in at least six other states. On the basis of its investigation, OIG believed the provider had (1) billed Medicare for items that were not supplied, (2) misrepresented the items that were supplied, (3) misrepresented the place of service, (4) misrepresented the medical conditions of patients who received the supplies, and (5) paid kickbacks to nursing home officials to induce business. Also, OIG found that the provider gained access to patient medical records, obtaining diagnostic information and the names of beneficiaries’ attending physicians. The provider then entered the physicians’ names on claim forms; physician orders were often nonexistent. OIG concluded that the supplier served merely as a conduit for submitting claims to the Medicare carrier and receiving checks. The funds were then transferred to the parent company. On April 24, 1995, an officer of the parent company pled guilty to submitting more than $4.4 million of false and fraudulent claims to one Medicare carrier and similarly submitting millions of dollars worth of false and fraudulent claims to other Medicare carriers. The provider billed Medicare for supplies at inflated rates as well as for supplies that were not provided or not medically necessary—including billing supplies for patients who had died. 
In one instance, the provider billed Medicare $2,790 for surgical dressing kits that were never provided; instead, the provider furnished the patient 62 gauze bandages costing less than $20. Under a plea agreement, the officer of the company agreed to cooperate with federal authorities in their attempts to identify others involved in the conspiracy. In return, the U.S. Attorney will recommend a prison term not to exceed 15 months. After reviewing a Medicare explanation of benefits notice, a beneficiary’s son questioned why his mother—a nursing home resident—received such large quantities of supplies. In investigating the matter, the Medicare carrier determined that the provider was billing for supplies that were not provided. OIG began its investigation in July 1990. On November 8, 1990, a federal grand jury returned a 623-count indictment against the two providers it alleged obtained payment for false, fictitious, and fraudulent claims submitted for approximately 120 nursing home patients in three facilities in different states. One supplier (supplier #1) submitted the false claims to the carrier knowing that the supplies had never been provided. Another supplier (supplier #2) had given supplier #1 billing information for the patients in the three nursing facilities—patient names, Medicare numbers, diagnoses, treating physicians, and so forth. As part of the conspiracy, supplier #2 received a kickback from supplier #1 equal to 50 percent of the money received from the carrier for the false claims. Between June 7 and August 8, 1990, the carrier paid about $813,973 for false claims—an average of about $6,783 per patient. On September 30, 1991, as a result of plea-bargaining, supplier #2 pled guilty to one count of conspiracy to defraud the United States with respect to claims, one count of submitting false claims, and one count of mail fraud. On January 9, 1992, the vice president of supplier #2 was sentenced to 26 months in jail and ordered to repay $75,000 to HHS and pay a special assessment of $150. The company itself was also fined $5,000 and ordered to pay a special assessment of $100. On November 6, 1991, supplier #1 was found guilty after trial of all 623 counts. On February 4, 1992, the president of supplier #1 was sentenced to 24 months of incarceration and ordered to repay $300,000 to HHS and pay a special assessment of $31,150. The company was also fined $1,000 and ordered to pay a special assessment of $62,200. In November 1994, a civil case against supplier #1 was settled with a summary judgment. Supplier #1—which is out of business and bankrupt—was ordered to pay $4,955,000. Toward this amount, the government applied the $407,121 seized from the company’s corporate account and has up to 30 years to recover the remainder of the funds from the president of the company. On February 3, 1992, a Medicare carrier received a complaint forwarded from the daughter of a beneficiary. The complaint concerned a wheelchair cushion that should have cost less than $100 but that was charged to Medicare as a $1,503 custom-fitted body jacket. After receiving several complaints from family members of other nursing home residents and nursing home staff, the carrier noted the increased billings by the supplier submitting this claim and started to review related nursing home records. 
Carrier medical staff determined that services the supplier billed Medicare for were not provided, beneficiaries were not measured for a custom fit, the beneficiaries’ condition could not be improved by using the jackets, physical therapy was billed for but neither ordered nor received, and the physicians only signed prescriptions that had been prepared by someone else. By mid-May, the carrier had received 38 complaints against this supplier of which 34 pertained to body jackets. By mid-August, the inquiries concerning this supplier had increased to 50. The carrier referred this case to OIG on August 25, 1992. OIG’s investigation revealed that this supplier actually started marketing wheelchair cushions with adjustable straps as Medicare-reimbursable custom-fitted body jackets in 1991. Sales representatives for the supplier would market the body jackets to the directors of nursing as fully Medicare covered and promise that no real attempt would be made to collect the co-payments from the beneficiaries. They would offer to go into the nursing facility files to obtain the patient information needed to prepare claims and certificates of medical necessity. Usually, the facility allowed such access; in other cases, the facility provided the necessary information. In some cases, the supplier gained access to nursing homes by providing kickbacks in the form of payments on insurance policies or to individuals to induce them to order exclusively from this supplier. The sales people obtained physicians’ signatures on the certificates of medical necessity by assuring physicians that the director of nursing had requested the items and that the items would cost the patients nothing. The certificates would be sent to the supplier’s headquarters, where employees would add or delete wording from the signed certificates to guarantee Medicare payment. By December 1992, Medicare had paid this supplier over $1.6 million for wheelchair cushions misrepresented as body jackets. Seven persons involved in this fraudulent scheme were indicted and pled guilty; all were sentenced in fall 1994. The owner of the company was sentenced to 33 months in jail followed by 3 years on supervised release and was ordered to make restitution to Medicare of $386,508. Two other principal defendants connected with the firm were placed on probation for 5 years and each was ordered to make restitution of $386,508. Two persons convicted of receiving kickbacks were each placed on probation for 5 years and fined $9,000 and $10,000, respectively. (Two other defendants, who assisted in the investigation, pled guilty to lesser charges and executed agreements for pretrial diversion. Under these agreements, if they provided 100 hours of community service and abided by the conditions of the agreement, charges against them would be dismissed after 1 year and the record would be erased.) In September 1992, the granddaughter of a beneficiary reported to the carrier that her grandmother had never received a custom-fitted back brace (commonly referred to as a body jacket) that had been billed to Medicare by this supplier. One month later, a similar complaint was received from the husband of another beneficiary. The following month, the daughter of a third beneficiary wrote to the carrier complaining of a claim for a body jacket that her mother did not need or use. When she went to the nursing home, she learned that her mother and others had refused to accept the body jackets and the nursing home had put them in storage. 
After talking to the administrator and nurses at the home, the daughter learned that a representative of the supplier came to the nursing home, checked patients’ records, and left forms to be completed for products the patients qualified for. This complainant stated that this supplier appeared to be operating out of a residence: there was no telephone listing for the company. In December, the daughter of another beneficiary wrote to the carrier complaining about the “ridiculous” charge for a body jacket that was provided without her knowledge or consent. This supplier was established for the sole purpose of selling body jackets, which it sold primarily to nursing home patients. Company representatives went to nursing homes to sell wheelchair pads, which they asserted were reimbursable by Medicare. These representatives told nursing home personnel and beneficiaries that the beneficiaries would not be responsible for any costs not reimbursed by Medicare. The company representatives also prepared certificates of medical necessity for the wheelchair pads sold and obtained physicians’ signatures. The company then billed Medicare for body jackets. According to carrier correspondence, this supplier had been reimbursed $564,808 for body jackets from July 1, 1992, through December 31, 1993. On March 24, 1993, the carrier referred this case to OIG, noting that on the basis of the allegations and information it had obtained, the body jackets billed did not appear to be customized and the prescriptions appeared to be completed by someone other than the listed physicians. In its referral letter, the carrier reported that this supplier had received nearly $416,000 from Medicare during the last 6 months of 1992. All claims were for a $1,289 body jacket. On June 16, 1995, a plea agreement was filed in court in which the company pled guilty to one count of mail fraud and agreed to repay $450,000 to Medicare. As part of the consideration for this plea agreement, the president of this company agreed to testify before grand juries or at trials on matters involving other persons under investigation for this same type of fraudulent scheme. A Medicare carrier’s fraud and abuse unit began investigating a laboratory on January 29, 1991, on the basis of a complaint by a beneficiary’s daughter. The daughter stated her mother did not order the services for which Medicare paid the laboratory $142. Nor had the services been ordered by the beneficiary’s doctor or the nursing home where the beneficiary resided. The carrier reviewed the pertinent medical records for 15 beneficiaries in three nursing homes in two different states and found that none of the services were medically necessary. For 14 beneficiaries, there were no physician orders requesting the services, and for the remaining beneficiary, the diagnosis submitted with the claim was not documented. The carrier identified an overpayment of $10,520 for 19 claims paid during a 4-month period. On August 13, 1991, the carrier referred this case to OIG for further investigation. Following this referral, the carrier received several additional complaints against this laboratory. For example, a beneficiary’s legal guardian learned that Medicare paid the laboratory $706 for the telephonic transmission of 21 electrocardiogram (EKG) rhythm strips over a 2-week period. She contacted the beneficiary’s physician and learned that the physician did not authorize or have any prior knowledge of such tests. 
In another complaint, a former employee stated the laboratory’s salespeople contacted doctors regarding its cardiac monitoring system. If doctors signed up for the service, they had to provide listings of their heart patients in nursing homes. The laboratory then sent people to the nursing homes to perform 21 EKGs on each patient over an 11-day period, with the series of tests to be repeated three more times during the year. The laboratory charged about $1,000 per patient for the 11-day study, or about $4,000 per year per patient. However, the doctors generally did not see the results of the studies. The laboratory operated all over the country; one doctor alone had 275 patients on this program. As a result of its investigation, OIG concluded that most of the laboratory’s billings for these services were inappropriate. OIG found that the laboratory was billing Medicare for EKGs that were rendered as routine screening diagnostic tests, with no evidence of a physician evaluation and no indication that the patients were experiencing an arrhythmia, symptom, or complaint. OIG pursued the case as a civil false claims case because the diagnoses entered on the Medicare claim forms by the laboratory were false—they did not accurately reflect the patients’ conditions at the time the EKG services were ordered and rendered. OIG estimated that the actual Medicare overpayment in this case was about $6 million. On February 10, 1994, the laboratory and its three principal owners entered into an agreement to reimburse HHS $250,000 plus $16,118 in interest for inappropriate claims. In addition, the laboratory waived any future demands for $655,086 in payments (identified through a special prepayment review effort) that the Medicare carrier had withheld—thus resulting in a total settlement of $921,204. Finally, the provider and its three owners were excluded from participating in the Medicare program for a 5-year period effective February 10, 1994. The carrier initiated its review of this optometrist and the group optometry practice he owns on September 11, 1990. This review was based on a beneficiary’s letter to the carrier dated July 7, 1990, alleging that the optometrist performed only an eye examination, not the surgery he had claimed and had been reimbursed for by Medicare. By the time the carrier referred the case to OIG on July 30, 1991, it had received 19 complaints against the group practice and 2 against the individual; 19 of these complaints involved beneficiaries in nursing homes. The complaints alleged that the providers had billed Medicare for services that were never furnished or that were not necessary. According to a nursing home administrator, a representative of the group practice would visit nursing homes, review patient records, and obtain the necessary patient information to file Medicare claims. Several days later, an optometrist would visit the nursing home and render services. After reviewing nursing home medical records for 20 beneficiaries, the carrier concluded that the services billed were not medically necessary or not documented. The provider submitted the same diagnosis and tests on all claims. The claims identified a referring physician and charges for consultations, but the records contained no orders requesting consultations. On May 15, 1992, OIG requested that the carrier place both the individual optometrist and the group practice on 100-percent prepayment review. 
While this case was open, this optometrist wrote not only to the carrier but also to his congressional delegation alleging discrimination against him as an optometrist. He also enlisted the help of his patients to write to Medicare about his successes and to provide copies of those letters to their Senators and Representatives. However, the case had nothing to do with the technical merits of his optometry practice. He was billing Medicare for services never rendered to patients and for consultations for which the medical files showed no physician requests. A civil settlement was signed on December 14, 1994, and filed in U.S. District Court on January 20, 1995. Although Medicare reimbursed this provider over $451,617 based on false, misleading, or otherwise fraudulent claims, the settlement involved a refund of $30,000 to be paid over a 35-month period and voluntary exclusion from the Medicare program for 3 years. If the defendant defaults on the financial obligation, the exclusion continues until the obligation is fully satisfied. Pursuant to a congressional request, GAO reviewed allegations of fraud and abuse related to services and supplies provided to nursing facility patients, focusing on: (1) the nature and extent of such fraud and abuse; (2) why nursing facility patients are an attractive target for miscreants; and (3) options for reducing fraudulent billing practices.
GAO found that: (1) fraudulent and abusive billing of Medicare is widespread and frequent, and a wide variety of providers have been involved in Medicare fraud or abusive billing related to nursing facility patients' care; (2) most fraud and abuse involves billing Medicare for unnecessary or undelivered services and supplies or misrepresenting services to obtain reimbursement; (3) Medicare patients in nursing facilities are attractive fraud targets because of the high volume and concentration of Medicare beneficiaries in nursing facilities, easier access to patients' medical records, the ability of outside providers to bill without confirmation from anyone at the facility, the lack of sufficient and timely warning flags in Medicare's automated claim processing systems, and inadequate recovery of unwarranted payments; (4) changing Medicare's reimbursement method so that nursing facilities monitor the provision of services and supplies to their patients would require a long-term commitment and structural changes, such as unified billing or some form of capped payment; and (5) short-term steps to reduce fraud and abusive billing include instituting federal penalties for unauthorized disclosure of patients' medical records and incorporating various early warning controls into Medicare's claim processing systems.
States. We disagree. As discussed in this report, a core element of the CSI program, specifically the extent to which U.S.-bound containers carrying high-risk cargo are examined at CSI seaports, is not addressed through CBP's performance measures. Seaports are critical gateways for the movement of commerce through the international supply chain. The facilities, vessels, and infrastructure within seaports, and the cargo passing through them, all have vulnerabilities that terrorists could exploit. The containers carrying goods that are shipped in oceangoing vessels are of particular concern because they can be filled overseas at many different locations and are transported through complex logistics networks before reaching U.S. seaports. In addition, transporting such a shipping container from its international point of origin to its final destination involves many different participants and many points of transfer. The materials in a container can be affected not only by the manufacturer or supplier of the material being shipped, but also by carriers who are responsible for getting the material to a port and by personnel who load containers onto the ships. Others who interact with the cargo or have access to the records of the goods being shipped include exporters who make arrangements for shipping and loading, freight consolidators who package disparate cargo into containers, and forwarders who manage and process the information about what is being loaded onto the ship. Figure 1 illustrates many of the key participants and points of transfer involved from the time that a container is loaded for shipping to its arrival at the destination seaport and ultimately the importer. Several studies on maritime security conducted by federal, academic, nonprofit, and business organizations have concluded that the movement of oceangoing cargo in containers is vulnerable to some form of terrorist action, largely because of the number of times cargo changes hands as it moves through the supply chain. Every time responsibility for cargo in containers changes hands along the supply chain there is the potential for a security breach, creating vulnerabilities that terrorists could take advantage of by placing a WMD into a container for shipment to the United States. While there have been no known incidents of containers being used to transport WMDs, criminals have exploited containers for other illegal purposes, such as smuggling weapons, people, and illicit substances, according to CBP officials. Finally, while CBP has noted that the likelihood of terrorists smuggling WMDs into the United States in cargo containers is low, the nation's vulnerability to this activity and the consequences of such an attack are potentially high. In 2002, Booz Allen Hamilton sponsored a simulated scenario in which the detonation of weapons hidden in cargo containers shut down all U.S. seaports over a period of 12 days. The simulation estimated that the port closure could result in a loss of $58 billion in revenue to the U.S. economy, along with significant disruptions to the movement of trade. The federal government has taken many steps to secure the supply chain, including the cargo in containers destined for the United States. While CBP officials at domestic seaports continue efforts to identify and examine high-risk imports arriving in containers, CBP's post-September 11 strategy also involves focusing security efforts beyond U.S. borders to target and examine high-risk cargo before it enters U.S. seaports.
CBP’s strategy is based on a layered approach of related initiatives that attempt to focus resources on potentially risky cargo shipped in containers while allowing other containers carrying cargo to proceed without unduly disrupting commerce into the United States. CBP has initiated most of these efforts, shown in table 1. However, the Department of Energy (DOE) has led U.S. efforts to detect radiation in cargo containers originating at foreign seaports. CBP uses this computerized decision support tool to review documentation, including electronic manifest information submitted by ocean carriers on all cargo destined for the United States to help identify shipments requiring additional scrutiny. ATS utilizes complex mathematical models with weighted rules that assign a risk score to each shipment based on manifested information. CBP officers review the rule firings that support the ATS score to help them make decisions on the extent of documentary review or examination to be conducted. CBP generally requires ocean carriers to electronically transmit cargo manifests to CBP’s Automated Manifest System 24 hours before the U.S.-bound cargo is loaded onto a vessel at a foreign seaport. Carriers and importers are to provide information to CBP that is used to strengthen how ATS assigns risk scores. The cargo manifest information is submitted by ocean carriers on all arriving cargo shipments, and entry data (more detailed information about the cargo) are submitted by brokers. CSI places staff at participating foreign seaports to work with host country customs officials to target and examine high-risk cargo to be shipped in containers for weapons of mass destruction before they are shipped to the United States. CBP officials identify the high-risk containers and request that their foreign counterparts examine the contents of the containers. CBP develops voluntary partnerships with members of the international trade community comprised of importers; customs brokers; forwarders; air, sea, and land carriers; and contract logistics providers. Private companies agree to improve the security of their supply chains in return for various benefits, such as a reduced likelihood that their containers will be examined. DOE installs radiation detection equipment at key foreign seaports, enabling foreign government personnel to use radiation detection equipment to screen shipping containers entering and leaving these seaports, regardless of the containers’ destination, for nuclear and other radioactive material that could be used against the United States and its allies. Pilot program at selected CSI seaports to scan 100 percent of U.S.-bound cargo containers for nuclear and radiological materials overseas using integrated examination systems that couple nonintrusive inspection equipment and radiation detection equipment. In January 2002, CBP began CSI to target container cargo at overseas seaports so that high-risk cargo could be examined prior to departure for the United States. 
In January 2002, CBP began CSI to target container cargo at overseas seaports so that high-risk cargo could be examined prior to departure for the United States. More recently, Congress passed legislation affecting the CSI program, including (1) the SAFE Port Act, enacted in October 2006, which established a statutory framework for CSI and, among other things, required a pilot program, now known as the Secure Freight Initiative, to determine the feasibility of 100 percent scanning of U.S.-bound cargo containers at foreign seaports; and (2) the 9/11 Act, enacted in August 2007, which, among other things, requires the scanning of all U.S.-bound containers at foreign seaports by 2012, with potential exceptions if a seaport cannot meet that deadline. For the CSI program, CBP officials stated that DHS expended about $138 million and $143 million for fiscal years 2006 and 2007, respectively. The President's budget for fiscal year 2008 requested $156 million for CSI. CSI is now operating at 58 seaports in 33 foreign countries, as shown in figure 2. Appendix III lists the specific CSI seaports. According to CBP, the three core elements of CSI are (1) CBP identifying high-risk containers; (2) CBP requesting, where necessary, that host governments examine high-risk containers before they are shipped; and (3) host governments conducting examinations of high-risk containers. To integrate these elements into CSI operations, CBP negotiated and entered into bilateral, nonbinding arrangements with foreign governments, specifying the placement of CBP officials at foreign seaports and the exchange of information between CBP and foreign customs administrations. To participate in CSI, a host nation must meet several criteria developed by CBP. The host nation must have (a) a seaport with regular, direct, and substantial container traffic to seaports in the United States; (b) customs staff with the capability of examining cargo originating in or transiting through its country; and (c) nonintrusive inspection equipment with gamma or X-ray capabilities and radiation detection equipment. Additionally, each potential CSI port must indicate a commitment to (d) establish an automated risk management system for identifying potentially high-risk container cargo; (e) share critical data, intelligence, and risk management information with CBP officials; (f) conduct a seaport assessment to ascertain vulnerable links in a port's infrastructure and commit to resolving those vulnerabilities; and (g) maintain a program to prevent, identify, and combat breaches in employee integrity. As part of the arrangements with foreign governments participating in CSI, CBP most often stations teams of CBP officers at each foreign seaport to conduct CSI activities in collaboration with host government customs officials. While the number of CBP officers stationed at CSI seaports varies by location, a CSI team typically consists of (1) a CSI team leader, who manages the team and monitors the relationship with the host country; (2) CBP officers, who target high-risk cargo and observe (where possible) the host government's examination of containers carrying the cargo; (3) an intelligence research specialist, who assimilates data to support timely and accurate targeting of containers; and (4) a special agent responsible for CSI-related investigations at the seaport. According to CBP, it is ideal for the CSI team to be located in close physical proximity to its host government customs counterparts to facilitate collaboration and information sharing.
However, CBP officials also stated that the agency uses CBP officers stationed at the NTCC as needed to support the CBP officers located at the CSI seaports. The CBP officials at NTCC assist the CSI teams at high-volume seaports to ensure all containers that pass through CSI seaports are targeted to identify high-risk container cargo; carry out CSI targeting responsibilities for CSI seaports that do not have CBP officials stationed there; and, according to CBP officials, conduct targeting for U.S.-bound container cargo that does not pass through CSI seaports using national sweeps to identify high-risk container cargo. At CSI seaports, CBP officers share responsibilities with host governments’ customs officials to target and examine high-risk container cargo. Figure 3 describes the activities carried out by CBP officers and host government customs officials, respectively, to target and examine high-risk container cargo at CSI seaports. CBP has undertaken strategic planning to guide efforts to secure the international supply chain and, more specifically, to manage the CSI program. CBP contributed to an international supply chain security strategy DHS recently issued that builds on DHS’s existing strategic framework for maritime security. In 2006 CBP enhanced its strategic plan for CSI by including three key elements missing from the plan’s previous iteration, and has achieved two performance goals by expanding CSI locations and increasing the percentage of total U.S.-bound containers that pass through CSI seaports. Concurrently, CBP reported an increase in the number of high-risk containers examined by host governments participating in CSI. When it published the Strategy to Enhance International Supply Chain Security in July 2007, DHS filled a gap that had existed between broad national strategies and program-specific plans in the federal government’s strategic planning framework for maritime security. Over the last 5 years, DHS has made progress in developing a multilayered strategic framework for securing the maritime domain, including the international supply chain. This framework consists of high-level national strategies, such as the National Strategy for Maritime Security and the Maritime Commerce Security Plan, which describe the federal government’s broad approach to maritime security. These plans are supplemented by a related hierarchy of documents that includes the DHS strategic plan, the CBP strategic plan, and the CSI program’s own strategic plan. Prior to July 2007, the federal government’s maritime security framework touched on many specific aspects of maritime trade and commerce, such as how the CSI program contributes to securing containers bound for U.S. seaports. However, it did not provide a detailed description of how federal, state, and local authorities were to collaborate on supply chain security specifically. In addition, Congress included a provision in the SAFE Port Act of 2006 requiring DHS to develop a strategic plan to enhance the security of the international supply chain. Moreover, the DHS fiscal year 2007 appropriation act withheld $5 million from DHS until a comprehensive strategic plan for port, cargo, and container security, which included specific elements, had been submitted to specified congressional committees. In response, CBP contributed to the Strategy to Enhance the International Supply Chain Security, which DHS developed and issued in July 2007. 
According to DHS, the supply chain security strategy is not meant to replace other strategic planning documents, but seeks to harmonize the goals of the various plans and programs into a multilayered, unified approach that can be further developed by DHS components, including CBP. This new strategic planning document for supply chain security delineates the supply chain security roles, responsibilities, and authorities of federal, state, local, and private sector entities. The strategy seeks to build on the current multilayered strategic framework for maritime security by establishing an overarching framework for the secure flow of cargo through the supply chain—from point of origin to final destination. The strategy describes how CBP's portfolio of supply chain security initiatives—including CSI, C-TPAT, cargo screening using ATS, the 24-hour rule, and the use of nonintrusive inspection equipment to examine containers—addresses the various stages in the supply chain. In addition, the strategy provides details on how other organizations' programs or efforts—such as DOE's Megaports initiative, which places radiation detection equipment at foreign seaports—contribute to different aspects of supply chain security. Figure 4 describes the major components of the supply chain and the CBP initiatives that operate to secure them. At the program level, CBP has revised its CSI strategic plan, an important component of the DHS strategic framework described above, incorporating three critical elements that were absent from the plan's previous iteration. In our April 2005 report on CSI, we reported that the CSI strategic plan lacked three of the six key elements identified by the Government Performance and Results Act of 1993 for an agency strategic plan, including descriptions of (1) how performance goals and measures are related to program objectives, (2) the external factors beyond the control of CBP that could affect the achievement of program objectives, and (3) the evaluations that CBP conducts to monitor CSI. We noted that, given the importance of having an effective strategic plan for the program, we would continue to monitor CBP's progress in refining the plan. CBP has subsequently taken steps to address our concerns. In the most recent version of the plan, released in August 2006, CBP included information in three areas, as we had previously recommended. First, the CSI strategic plan links each performance measure to the strategic goal it supports. In addition, the plan describes how some performance measures were designed to act as proxies for program objectives that can be difficult to measure. Second, the CSI strategic plan also lists a variety of external factors that have the potential to influence CSI operations, including regional conflicts, organized crime, and changes in the political administration of a foreign government participating in CSI. Finally, the revised plan provides an explanation of the CSI team evaluation process, thus addressing the third issue identified in our April 2005 report. We discuss performance measure outcomes, other external factors, and CBP's evaluation process in greater detail later in this report. The August 2006 CSI strategic plan set specific goals for expanding the number of seaports participating in CSI, and set targets for related increases in the percentage of total U.S.-bound containers that pass through CSI seaports. As of September 2007, CBP reported meeting its goals in both of these areas.
Specifically, the plan called for CBP to expand CSI program operations from 40 to 50 seaports by the end of fiscal year 2006, and to 58 seaports by the end of fiscal year 2007 (see appendix III for a complete list of participating seaports). Having reached its goal of 58 CSI seaports, CBP reported that it currently does not plan to add more, as the costs associated with expanding the program further would outweigh the potential benefits. In addition, the plan set a performance target that by 2010, 86 percent of all U.S.-bound container cargo was to pass through CSI seaports. According to CBP, when U.S.-bound containers pass through CSI seaports there is an opportunity for high-risk cargo to be examined at the foreign seaport by the host governments participating in CSI, rather than upon arrival at a U.S. seaport. CBP reported that about 73 percent and about 80 percent of total U.S.-bound container cargo passed through CSI seaports in fiscal years 2005 and 2006, respectively, and that it reached its 2010 goal early, at approximately 86 percent, by the end of fiscal year 2007. Figure 5 shows that as the number of operational CSI seaports expanded from 2002 to 2007, the proportion of total U.S.-bound container cargo passing through CSI seaports also continued to increase. In implementing the CSI program and reaching its goal of 58 operational CSI seaports, CBP selected foreign seaports to participate in the program in three phases. CBP officials reported using the following general selection criteria for each phase: Most of the 23 phase I seaports were selected because they shipped the highest volume of U.S.-bound container cargo. The 19 phase II seaports were selected based on factors such as cargo volume, strategic threat factors, and the foreign government's level of interest in CSI. The 16 phase III seaports were selected using the phase II criteria as well as diplomatic or political considerations, such as the requests of foreign governments already participating in CSI. As CBP expanded the number of CSI seaports and increased the proportion of total U.S.-bound container cargo passing through CSI seaports, the agency also achieved increases in the security activities that occur at CSI seaports—targeting (CBP screens container cargo with ATS to produce risk scores and conducts additional review or research to ascertain risk levels) and examining high-risk container cargo (host government officials examine high-risk containers by scanning with nonintrusive inspection equipment or by physically searching the container). As of September 2007, CBP reported targeting 100 percent of all U.S.-bound container cargo to identify high-risk cargo, as required by the SAFE Port Act. In addition, foreign governments participating in CSI have examined an increasing amount of high-risk container cargo as a growing proportion of total U.S.-bound containers pass through CSI seaports. In keeping with the CSI program's risk-based approach, CBP currently does not request that host governments examine all U.S.-bound containers passing through CSI seaports, just those that CBP officers have determined to be high-risk. In fiscal year 2006, the number of high-risk containers examined by host government officials at CSI seaports increased by 77 percent from the previous year, to almost 71,000 containers. In fiscal year 2007, examinations continued to increase, reaching almost 137,000 containers.
Moreover, in fiscal year 2007, CBP reported that host government officials examined approximately 96 percent of the container cargo referred for examination. CBP reported that about 4 percent of the referrals did not lead to examinations (about 5,600 requests) because (1) logistical difficulties arose, such as the container having already been loaded on the shipping vessel (about 5,200 requests), or (2) the host government denied the request (fewer than 400 requests). CBP has made various operational improvements to CSI, though challenges remain. First, CBP has revised its human capital plan and added permanent staff at CSI seaports, though it reports difficulties in hiring and deploying qualified staff. Second, CBP's relations with the CSI host governments we spoke to that conduct cargo examinations have improved over time, though access to key examination-related information and processes is limited by host governments at some CSI seaports. Finally, CBP's ability to conduct CSI program activities involves logistical challenges that are inherent to many seaport environments, such as those that are densely packed with equipment and personnel. The ability of the CSI program to operate in accordance with its mission and objectives depends, in part, on the success of its human capital strategy—and CBP's ability to manage and deploy staff in a way that ensures that critical security functions are performed. Our April 2005 report on CSI noted that although CBP's goal is to target all U.S.-bound cargo shipped in containers at CSI seaports before the containers depart for the United States, the agency had not been able to place enough officers at some CSI seaports to do so. Specifically, CBP had developed a CSI staffing allocation model to determine the staff needed to target container cargo. However, at some CSI seaports CBP had been unable to staff the CSI teams at the levels called for in the CSI staffing model. We noted that CBP's staffing model had not, at the time, considered whether some of the targeting functions could be performed in the United States. We recommended that CBP revise its staffing model to consider what functions need to be performed at CSI seaports and what functions can be performed in the United States, optimum levels of staff at CSI ports, and the cost of locating CBP targeters overseas at CSI seaports instead of in the United States. CBP has subsequently taken several steps to increase the number of CSI officers and to implement our 2005 recommendations. For example, in response to our concerns about staffing imbalances across seaports and shortages at the highest-volume seaports, CBP has increased staffing levels, bringing them closer to those called for in its staffing model—resulting in a parallel increase in the volume of container cargo that is targeted. Also, since 2005 CBP has added 15 staff, a mix of temporary and permanent officers, to CSI targeting duty at the NTCC. In fiscal year 2007, CBP deployed an additional 125 permanent and 68 temporary officers to CSI seaports. Considering the officers at both CSI seaports and the NTCC, as of November 2007, CBP had deployed 209 CSI officers, which exceeds the 203 called for in the CSI staffing model. As a result of these efforts, CBP officials told us that they had increased their targeting of U.S.-bound container cargo from 65 percent in April 2005 to 100 percent in September 2007.
The agency also developed cost estimates for placing a mix of permanent and temporary staff at CSI seaports (with permanent staff costing about $330,000 per year and temporary staff about $275,000 per year) in response to our recommendation. CBP reported that the advantages of placing officers at CSI seaports on a permanent rather than a temporary basis include greater opportunities for enhanced communication and coordination with host governments, and less disruption due to fewer rotations into and out of the country. At one CSI port that we visited, host government customs officials told us that the presence of permanent staff facilitated increased information sharing, which over time could lead to a decrease in unnecessary examinations. Despite the progress it has made, CBP continues to face staffing challenges. CBP officials told us, for example, they continue to face challenges in obtaining sufficient numbers of qualified officers to be permanently deployed at CSI seaports. For example, CBP officials reported that only 9 qualified applicants applied for 40 permanent positions at CSI seaports. Officials told us that CSI must compete for staff with targeting or seaport experience with other CBP programs or positions, such as C-TPAT or other programs that operate at the NTCC. To fill open positions at CSI seaports, CBP officials reported that in some instances officers have been deployed who have not received all of the required training. In addition, CBP evaluation data we reviewed showed examples of CBP officers at CSI seaports lacking key skills, such as the ability to target proficiently or communicate in the local language. In addition, CBP has taken action to enhance its human capital planning process for CSI, but has not yet included important factors in its staffing allocation model. As we reported in 2005, one of the features of the CSI staffing model that may contribute to staffing imbalances was its reliance on placing officers overseas at CSI seaports. It did not consider what functions could be done in the United States. In May 2006, in response to our recommendations, CBP issued a human capital plan that did not specify that CSI targeting positions be located at CSI seaports, thus recognizing that officers could support CSI seaports from the NTCC in the United States. CBP officers assigned to the NTCC perform many of the same roles as officers at CSI seaports, including reviewing bills of lading. CBP officers at the NTCC review bills of lading for high-volume seaports where the placement of the number of CSI officers required to review all bills of lading is unfeasible. In addition, according to CBP officials, CBP officers at the NTCC review bills of lading for U.S.-bound cargo from CSI seaports where no CBP officers are stationed. Though CBP’s 2006 human capital plan generally recognizes that some CSI functions can be performed at either a CSI seaport or at the NTCC, the staffing allocation model used to calculate the number of targeters necessary to review bills of lading for each CSI port does not include factors that specify where these positions should be located. In addition, CBP’s staffing allocation model does not take into account activities other than targeting—such as witnessing host government examinations—that CSI officers perform at CSI seaports. 
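To show the kind of location and cost trade-off a revised staffing allocation model could capture, the sketch below (in Python) splits a seaport's targeting workload between on-site officers and officers at the NTCC and costs the result. The overseas per-officer figure is the permanent-officer cost CBP reported; the domestic NTCC cost, the bills-of-lading workload, the per-officer review capacity, and the host government cap are hypothetical assumptions for illustration only.

    # Rough sketch of a location-aware staffing calculation for one CSI seaport.
    # The overseas per-officer cost is the permanent-officer figure CBP reported;
    # the NTCC (domestic) cost, workload figures, and host-government cap are
    # hypothetical.
    COST_PERMANENT_OVERSEAS = 330_000    # per year, reported by CBP
    COST_NTCC_DOMESTIC = 150_000         # hypothetical domestic cost per officer

    BILLS_PER_OFFICER_PER_YEAR = 40_000  # hypothetical review capacity

    def staffing_plan(annual_bills_of_lading, onsite_officers_required, host_government_cap):
        """Split targeting work between the seaport and the NTCC and cost it out.

        onsite_officers_required covers duties that must be performed overseas,
        such as coordinating with host officials and witnessing examinations;
        the host government caps how many CBP officers it will accommodate.
        """
        targeters_needed = -(-annual_bills_of_lading // BILLS_PER_OFFICER_PER_YEAR)  # ceiling division
        onsite = min(max(onsite_officers_required, 0), host_government_cap)
        ntcc = max(targeters_needed - onsite, 0)  # remaining workload shifts to the NTCC
        cost = onsite * COST_PERMANENT_OVERSEAS + ntcc * COST_NTCC_DOMESTIC
        return {"onsite_officers": onsite, "ntcc_officers": ntcc, "annual_cost": cost}

    # Example: a high-volume seaport with 600,000 U.S.-bound bills of lading a
    # year, 4 officers needed on-site for liaison and examination duties, and a
    # host government cap of 6 officers.
    print(staffing_plan(600_000, 4, 6))
    # prints: {'onsite_officers': 4, 'ntcc_officers': 11, 'annual_cost': 2970000}

Comparing totals like these across alternative staffing mixes, including mixes of permanent and temporary officers, is the kind of cost comparison our 2005 recommendation envisions.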
According to CBP, the agency stations as many of the total officers needed as possible at the CSI seaports, but if the number of officers needed is higher than the number of officers allowed by the host government or available to be stationed in the seaport, then the remainder of the officers target from the NTCC. However, we found that CBP has still not systematically determined the optimal number of officers that need to be physically located on-site at CSI seaports to carry out duties that require an overseas presence (such as coordinating with host government officials or witnessing the examinations they conduct) as opposed to other duties that could be performed off-site in the United States (such as reviewing bills of lading and databases). Also, CBP’s revised CSI human capital plan does not include costs related to placing temporary staff at the NTCC and thus does not have the data needed to conduct a cost-benefit analysis for determining the optimal location for its CSI officers. As we noted in our 2002 report on a staffing framework for use at U.S. embassies, federal agencies should consider factors such as cost and physical security of foreign operations and consider options such as relocating staff to the United States, as part of their framework for determining the right number of staff to be placed overseas. Determining optimal staffing levels is particularly important in light of ongoing challenges CBP reports facing to identify sufficient numbers of qualified individuals to hire for the program, and in light of the program’s recent expansion to additional seaports around the world. While CBP has taken steps to implement the recommendations from our April 2005 report, further action is needed regarding the staffing allocation model. Specifically, as we recommended in 2005, the model should be revised to consider (1) what functions need to be performed at CSI seaports and what functions can be performed in the United States, (2) the optimum levels of staff needed at CSI seaports to maximize the benefits of targeting and examination activities in conjunction with host nation customs officials, and (3) the cost of locating targeting positions overseas at CSI seaports instead of in the United States. CSI’s strategic plan emphasizes the importance of CBP’s continued efforts to foster partnerships with foreign customs officials at CSI seaports to improve CSI operations. Specifically, according to CBP headquarters officials, when CSI teams stationed at foreign seaports develop strong interpersonal relations with foreign government officials, it leads to increased trust and information sharing and thus improved targeting and examination of high-risk cargo. While the extent of cooperation across all of the 58 CSI seaports now operating is difficult to quantify, our observations at 6 CSI seaports and our review of select CSI team evaluations provide examples of how collaboration can benefit the CSI program, and conversely, how the lack thereof can hinder progress. At all 6 CSI seaports we visited, CBP officers or host government officials told us that the relationship between the CSI team and the host government has been positive or has improved over time. CBP and host government officials we spoke with at all of the seaports we visited reported that establishing trust and collegiality has led to increased information sharing, resulting in more effective targeting and examination of high-risk container cargo. 
For example, CBP officers noted instances in which host customs officials would occasionally notify them of container cargo they thought could be high-risk, so that CBP could take a closer look at the information available in ATS related to the container cargo. In addition, a few CBP officers or host government officials stated that the presence of CSI teams at foreign seaports has in many instances helped to prevent unnecessary examinations because information provided by host government customs officials has led to lower risk profiles for certain container cargo. Moreover, CBP officials reported that strengthened relationships with host government officials and the trade community have led host governments to bolster their customs and port security practices. CBP officials we spoke to emphasized that, like the United States, most foreign customs administrations have traditionally focused on revenue collection and the seizure of contraband, rather than security concerns. During our visits to CSI seaports, the CBP and host government officials we spoke with reported several examples of how the presence of CSI teams at seaports has helped to expand the focus of the efforts of these foreign customs administrations and the trade community to include enhanced security practices. For example, one country developed databases with trade information to achieve its customs goals and to assist CSI after seeing how gathering historical data benefited CBP. Furthermore, at a couple of the CSI seaports we visited, the CSI team or host government officials arranged outreach meetings with the trade community to raise companies’ awareness of security practices and the benefits of providing correct and complete data about their cargo. During our visits to CSI seaports and our review of data CBP collected during its evaluations of CSI teams, we also identified instances where cooperation between CSI teams and their counterparts in the host government could be improved—though, as CBP officials noted, some of the factors involved are beyond CBP’s ability to control directly. For example, in some locations, CBP officials reported that a country may have laws that hinder the collaboration of host government officials with CSI teams. We identified the following issues during our observations at 6 CSI seaports as well as from our review of CBP data collected in fiscal year 2007 at an additional 12 CSI seaports (for a total of 18 CSI seaports):
At 9 CSI seaports, the CSI teams there reported that they interacted only infrequently with their host government counterparts or the host government officials did not readily share information that would benefit CSI, such as knowledge about potentially suspicious container cargo. In one instance, the lack of interaction was attributed to the host government’s competing priorities.
At 6 CSI seaports, host governments restricted CSI teams from viewing nonintrusive inspection equipment examinations conducted by host customs authorities or the resulting images of the container’s contents, which is one of the key purposes for staffing CBP officers at CSI seaports.
At 4 CSI seaports, host governments prohibited the use of hand-held radiation detection devices by CBP officials, which is considered by CBP to be an important way to identify a potential anomaly in a high-risk container. According to CBP officials, a few of the countries prohibit the equipment due to safety and health concerns about the use of the equipment.
At 3 of the CSI seaports, host customs officials lacked access to technical equipment, such as computers or nonintrusive inspection equipment that worked properly, which CBP believes could limit their ability to share customs-related information with CSI team members or efficiently conduct examinations. According to CBP officials, sometimes host governments lack resources to meet these technological needs.
At 6 CSI seaports in 2 countries, CBP officers at the seaport reported that host customs administrations did not provide a sufficient number of staff to assist CSI teams or the host government officials were often unavailable, which, according to CSI teams, can sometimes lead to delays in examining high-risk containers.
At 3 CSI seaports, there was evidence of challenges to effective communication, such as some CSI teams having limited proficiency in the local language.
These examples are not intended to represent the CSI program as a whole, but are included to illustrate the types of challenges that CSI teams at the seaports and CBP program managers face. CBP officials responsible for managing the CSI program have reported that overall there has been a high level of cooperation at CSI seaports, though they acknowledged that the degree of involvement and participation that CBP officers have with foreign customs officials during the examination of high-risk cargo varies by country. It is also important to note that while CBP negotiates a written, nonbinding arrangement stating expectations for inclusion in the CSI program with the participating foreign governments, the agency cannot compel foreign governments to offer information for the purposes of CSI or to examine high-risk containers. Later in this report, we describe the processes CBP has in place to address difficulties that may be identified at the CSI seaports as part of its program oversight and monitoring efforts. Another factor that can affect CBP’s ability to conduct CSI program operations involves logistical challenges that are inherent to many seaport environments. For example, as illustrated in figure 6, foreign government officials we spoke with at CSI seaports reported that many seaports are densely packed with equipment and personnel, which can make it difficult for host government customs officials to examine container cargo. According to CBP, open space to place scanning equipment or to conduct physical searches of containers can be scarce at some CSI seaports. For example, in two of the CSI locations we visited, scanning equipment and examination sites were placed several miles from where container cargo is unloaded, loaded, or stored. According to the CBP officials we spoke with, this adds to the costs and time required for examination and may result in logistical difficulties in having high-risk U.S.-bound containers examined before being loaded onto the shipping vessel. In addition, at one port we visited, the host government limited the number of containers it would examine, in part to limit the cost of examination and the amount of delay caused by moving these containers, according to the CSI team we spoke with. CBP officials reported that despite this limit of no more than 250 container examinations (out of the more than 115,000 container cargo shipments to the United States from this seaport in fiscal year 2007), the country has not denied many examination requests—only two in fiscal year 2007.
However, this ceiling was not based on risk factors, and an increase in denied requests could lead to additional containers with high-risk cargo departing for the United States without being examined. Finally, CBP officials stated that containers at seaports are generally stored in a container yard before they are loaded onto the shipping vessel. These container yards may be very large, and containers in these yards are often stacked to minimize the time required to load container vessels. As shown in figure 7, containers on a vessel may be stacked several layers deep. Accordingly, CBP and host government officials we spoke to at a few CSI seaports reported it can sometimes be challenging to access a container for examination. CBP officials noted that any examinations requested but not conducted in the CSI seaport would occur at a U.S. seaport upon arrival. CBP has enhanced how it collects CSI data by strengthening its approach to conducting periodic evaluations of CSI officers at CSI seaports through on-site evaluations of performance. However, weaknesses remain in how CBP conducts evaluations, the information collected regarding host government examination systems, and performance measurement of the program as a whole. For example, CBP does not systematically collect information on the equipment, people, and processes that are part of the host government’s overall examination system. Also, while CBP has refined and updated its performance measures, we identified remaining limitations, such as the omission of measures for all core program elements and several performance targets. CBP conducts evaluations at CSI seaports to determine the effectiveness of the program. Specifically, CBP uses these on-site evaluations to assess CSI team operations and capabilities, such as how well CSI team members use ATS to determine the risk levels associated with U.S.-bound containers passing through CSI seaports. CBP’s CSI strategic plan states that these periodic reviews are intended both to ensure that deployed CSI teams are adhering to standard operating procedures as well as to evaluate the relationships between the teams and the host customs administrations. In fiscal years 2006 and 2007, CBP reported conducting 42 and 45 evaluations, respectively. Since the program’s inception in 2002, the agency reported conducting a total of 202 evaluations. In November 2006, CBP significantly changed the way it conducts CSI team evaluations. Prior to that time, CBP officials reported that its evaluators relied on self-reported information from CSI team members on how proficiently they performed CSI program activities. CBP’s current approach to conducting CSI team evaluations seeks to provide a more thorough review of CSI team performance. According to CBP officials, the agency now requires the CSI team members under review to demonstrate their targeting competence to an evaluator, such as by physically showing the evaluator how they review information about container cargo to determine its risk level. To better assess the deployed CSI team’s performance, CBP augmented its evaluation teams with officers who have expertise in areas such as targeting and intelligence gathering. Also, CBP has developed a new software tool that enables evaluators to record evaluation data electronically, using laptop computers to conduct the on-site evaluations. 
This tool, CSI Team Evaluation (CSITE), consists of a series of yes or no questions that cover the various areas of CSI team performance, including whether all of the container cargo that the CSI team designated high-risk were examined and whether these actions were properly documented. The CSITE tool also provides guidance on each question and prompts evaluators as they conduct their review by, for example, directing them to ensure that the CSI team is using the correct settings in ATS. In addition, employing CSITE, CBP reported it can now aggregate the results of some or all of its evaluations, a capability it previously lacked, and can conduct statistical analyses of the results of the evaluations. The agency can determine, for example, what percentage of CSI team members successfully demonstrated proficiency in targeting high-risk containers. According to CBP officials, CSITE will eventually allow the agency to make comparisons of CSI performance across seaports. Moreover, CBP now retains the information it collects at CSI seaports and the resultant evaluation reports in a more systematic fashion. CBP officials acknowledged that the agency did not always store this data effectively prior to the implementation of the new evaluation system and could not provide us with documentation of all of the evaluations it had conducted since the program’s inception. While these efforts should help to strengthen the CSI team evaluation process, CBP is still not consistently collecting all available data to aid in its analysis of CSI team performance, and we identified instances in which the agency did not reconcile contradictory information it had collected. Based on our review of CBP’s documentation associated with 34 evaluations to assess the information the agency collected and its methods for doing so, we found that evaluators do not always answer all of the questions contained in CSITE. For example, the software tool instructs the CBP evaluation team to collect information on whether recommendations made in prior evaluations have been implemented. This information could allow CBP to determine whether past problems have been addressed, but it is not always provided by the evaluation team. We also identified discrepancies between (a) the CSITE checklist of questions that the evaluation team completes during the onsite evaluation, and (b) the resulting evaluation report produced by CBP headquarters officials for 2 of the 14 locations for which we had both documents to compare. At one seaport, for example, the CBP evaluation team indicated in the CSITE checklist that the CSI team did not have all of the data systems it needed to effectively target outbound shipments, whereas the evaluation report stated the team had access to all of the appropriate targeting tools and databases. With more complete information, collected in a consistent manner, CBP may be better able to determine how well CSI teams are performing, what corrective actions may be needed to improve the program, or whether the CSI program is achieving its security goals. In April 2005, we recommended that CBP establish minimum technical criteria required for the capabilities of nonintrusive inspection equipment at CSI seaports, while considering sovereignty issues with participating countries. CBP agreed to evaluate the feasibility of establishing such criteria. 
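The kind of aggregation and completeness checking that CSITE makes possible can be illustrated with a short sketch; the record layout and question labels below are hypothetical stand-ins rather than CBP's actual data structure.

```python
# Sketch of aggregating yes/no evaluation responses across CSI seaports.
# The records and question labels are hypothetical illustrations only.

evaluations = [
    {"port": "Seaport A", "high_risk_exams_documented": "yes", "prior_recs_implemented": "yes"},
    {"port": "Seaport B", "high_risk_exams_documented": "no",  "prior_recs_implemented": "yes"},
    {"port": "Seaport C", "high_risk_exams_documented": "yes", "prior_recs_implemented": None},  # left blank
]


def percent_yes(records, question):
    """Share of evaluations answering 'yes', ignoring unanswered questions."""
    answered = [r.get(question) for r in records if r.get(question) is not None]
    return None if not answered else 100.0 * answered.count("yes") / len(answered)


def unanswered(records, question):
    """Ports whose evaluators left the question blank."""
    return [r["port"] for r in records if r.get(question) is None]


for q in ("high_risk_exams_documented", "prior_recs_implemented"):
    print(q, percent_yes(evaluations, q), unanswered(evaluations, q))
```

An unanswered-question report of this kind is one simple way to surface the incomplete checklists noted above.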
In 2006, section 205(e) of the SAFE Port Act required DHS to establish minimum technical capability criteria for the use of nonintrusive inspection equipment and nuclear and radiological detection systems in conjunction with CSI, but noted that these criteria should not be designed to conflict with the sovereignty of host countries. In 2007, the 9/11 Act also required the Secretary of DHS to develop technological standards for scanning systems that will be used to conduct 100 percent scanning at foreign seaports in the future and to ensure that these and other actions implementing the act’s 100 percent scanning provisions do not violate international trade obligations and are consistent with the World Customs Organization framework or other international obligations of the United States. CSI host governments, which are responsible for conducting examinations of container cargo, purchase and operate nonintrusive inspection equipment, though as of November 2007, 13 CSI seaports use equipment on loan from the United States. The capabilities of this inspection equipment vary by manufacturer and model. The equipment may differ, for example, in its ability to penetrate steel shielding in order to generate an image of container contents, or may scan containers at different rates. Appendix IV describes the capabilities of this equipment in greater detail. As of November 2007, CBP had not yet implemented our prior recommendation or taken actions to meet the SAFE Port and 9/11 Acts’ requirements for setting minimum technical criteria. CBP officials stated that the reason for this is that they do not consider the agency to be a standard-setting organization. While CBP refers host governments to the World Customs Organization’s SAFE Framework regarding the procurement of inspection equipment, this document does not include specific technical criteria or standards. Moreover, CBP officials added that it is important to acknowledge the inherent challenges involved in efforts to ascertain the capabilities of nonintrusive inspection equipment that is owned and operated by CSI host governments. In May 2005, however, CBP put forth minimum technical criteria to evaluate the quality and performance of nonintrusive imaging inspection equipment being considered for use at U.S. seaports. These domestic standards set baseline performance requirements for penetration, contrast sensitivity, throughput, image quality, and scan size. To determine whether certain types of nonintrusive inspection equipment were acceptable for use at domestic seaports—and could meet the criteria that had been set—CBP conducted tests comparing the capabilities of nonintrusive imaging inspection equipment provided by seven manufacturers with its technical operating standards. On the basis of the test results, CBP recommended the inspection equipment from five of the seven manufacturers for use at domestic seaports, while equipment from two manufacturers was not recommended. CBP officials stated that there are no plans to systematically compare the capabilities of inspection equipment at CSI seaports against these criteria for domestic equipment due to sovereignty concerns. CBP collects limited information on certain characteristics of the inspection equipment installed at CSI seaports, such as manufacturer; however, information related to capabilities and performance is not generally obtained.
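If minimum technical criteria were extended to CSI seaports, checking reported equipment capabilities against them would be straightforward, as the sketch below suggests. The criteria categories mirror those CBP applied domestically (penetration, throughput, and so on), but every numeric value is a hypothetical placeholder; CBP's actual domestic standards are not reproduced in this report.

```python
# Sketch of comparing reported equipment capabilities against minimum criteria.
# Category names follow CBP's domestic standard areas; all numbers are
# hypothetical placeholders, not CBP's actual thresholds.

MINIMUM_CRITERIA = {
    "steel_penetration_mm": 200,   # minimum steel penetration for imaging
    "containers_per_hour": 25,     # minimum scanning throughput
}


def shortfalls(reported_specs, criteria=MINIMUM_CRITERIA):
    """Return the criteria that the reported specifications fail to meet."""
    return {name: minimum for name, minimum in criteria.items()
            if reported_specs.get(name, 0) < minimum}


# A port whose equipment falls short on penetration would be flagged, which
# could support a decision to reexamine its high-risk containers on arrival.
print(shortfalls({"steel_penetration_mm": 180, "containers_per_hour": 30}))
```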
Officials in CBP’s Office of Technology stated that they have information on the capabilities of equipment that the United States loans to other countries for 16 CSI seaports, and that only this equipment can be assured of meeting the CBP domestic requirements. However, these CBP officials said that they had neither determined which other CSI seaports use the inspection equipment that was assessed as part of CBP’s test and recommended for use at domestic seaports, nor systematically determined the specific capabilities of the equipment used at those CSI seaports. Host government officials in the countries we visited stated that they followed their country’s acquisition procedures, which included reviewing equipment capabilities and performance, among other things, for the purchase of nonintrusive imaging inspection equipment. However, CBP does not have documentation on the testing used by the host countries or the manufacturers to determine the basis for the equipment’s stated performance or whether this stated performance is less than, meets, or exceeds the criteria CBP established for equipment used at domestic seaports. According to CBP officials, the capabilities of nonintrusive inspection equipment are vetted during an assessment phase of the CSI program, when CBP is determining whether a seaport is prepared to operate within CSI. While, as part of the assessment phase, CSI officials stated that they collect descriptive technical information about the type of nonintrusive inspection equipment to be used at seaports, we did not find—in our review of CBP’s checklist used to guide its assessment teams as they examine prospective CSI seaports—questions covering inspection equipment other than general direction to ascertain whether some type of this equipment was in place. Also, through our review of CBP’s assessments of 10 CSI seaports—through which approximately 55 percent of all U.S.-bound containers passed in fiscal year 2007—we did not find any assessments that described the performance capabilities of the equipment or judgments about the proficiency of host government officials in operating these systems. CBP officials stated that the agency has never prohibited a seaport from participating in CSI on the basis of its inspection equipment, and CBP documents show that participation in the program requires only that some type of nonintrusive inspection equipment be available at or near the potential CSI port. The SAFE Port Act also directed DHS to (1) establish standard operating procedures for the use of nonintrusive inspection equipment at CSI seaports and (2) require CSI seaports to operate the equipment in accordance with the criteria and operating procedures established by DHS. Also, the 9/11 Act required DHS to develop operational standards for scanning systems that will be used to conduct 100 percent scanning at foreign seaports in the future. CBP officials stated that they recognize that the capabilities of nonintrusive inspection equipment are only one element for determining the effectiveness of examinations that take place at CSI seaports. It is better, in their view, to make assessments of the whole examination system, which includes nonintrusive inspection equipment, personnel, and processes. 
However, CBP acknowledged it does not systematically collect information on host governments’ use of examination systems and has not developed general guidelines or criteria that could provide CBP with the means to determine the quality of examinations of high-risk container cargo bound for the United States. CBP officials stated that they rely on CSI teams to notify headquarters if they have concerns about the host government’s customs or examination practices. Specifically, each CSI team leader is to meet weekly—usually via teleconference—with a CSI manager located at CBP headquarters to discuss ongoing CSI operations. However, CBP officials acknowledged that equipment, capabilities, and examination practices of host government customs personnel are not routinely discussed. CBP officials also reported that CSI team members witness most examinations of high-risk U.S.-bound containers, and their presence at the examinations would allow them to make judgments about aspects of the host government’s examination system. However, some host governments specifically prohibit CSI team members from witnessing examinations. Also, we found that CBP officials did not routinely observe inspections at one CSI seaport we visited, and were not always able to be present for inspections at two other CSI seaports because those inspections were scheduled and conducted when CBP officials were not available. CBP officials told us that their CSI team evaluations are also a means of capturing some information on various aspects of the host government’s examination system. In order to participate in CSI, CBP requires that, among other things, host governments have customs staff capable of examining cargo originating in or transiting through their countries and maintain a program to prevent breaches in employee integrity. However, the 15 CSI team evaluations we reviewed, which CBP had conducted since the agency revised its evaluation process in November 2006, showed limited coverage of whether host government customs personnel have been trained to use nonintrusive inspection equipment or are using it properly, the sufficiency of host staffing levels, and host government efforts to ensure the integrity of their customs administration. Specifically, 6 of the 15 CSI team evaluations discussed whether equipment was used properly, 1 discussed host staffing levels, and none discussed host integrity programs. CBP’s lack of a systematic way to collect information on host governments’ examination systems—including their equipment, people, and processes—potentially limits CBP’s ability to ensure that examinations of high-risk container cargo at CSI seaports can detect and identify WMD. Without information on host governments’ examination systems, CBP management may not be able to determine the reliability of the host government’s inspections of high-risk U.S.-bound container cargo. This is of particular concern since, according to CBP officials, most high-risk cargo that has already been examined at a CSI seaport is generally not reexamined once it arrives at a U.S. seaport. CBP officials stated that if problems are found in the examination process at a CSI seaport, then high-risk container cargo would be reexamined upon arrival in the United States. As already noted, CBP must respect participating countries’ sovereignty. CBP cannot require that a country use specific equipment.
However, if a high-risk container was examined using an examination system found by CBP to be less capable than established criteria, the agency could require that the container be reexamined upon arrival at a U.S. seaport. CBP officials stated that they believe that in general the equipment used by participating governments meets or exceeds the capabilities of the nonintrusive inspection equipment used at U.S. seaports. However, because CBP has not set minimum technical criteria for nonintrusive inspection equipment at CSI seaports, and the agency does not systematically review the operations of the host government examination systems at CSI seaports, CBP potentially has limited assurance that host governments’ inspection equipment is capable of detecting and identifying potential WMDs. In light of the new 9/11 Act requirement that 100 percent of U.S.-bound container cargo be scanned in the future with nonintrusive inspection equipment at foreign seaports before leaving for the United States, it is important that CBP have processes in place to gather the information necessary to ensure that cargo container examinations—and the equipment used as part of the examination process—are reliable, regardless of the point of origin. While CBP has taken steps to strengthen performance measures for the CSI program, we identified areas that did not fully address our April 2005 recommendation to develop outcome-based performance measures or proxy measures of program functions—if program outcomes could not be captured—and performance targets to track the program’s progress in meeting its objectives. Whereas CBP’s CSI team evaluations and program monitoring activities help to evaluate CSI operations at the seaport level, CBP uses performance measures to gauge the effectiveness of the overall program in meeting its broader strategic objectives for CSI across seaports. By definition, performance measures are a particular value or characteristic used to quantify a program’s outputs—which describe the products and services delivered over a period of time—or outcomes—which describe the intended result of carrying out the program. A performance target is a quantifiable characteristic that establishes a goal for each measure; agencies can determine the program’s progress, in part, by comparing the program’s measures against the targets. For example, the target of one of CBP’s performance measures—the “number of operational CSI seaports”—was to have 58 CSI seaports operating in fiscal year 2007, which the agency achieved as described previously in this report. The Government Performance and Results Act of 1993 incorporated performance measurement as one of its most important features, and the establishment and review of performance measures are a key element of the standards for internal control within the federal government. As discussed in the Government Performance and Results Act of 1993 and as we reported in 1996, measuring performance allows organizations to track progress being made toward specific goals and provides managers crucial information upon which to base their organizational and management decisions. In addition, leading organizations recognize that performance measures can create powerful incentives to influence organizational and individual behavior. In the past 2 years, CBP has made efforts to refine and modify its performance measures as the CSI program has matured.
Since 2005, for example, CBP has eliminated five performance measures that it had used to track the implementation of seaports participating in CSI, measures that CBP determined were no longer needed because CSI operations were under way at the majority of planned CSI seaports. Also, in our April 2005 review of CSI, we identified a CSI performance measure that was calculated inappropriately, and in response, CBP modified how the measure was calculated to address our concerns. Specifically, for the CSI measure that tracks the number of container examinations waived because they are determined to be unnecessary, CBP began excluding inappropriate data that made the results of the performance measure misleading. This was an important modification because, as we reported in November 2002, measures that are defined inconsistently with how they are calculated can be confusing and create the impression that performance is better or worse than it actually is. CBP has made efforts to enhance CSI performance measures, but we identified limitations in the information available for CSI program managers to assess the program. In the past, we and the Office of Management and Budget have encouraged federal departments and agencies to measure whether programs are achieving their intended outcomes, such as CSI’s purpose of protecting global trade from being exploited by international terrorists. However, we and the Office of Management and Budget have acknowledged the difficulty in developing outcome measures for programs that aim to deter or prevent specific behaviors. In such an instance, we have reported that proxy measures should be designed to assess the effectiveness of program functions. CBP officials reported the agency has not been able to develop a way to measure the deterrence effect of the program, as CSI is designed to support the CBP mission to prevent and deter terrorists and terrorist weapons from entering the United States. Examples of CSI program functions include targeting and examining high-risk container shipments before they are loaded on vessels bound for the United States, and in our 2005 review of CSI we provided guidance on an alternative method of developing proxy measures to evaluate program performance. Further, according to the Office of Management and Budget, proxy measures should be closely tied to the intended program outcome, and it may be necessary to have a number of proxy measures to help ensure sufficient safeguards are in place to account for performance results. According to CBP officials, the following three of its existing performance measures were proxies for program outcomes:
(1) The percentage of worldwide U.S.-bound containers passing through CSI seaports—since these containers are to be targeted and, if determined high-risk, may be examined by host government officials, this is a measure of the program goal to detect and prevent WMDs headed to U.S. seaports from leaving foreign seaports.
(2) The number of foreign mitigated examinations (that is, examinations determined to be unnecessary due to information provided by host government officials and thus waived) by category—developed to quantify whether collocating CBP officials at CSI seaports increases information sharing and collaboration.
(3) The number of intelligence reports based on CSI foreign sources—intended to measure whether having CBP officials located at foreign seaports leads to increased collaboration with foreign customs officials.
The Office of Management and Budget has stated that performance measures should capture the most important aspects of a program’s mission and priorities. However, CBP does not have a measure that tracks the extent to which U.S.-bound containers carrying high-risk cargo are examined at CSI seaports, despite the fact that this activity is a core element of the CSI program. CBP has taken other actions to address our April 2005 recommendation that includes ways to improve CSI performance measures, but we found additional weaknesses as well. The strategic plan demonstrated how each performance measure corresponds to the three strategic goals of CSI, which include (1) securing U.S. borders, (2) building a robust CSI cargo security system, and (3) protecting and facilitating trade. This marked an improvement, as this linkage had not been made previously. In addition, CBP addressed an additional aspect of our prior recommendation by establishing performance targets for four of the six CSI performance measures currently used. However, only one measure had a target for multiple years. In addition, since issuing the CSI strategic plan the agency has not updated its performance targets for fiscal year 2008 or beyond for any of its measures. Without this information about the performance targets, it may be difficult for CBP to determine whether the results were more positive or negative than expected. Also, we identified a weakness in how some CSI performance measures are calculated. As we noted earlier in this report, as the number of CSI seaports has increased in recent years, program activities have increased as well. However, CBP does not appropriately control for this program growth in how it calculates three of its six performance measures. For example, since the “number of foreign mitigated examinations by category”—the number of container examinations determined to be unnecessary due to information provided by host government officials—is not calculated on a per-container basis (i.e., per 10,000 containers), it may be difficult to determine whether fluctuation in the numbers across years is due to (1) increased collaboration with foreign government officials or (2) simply an increase in the number of containers reviewed and considered for examination at the increasing number of CSI seaports. Similarly, the number of intelligence reports and the number of investigative cases initiated may be due to an increase in the number of operational CSI seaports, not increased collaboration with host government officials. Without controlling for program growth, CBP’s calculation of results for its performance measures may be misleading or confusing to CBP and DHS program managers or the Congress, who provide program oversight. Since we began reporting on the CSI program in 2003, CBP has made significant progress in expanding and developing the program. However, CBP continues to face several management and operational challenges, which may limit CBP’s ability to ensure that the CSI program provides the intended level of security for U.S.-bound container cargo moving through the international supply chain. Also, balancing security concerns with the need to facilitate the free flow of commerce remains an ongoing challenge for CBP. Recognizing that program evaluation data are important for program managers to understand why results occur and what value a program adds, CBP has taken actions to enhance its evaluation of CSI team activities. 
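Controlling for program growth in the way described above amounts to a simple rate calculation. The sketch below normalizes the mitigated-examinations measure per 10,000 containers reviewed; the annual counts are hypothetical, since the underlying figures are not published in this report.

```python
# Sketch of normalizing a CSI measure per 10,000 containers reviewed, so that
# results are comparable as the program grows. All counts are hypothetical.

def per_ten_thousand(mitigated_exams, containers_reviewed):
    return 10_000 * mitigated_exams / containers_reviewed


# Raw counts rise year over year, but the normalized rate stays flat,
# suggesting growth in volume reviewed rather than increased collaboration.
print(per_ten_thousand(mitigated_exams=400, containers_reviewed=8_000_000))    # 0.5
print(per_ten_thousand(mitigated_exams=550, containers_reviewed=11_000_000))   # 0.5
```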
The revised evaluation program has increased the information available to make policy and programmatic decisions regarding the operations at the CSI seaports. However, limitations that remain in CBP’s evaluation process affect the accuracy and completeness of the program information available for making sound management decisions about the CSI program as a whole. Specifically, when CBP’s evaluation teams do not complete the evaluation tools or resolve contradictory information, program managers may receive limited or inaccurate information. Further, when the data collected using the CSI evaluation tool during the evaluations are not reliable and readily available for assessment, CBP’s planned programwide trend analyses of the CSI program may be misleading. In assessing CSI performance, CBP lacks information about a very important aspect of the program—the overall examination systems used by the host governments to examine high-risk cargo shipped in containers as requested by CBP. CBP’s efforts have led to the successful participation of a wide array of foreign governments in the CSI program, and CBP has established many cooperative relationships with its foreign partners. While we acknowledge the agency cannot force security requirements upon foreign governments, the lack of information systematically gathered about the examination systems used by participating governments is problematic. Data about the equipment, people, and processes involved in the examination system are vital for determining whether high-risk U.S.-bound containers have been properly examined or should be examined or reexamined upon arrival at a U.S. seaport. CBP lacks guidelines and criteria for most of the equipment and the people and processes used by host government examination systems—as required, in some instances, by the SAFE Port and 9/11 Acts—for evaluating CSI seaport operations and determining overall program effectiveness. In light of the new 9/11 Act requirement that 100 percent of U.S.-bound container cargo be scanned in the future at foreign seaports before leaving for the United States, it is important that CBP have programs in place to gather the information necessary to ensure that cargo container examinations—and the equipment used as part of the examination process—are reliable, regardless of the point of origin. Program evaluations are just one source of information that managers need to make decisions, and evaluation data must often be coupled with performance measurement to assess overall program results. Measuring the overall impact of the CSI program remains difficult due to the challenges involved in creating effective performance measures, and because of the great difficulty in measuring the deterrent effect of the program. As we and the Office of Management and Budget have reported, performance measurement can be very valuable to program managers, as the process can indicate what a program is accomplishing and whether intended results are being achieved. Measuring program performance encourages managers to focus on the key goals of a program and helps them by providing information on how resources and efforts are best allocated to ensure effectiveness. Though CBP identified performance measures it considers proxies for program outcomes (given the difficulty in assessing the deterrent effect of CSI), these measures do not cover a key core program function, for example, a performance measure for the number of high-risk U.S.-bound containers examined at CSI seaports.
Finally, without clearly developed performance targets for each of its measures, program managers, Congress, and the public lack information needed to determine the extent to which the CSI program is performing as intended. Taken as a whole, the lack of clearly articulated performance measures and accurate and reliable evaluative data may hinder CBP’s ability to ensure that the resources it expends for CSI effectively achieve its goal of helping to secure U.S. borders against terrorists and terrorist weapons. To help ensure that CBP has the information needed to assess its achievement of CSI program goals to help enhance supply chain security—while at the same time balancing security concerns with the need to facilitate the free flow of commerce—we recommend that the Secretary of Homeland Security direct the Commissioner of U.S. Customs and Border Protection to take the following actions in three areas:
Strengthen CBP’s process for evaluating CSI teams at overseas ports by (a) systematically capturing and maintaining all relevant evaluation data and documentation so that it can be used by CBP management to guide operating decisions, monitor program performance, and inform resource allocation decisions; (b) ensuring that CSI evaluation teams follow established evaluation procedures; and (c) monitoring the completion, within established time frames, of recommendations made in previous evaluations.
In collaboration with host government officials, improve the information gathered about the host governments’ examination systems—which include people, processes, and equipment—at each CSI port by (a) establishing general guidelines and technical criteria regarding the minimal capability and operating procedures for an examination system that can provide CBP with a basis for determining the reliability of examinations and related CSI activities; (b) systematically collecting data for that purpose; and (c) analyzing the data against the guidelines and technical criteria to determine what, if any, mitigating actions or incentives CBP should take to help ensure the desired level of security.
Enhance CSI performance measures to better assess CSI performance overall by (a) developing measures for all core CSI program functions designed to have a deterrent effect, (b) establishing annual performance targets—based on explicit assumptions—for all performance measures, and (c) revising how performance measures are calculated to take into account CSI program growth.
We provided a draft of this report to the Department of State and the Department of Homeland Security for their review and comment. The Department of State did not provide written comments but provided technical comments, which have been incorporated into the report as appropriate. DHS provided written comments—incorporating comments from CBP—on December 20, 2007, which are presented in Appendix II. In commenting on a draft of this report, DHS noted that it concurred with one recommendation and partially concurred with the remaining two recommendations. In its written comments, DHS and CBP concurred with our recommendation on strengthening CBP’s process for evaluating CSI teams at overseas locations. Specifically, CBP noted that by June 2008, it planned to establish a database that would contain all recommendations and action plans as a result of CSI port evaluations as well as due dates for implementing recommendations and actions taken.
To ensure that CSI evaluation teams follow procedures, CBP indicated that it would make it mandatory that the teams complete all database fields. Furthermore, CBP reported that it would assign values to questions in its evaluation tool on the basis of the criticality of the activity evaluated in each question to CSI’s mission as a whole. DHS commented that CBP partially concurred with our second recommendation to improve information gathered about host governments’ examination systems by (a) establishing general guidelines and technical criteria regarding the minimal capability and operating procedures; (b) systematically collecting data for that purpose; and (c) analyzing the data against the guidelines and technical criteria. CBP agreed on the importance of an accepted examination process and noted that it continues to take steps to improve the information gathered about host governments’ examination systems at CSI seaports by working directly with host government counterparts, working through the World Customs Organization, and providing capacity-building training and technical assistance. While CBP does engage in capacity building, it does so with only 5 of the 33 countries with CSI ports. CBP also stated that it will continue to use the WCO through its SAFE Framework of Standards to address a uniform customs process and technical standards for equipment. However, the SAFE Framework mentions no specific technical capability criteria for inspection equipment. Additionally, CBP does not systematically collect or assess information on the people, processes, or technology used by these host governments to examine high-risk U.S.-bound containers. CBP also noted in its comments on this report that equipment used for inspection of containers in foreign countries is equal to or better than the equipment used by CBP at its domestic ports. While CBP has performance information for the 16 seaports that have inspection equipment on loan from CBP, it is not in a position to assess the performance of equipment used at the remaining 42 CSI seaports. Although we repeatedly requested systematic information regarding the equipment technical capabilities in these other ports, CBP officials were unable to provide it to us. In response to our 2005 report, CBP stated that it would evaluate the feasibility of technical requirements for nonintrusive inspection equipment, but a legal issue may exist regarding CBP’s ability to impose such requirements. While we understand CBP’s position, it could still gather information on such equipment’s technical capabilities. Because the CSI inspection might be the only inspection of a container before it enters the United States, it is important that information on the people, processes, and equipment used as part of CSI be obtained and assessed to provide some level of assurance of the likelihood that the examination system could detect the presence of WMD. If a port’s examination system were determined to be insufficient, CBP could take mitigating actions, such as re-examining container cargo upon its arrival at a domestic seaport. Finally, DHS commented that CBP partially concurred with our third recommendation to enhance CSI performance measures to better assess CSI performance overall. CBP stated that it believes its current measures address core program functions of targeting and collaboration with host governments to mitigate or substantiate the risk of a maritime container destined for the United States. We disagree.
As discussed earlier in this report, a core element of the CSI program, specifically the extent to which U.S.-bound containers carrying high-risk cargo are examined at CSI seaports, is not addressed through CBP’s performance measures. In its comments, CBP stated that its outcome performance indicator captures the number of foreign mitigated examinations by category; however, CBP did not respond to our requests for more information regarding these categories, including whether risk was a category. Although it considers action on this recommendation completed, CBP noted its intention to continue to refine, evaluate, and implement measures to track progress toward meeting CSI objectives. As previously stated, since issuing the CSI strategic plan, CBP has not updated its performance targets for fiscal year 2008 or beyond for any of its measures. Thus, we believe additional action is warranted. Establishing annual targets for performance measures is important, as agencies can determine the program’s progress, in part, by comparing the performance measures against the targets. In addition, CBP did not address whether it plans to reconsider how it calculates some of its performance measures to control for CSI program growth. Without doing so, CBP’s calculation of results for its performance measures may be misleading or confusing to CBP and DHS program managers, or the Congress, who provide program oversight. DHS and CBP also provided technical comments, which have been incorporated into the report as appropriate. If you or your staff have any questions about this report, please contact me at (202) 512-9610 or at [email protected]. Key contributors to this report are listed in appendix VI. This report will also be available at no charge on the GAO Web site at http://www.gao.gov.
We addressed the following issues regarding the U.S. Customs and Border Protection’s (CBP) Container Security Initiative (CSI):
How has CBP contributed to strategic planning for supply chain security efforts and the CSI program in particular, and what progress has been made in achieving CSI performance goals?
How has CBP strengthened CSI operations in response to our 2005 review, and what challenges, if any, remain?
How does CBP evaluate CSI port operations and assess program performance overall, and how has this process changed over time?
To address our first objective, we reviewed the strategic plans of the Department of Homeland Security (DHS), CBP, and CSI as well as national strategies like the National Maritime Security Strategy and the Strategy to Enhance International Supply Chain Security. We also analyzed the CSI strategic plan to determine whether it includes all of the key elements included in the Government Performance and Results Act. In addition, to measure CSI’s progress in meeting its performance goals, we reviewed and analyzed CBP data related to the number of CSI seaports, the cargo CBP targeted and referred to the host government to examine, and the number of cargo containers that were (and were not) examined by host government officials at the CSI seaports. We also met with CBP officials responsible for managing the CSI program, from the CSI Strategic Planning and Evaluation Branch, and from CBP’s Office of Field Operations and Office of International Affairs and Trade Relations, not only to gather information about CSI strategic planning and performance goals, but to discuss all of the issues within the scope of this review.
To examine CBP’s efforts to enhance CSI operations and the operational challenges that remain at CSI seaports, we reviewed GAO’s previous assessments of the CSI program and examined CBP’s efforts to implement our three prior recommendations. We also reviewed the CSI human capital plan and spoke to CBP officials about actions the agency has taken to ensure that CSI human resources are appropriately allocated. As part of that process, we met with officials at CBP headquarters and at the National Targeting Center - Cargo (NTCC) in Virginia to discuss the agency’s decision to conduct some targeting of high-risk containers from the NTCC rather than at CSI seaports. In addition, we spoke to CBP officials at three domestic seaports, selected according to geographical location and container volume. We also visited six CSI seaports located overseas, and selected the locations based on geographic and strategic significance, container volume to the United States from the seaports, when the seaports began conducting CSI operations, and whether the seaport was involved in CBP’s Secure Freight Initiative. At the CSI seaports, we also interviewed host government officials and CSI teams to discuss the frequency and level of collaboration involved in their interactions with each other, circumstances at seaport facilities that affect CSI operations, and financial cost issues associated with examinations. The results from our visits to seaports provided examples of CBP and host government operations but cannot be generalized beyond the seaports visited because we did not use statistical sampling techniques in selecting the seaports. To determine what progress CBP has made in strengthening its tools for monitoring and measuring the progress of the CSI program, we reviewed the performance measures presented in the CSI strategic plan against criteria developed by the Office of Management and Budget and GAO. In addition, to appraise CBP’s efforts to strengthen its methods to evaluate CSI teams and to learn about operations at CSI seaports, we analyzed a sample of evaluation documents. Our nonrepresentative sample consisted of evaluations for all 40 seaports for which we had documentation at the time of our review, including (1) the 15 evaluations conducted between November 2006 (when CBP revised its evaluation process and began using the Container Security Initiative Team Evaluation software tool) and May 2007 (when we conducted our analysis), (2) the 7 available evaluations that directly preceded them chronologically and were conducted using CBP’s previous evaluation methodology (for the purpose of comparison), and (3) the most recent evaluations conducted at each of the additional locations for which documentation had been provided by CBP. Thus, we reviewed a total of 34 evaluations (covering 40 CSI seaports) out of the 114 evaluations that GAO had obtained from CBP as of May 2007. For each of the evaluations reviewed, we assessed any available materials, which could include a narrative report and/or a checklist of yes or no responses. While our sample covered various aspects of CBP’s evaluations, it was not selected using statistical sampling techniques. Thus, the results from our review of CBP evaluation data provide illustrative examples about CSI team evaluation methods and program operations at CSI seaports—and generally corroborated our seaport site visit observations—but cannot be generalized to all 58 seaports conducting CSI operations.
We also met with CBP officials managing the CSI program to assess the agency’s efforts to collect information about the equipment, people, and processes involved in the host governments’ examinations of U.S.-bound container cargo, including the capabilities of examination equipment operating at CSI seaports and the proficiency of host customs administrations using the equipment. In addition, we selected and analyzed a nonrepresentative sample of 10 port assessments among those that CBP conducted at each port prior to its admission into the CSI program—the sample was composed of the 6 seaports we visited plus the 4 highest-volume locations as of January 2007. As of that date, approximately 55 percent of containers bound for the United States passed through these 10 seaports. Thus, our findings from our review of the assessments provide examples about the type of information collected as part of the process, but cannot be generalized to all 58 seaports in the program. We conducted this performance audit from May 2006 through January 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We met with CBP officials to discuss the agency’s efforts to ensure CSI data on the number of cargo shipments and containers subject to targeting and examination are reliable. In our 2005 review of the program, we found the data to be sufficiently reliable to support our findings. Since that time, CBP has further enhanced the way in which it collects and aggregates information about CSI program activities at foreign seaports, including the targeting and examination of high-risk container cargo. Specifically, CSI teams now utilize improved technology, eliminating the need for transmitting data to CBP headquarters via e-mail and thereby reducing the opportunity for human error in manually entering and aggregating data for the program. CBP officials at headquarters can now directly access the data entered at each CSI port as soon as they are entered into the shared system and can monitor the data on a daily basis to identify errors in or mischaracterization of the data. While we did not directly test the reliability of 2006 data, the recent CBP initiatives to improve reliability, combined with GAO's previous assessment of the 2005 data, gave us confidence in using CSI targeting and examination data to provide descriptive, background information regarding the extent to which high- risk container cargo is targeted by CBP and examined by foreign governments participating in CSI. This appendix provides information on the 58 foreign seaports participating in CBP’s Container Security Initiative (CSI). According to CBP, CSI was operating in 58 foreign seaports by the end of September 2007. Table 2 lists the CSI seaports according to the date when the seaports began conducting CSI operations, shows the phase (I, II, or III) in which specific seaports were selected for participation in CSI, the volume of U.S.-bound shipments passing through the seaport in fiscal year 2007, and specifies which seaports are participating in the Department of Energy’s (DOE) Megaports Initiative and in CBP’s Secure Freight Initiative. 
This appendix provides a detailed description of activities and equipment used at CSI seaports to target and examine container cargo. CBP targets all of the U.S.-bound containers that pass through CSI seaports to identify and, where feasible, examine high-risk container cargo. The container targeting and examination activities conducted at the foreign seaports for U.S.-bound cargo (exports) are very similar to activities CBP conducts at domestic seaports for arriving containers (imports). Figure 8 illustrates the various steps and decision points involved in targeting and examining high-risk U.S.-bound containers at CSI seaports. Under CSI, the targeting of cargo can include the targeters’ review of the Automated Targeting System (ATS) score and the information on which it is based, the bills of lading—which include data about the cargo—and additional information provided by host government officials. CBP targeters at CSI seaports are to access bills of lading through ATS, a system that automatically uses its hundreds of rules to check available data for every container arriving in the United States and assigns a risk score to each cargo shipment. Targeters review the bill of lading, making a cursory check for discrepancies and anomalies in the name and address of the importer, the commodity, the cargo description, and other data elements. On the basis of the initial review of the bill of lading, CBP officials are to either (1) categorize the cargo as low risk, in which case the container holding the cargo is loaded onto the departing vessel without being examined, or (2) conduct further research in order to properly characterize the risk level of the cargo. Further research entails targeters using automated resources, such as the Treasury Enforcement Communication System or AutoTrack, as well as nonautomated resources, such as information provided by host government officials, to obtain applicable information to determine the validity of the shipment. Further research may also be conducted by the team’s intelligence research specialist. After further research is completed, CBP officials are to characterize the cargo as either (1) low risk, in which case it is loaded onto the departing vessel without being examined, or (2) high-risk, in which case it is referred to host government officials for concurrence to examine. Since CBP officials do not have the legal authority to examine U.S.-bound containers in foreign seaports, the host government customs officials conduct the examinations. Host government officials can respond to the referrals for examination in one of three ways—cargo is examined or the request is either waived or denied. After receiving a referral from CSI teams, host customs officials are to review the bill of lading and the reasons for the referrals to determine whether or not to examine the container cargo. Some host governments collect intelligence information on U.S.-bound cargo independent of CSI, which host officials also consider in decisions of whether to examine the referred cargo. If host government officials agree that the cargo is high-risk, they will proceed with an examination. According to CBP, in general, CSI team members are to observe the examinations and review and document the results. On the basis of the results of a nonintrusive examination, such as if an anomaly is apparent in the image of the container, the host government and CBP officials must decide whether the host government will conduct a physical examination of the container.
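The decision points described above, and depicted in figure 8, can be summarized in a short sketch. The logic below is a simplified illustration rather than CBP's actual targeting rules; the inputs stand in for ATS output and CSI team judgment, and the handling of waived or denied referrals follows the description in the next paragraph.

```python
# Simplified illustration of the CSI targeting and referral flow described
# above and in figure 8. Inputs are stand-ins for ATS output and CSI team
# judgment; this is not CBP's actual targeting logic.

def container_disposition(low_risk_on_initial_review, low_risk_after_research, host_response):
    """Return the disposition of a U.S.-bound container at a CSI seaport."""
    if low_risk_on_initial_review or low_risk_after_research:
        return "loaded onto the departing vessel without examination"
    # High-risk cargo is referred to host government customs officials.
    if host_response == "examine":
        return "examined by host customs; CSI team observes and documents results"
    # Waived or denied referrals receive a domestic hold for examination
    # upon arrival in the United States (see the following paragraph).
    return "domestic hold placed; examination upon arrival at a U.S. seaport"


print(container_disposition(False, False, "waive"))
```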
Alternatively, the CSI team may waive an examination referral if (1) host government officials provide the CSI team with additional information that lowers the risk level of the cargo or (2) logistics prohibit an examination, such as when the cargo container is already loaded on the departing vessel. Finally, if the host government officials determine, on the basis of their review, that the cargo is not high risk, they will deny examination of the cargo. For any high-risk cargo for which an examination is waived or denied, CSI teams are to place a domestic hold on the cargo so that an examination will be conducted upon its arrival in the United States. However, if CSI team members are adamant that a cargo container poses an imminent risk to the carrier or the U.S. seaport of arrival but cannot otherwise convince the host officials to examine the container, CSI team members are to contact and coordinate with the NTCC to issue a do-not-load order for national security. According to CBP officials, this order advises the carrier that the specified container will not be permitted to be unloaded in the United States until any associated imminent risk to the cargo container is neutralized. Once the risk is neutralized, the container is to be loaded back onto the carrier and placed on hold for a domestic examination. According to CBP officials, this type of do-not-load order for national security has been implemented six times since the inception of CSI.

There are generally two types of CSI cargo container examinations—scanning with nonintrusive inspection equipment and physical searches. For scanning cargo containers, two basic types of nonintrusive inspection equipment are currently used at CSI seaports: (1) radiation detection equipment and (2) imaging inspection equipment, which may use X-rays or gamma rays. Radiation detection equipment, such as a radiation portal monitor (RPM) and radiation isotope identifier devices (RIID), detects the presence of radioactive material that may originate from a container. However, only the RIID can determine whether the type of radiation emitted by the material actually poses a threat or is a normal emission of radiation, such as that found in ceramic tile. We observed at a domestic and a foreign seaport that, generally, if radioactive emissions are detected from a cargo container, customs officials will use a RIID (shown in fig. 9) to determine whether the radiation being emitted poses a threat.

The second type of equipment, referred to as imaging equipment, uses X-ray or gamma ray technology to scan a container and create images of the container's contents without opening the container. CBP officials, along with host government officials, may review the images produced with the X-ray or gamma ray equipment to detect anomalies that may indicate the presence of WMD. Figure 10 shows a sample image produced by this type of equipment. The capabilities of nonintrusive imaging inspection equipment vary by manufacturer and model. In May 2005, CBP defined minimum performance capabilities to evaluate the quality and performance of the nonintrusive imaging inspection equipment being considered for use at domestic seaports. The domestic standards set baseline performance requirements for such things as the ability of nonintrusive inspection equipment to identify images through steel shielding (referred to as penetration) and the ability to scan a given number of containers in a given time (referred to as throughput).
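The two-step radiation screening just described (an alarm from a portal monitor followed by isotope identification with a RIID) can be summarized in a short sketch. The benign-isotope list, the function name, and the follow-up actions below are illustrative assumptions, not CBP or host-government procedures.

```python
# Minimal sketch of the two-step radiation screening described above.
# The isotope list and outcomes are illustrative assumptions only.
from typing import Optional

NATURALLY_OCCURRING = {"K-40", "Ra-226", "Th-232"}  # examples of benign, naturally occurring sources

def screen_for_radiation(rpm_alarm: bool, riid_isotope: Optional[str]) -> str:
    """Map a portal monitor alarm and a RIID reading to a follow-up action."""
    if not rpm_alarm:
        return "no radiation alarm; continue with imaging or other checks"
    if riid_isotope is None:
        return "alarm not resolved by RIID; refer for physical examination"
    if riid_isotope in NATURALLY_OCCURRING:
        return "emission identified as naturally occurring (e.g., ceramic tile); clear alarm"
    return "emission not identified as benign; refer for physical examination"

if __name__ == "__main__":
    print(screen_for_radiation(True, "K-40"))    # benign emission
    print(screen_for_radiation(True, "Cs-137"))  # not on the benign list; referred
```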
This appendix provides information on the CSI performance measures used by CBP. Table 3 describes the performance measures CBP is currently using to report the overall performance of the CSI program, the linkage between the performance measures and CSI goals, the performance targets established, and the recent results collected for each measure. In addition, since our 2005 report, CBP has identified performance measures one, two, and three below as proxy measures used in place of a measure of program outcomes, given the difficulty of measuring the deterrent effect of the program.

Stephen L. Caldwell, Director, Homeland Security and Justice Issues, (202) 512-9610, [email protected]. This report was prepared under the direction of Christine A. Fossett, Assistant Director, Homeland Security and Justice Issues. Key contributions to this report also included Amy Bernstein, Fredrick Berry, Yecenia Camarillo, Frances Cook, Christopher Conrad, Wendy Dye, Kathryn Godfrey, Valerie Kasindi, Stanley Kostyla, Matthew Lee, Frederick Lyles, Robert Rivas, and Leslie Sarapu.

Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007.
Maritime Security: One Year Later: A Progress Report on the SAFE Port Act. GAO-08-171T. Washington, D.C.: October 16, 2007.
Maritime Security: The SAFE Port Act and Efforts to Secure Our Nation's Seaports. GAO-08-86T. Washington, D.C.: October 4, 2007.
Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment. GAO-07-1247T. Washington, D.C.: September 18, 2007.
Maritime Security: Observations on Selected Aspects of the SAFE Port Act. GAO-07-754T. Washington, D.C.: April 26, 2007.
Customs Revenue: Customs and Border Protection Needs to Improve Workforce Planning and Accountability. GAO-07-529. Washington, D.C.: April 12, 2007.
Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T. Washington, D.C.: March 30, 2006.
Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005.
Container Security: A Flexible Staffing Model and Minimum Equipment Requirements Would Improve Overseas Targeting and Inspection Efforts. GAO-05-557. Washington, D.C.: April 26, 2005.
Homeland Security: Key Cargo Security Programs Can Be Improved. GAO-05-466T. Washington, D.C.: May 26, 2005.
Maritime Security: Enhancements Made, but Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005.
Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security. GAO-05-404. Washington, D.C.: March 11, 2005.
Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 31, 2005.
Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005.
Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106. Washington, D.C.: December 10, 2004.
Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004.
Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004.
Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Customs and Border Protection's (CBP) Container Security Initiative (CSI) aims to identify and examine high-risk U.S.-bound cargo at foreign seaports. GAO reported in 2003 and 2005 that CSI helped to enhance homeland security and recommended actions to strengthen the program. This report updates that information and assesses how CBP has (1) contributed to strategic planning for supply chain security, (2) strengthened CSI operations, and (3) evaluated CSI operations. To address these issues, GAO interviewed CBP officials and reviewed CSI evaluations and performance measures. GAO also visited selected U.S. and CSI seaports and met with U.S. and foreign government officials.

By collaborating on the development of the Department of Homeland Security's Strategy to Enhance International Supply Chain Security, and by revising the CSI strategic plan as GAO recommended, CBP has contributed to the overall U.S. strategic planning efforts related to enhancing the security of the overseas supply chain. Also, CBP reached its targets of operating CSI in 58 foreign seaports, and thereby having 86 percent of all U.S.-bound cargo containers pass through CSI seaports, in fiscal year 2007, representing a steady increase in these measures of CSI performance. To strengthen CSI operations, CBP has sought to address human capital challenges and previous GAO recommendations by increasing CSI staffing levels closer to those called for in its staffing model and revising its human capital plan. However, challenges remain because CBP continues to rely, in part, on a temporary workforce; has not determined how to optimize its staffing resources; and reports difficulties in identifying sufficient numbers of qualified staff. In addition, CBP has enhanced relationships with host governments participating in CSI. However, hurdles to cooperation remain at some seaports, such as restrictions on CSI teams witnessing examinations. CBP improved its evaluation of CSI team performance at seaports, but limitations remain in the evaluation process that affect the accuracy and completeness of the data collected. CBP has not set minimum technical criteria for equipment or systematically collected information on the equipment, people, and processes involved in CSI host government examinations of high-risk, U.S.-bound container cargo. Also, CBP has not developed general guidelines to use in assessing the reliability of these examinations. Thus, CBP potentially lacks information to ensure that host government examinations can detect and identify weapons of mass destruction, which is important because containers are typically not reexamined in the United States if already examined at a CSI seaport.
CBP refined overall CSI performance measures, but has not fully developed performance measures and annual targets for core CSI functions, such as the examination of high-risk containers before they are placed on vessels bound for the United States. These weaknesses in CBP's data collection and performance measures potentially limit the information available on overall CSI effectiveness.
At least 18 different federal agencies, from DOE to HHS, conduct at least 158 energy-related program activities. These programs address eight major categories of activities, ranging from energy supply to energy conservation. In fiscal year 2003, for the energy program activities we identified, the federal government provided at least $9.8 billion in estimated budget authority. In addition, 11 federal energy-related income tax preferences were estimated at $4.4 billion in outlay equivalent value for fiscal year 2003. On the revenue side, in fiscal year 2003, the federal government collected about $10.1 billion through various energy-related programs that include fees and royalties on development of federal energy resources and about $34.6 billion in excise taxes on gasoline and other fuels. Federal energy-related programs and income tax preferences address eight major energy activity areas: (1) energy supply, (2) energy’s impact on the environment and health, (3) low-income energy consumer assistance, (4) basic energy science research, (5) energy delivery infrastructure, (6) energy conservation, (7) energy assurance and physical security, and (8) energy market competition and education. On the basis of our analysis of fiscal year 2003 estimated budget authority for energy-related programs and outlay equivalent estimates for energy-related income tax preferences, resources to address energy supply activities accounted for almost one-half of the $14.2 billion in federal energy-related resources. Table 1 provides a summary of fiscal year 2003 resources for energy-related programs we identified and income tax preferences by the eight major energy activity areas. Appendix II provides additional details on energy-related programs by major activity area, by agency, and by energy type. In addition to programs and income tax preferences, other federal policies that are not quantified also affect these major energy areas. For example, in the supply area, the federal government provides electricity support through federal utilities and loan programs. Also, regarding energy’s impact on the environment and energy conservation, the federal government, as a major energy user, has energy use policies that influence both the type and amounts of energy used. On the basis of our analysis of fiscal year 2003 resources, energy supply programs and related income tax preferences accounted for about $6.6 billion, or almost one-half of the federal resources provided to energy-related programs. We identified 6 agencies, conducting 65 different program activities, addressing supply issues such as access for energy development on federal lands, research and development for energy sources ranging from clean coal to nuclear fusion, and nuclear energy regulation. In addition to these 6 agencies, Treasury reports on 9 different income tax preferences that address energy supply. Specifically, several provisions of the Internal Revenue Code grant favorable tax treatment to activities such as the recovery of the actual capital investment costs of discovering, purchasing, and developing energy. These income tax preferences accounted for about $4.18 billion in fiscal year 2003 outlay equivalent estimates, more than the total estimated budget authority of $2.39 billion for energy supply programs. Table 2 shows fiscal year 2003 outlay equivalent estimates for supply-related income tax preferences and fiscal year 2003 estimated budget authority for energy supply programs by major federal agency. 
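As a quick consistency check on the energy supply figures above, the sketch below adds the outlay equivalent estimate for supply-related income tax preferences to the estimated budget authority for supply programs. The dollar figures are the rounded fiscal year 2003 estimates from this report; the variable names are illustrative only.

```python
# Rough composition check using the rounded FY2003 estimates cited above
# (figures are the report's estimates; variable names are illustrative only).
supply_tax_preferences_outlay_equiv = 4.18  # billions of dollars, 9 income tax preferences
supply_program_budget_authority = 2.39      # billions of dollars, 65 program activities

supply_total = supply_tax_preferences_outlay_equiv + supply_program_budget_authority
print(f"Energy supply total, FY2003: about ${supply_total:.1f} billion")  # roughly $6.6 billion
print("Tax preferences exceed program budget authority:",
      supply_tax_preferences_outlay_equiv > supply_program_budget_authority)
```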
Appendix II provides details on energy supply programs by agency and energy type. Supply programs address four primary types of energy: fossil, renewable, nuclear, and alternative. Fossil energy supply includes coal, oil, and natural gas production and accounted for $4.7 billion of the almost $6.6 billion in fiscal year 2003 resources for energy supply programs. Fossil resources included $1.1 billion in estimated budget authority for programs such as clean coal technology research and development. Resources addressing fossil supply also included an estimated $3.6 billion in outlay equivalent value from 6 different income tax preferences. These income tax preferences include the support of fossil fuel production from nonconventional sources such as synthetic fuels produced from coal. Renewable energy supply includes hydropower, biomass, geothermal, wind, and solar energy. Estimated budget authority for renewable programs was at $349 million in fiscal year 2003, and these programs generally address renewable energy research and development. In addition, 2 income tax preferences, a new technology credit and exclusion of interest on facility bonds, supported renewable energy at an estimated outlay equivalent of $510 million in fiscal year 2003. Nuclear energy supply-related programs, with estimated budget authority of about $507 million in fiscal year 2003, address nuclear fission and mainly consist of DOE’s nuclear energy research and development programs and the Nuclear Regulatory Commission’s (NRC) regulation of nuclear energy. Finally, alternative energy programs, with estimated budget authority of $439 million in fiscal year 2003, include transportation fuels other than gasoline or diesel; traditional energy sources used in untraditional ways (distributed energy); and energy sources of the future, such as hydrogen and fusion. Hydrogen and fusion programs account for most of the programs under alternative energy. In addition, 1 tax preference, providing tax credits for alcohol fuels, supports alternative energy supply. Table 3 shows the fiscal year 2003 level of resources by energy supply type. Appendix II provides additional details on the types of energy supply addressed by specific agency programs. In addition to resources for programs and income tax preferences directed at the energy sector, the federal government provides other forms of support, largely to users of electricity. While this support is not captured in the programs or income tax preferences, it does provide benefits that represent implicit federal support for certain users of electricity. Specifically, there are five federal utilities, four Power Marketing Administrations (PMA) and the Tennessee Valley Authority (TVA), that provide electricity and transmission services to customers in their regions. The PMAs market power produced primarily at federal hydroelectric dams and projects that are owned and operated by either the Department of the Interior’s (DOI) Bureau of Reclamation, the U.S. Army Corps of Engineers, or the International Boundary and Water Commission. TVA markets electricity produced at its own fossil, nuclear, and hydroelectric energy facilities. In addition, another federal agency, the Rural Utilities Service (RUS), provides federal loan guarantees and other services to rural utilities. The federal support provided through these agencies differs from that of the other programs and incentives described in this report because it does not provide any federal funding to electricity customers. 
Revenue from sales of electricity generated by federally owned facilities and from loan repayment (in the case of RUS) is intended to largely pay the costs to the federal government of providing the electricity and loans. Therefore, the programs undertaken by these agencies are intended to be revenue-neutral to the federal government. Nonetheless, the electricity support provided by these agencies constitutes a benefit to users—an implicit federal subsidy—because the revenues collected by the agencies have generally been below what would have been collected for the same services by private entities. Appendix III provides additional details on these support programs. We identified 29 program activities, implemented by 11 different agencies, that address the impact of energy development and use on the environment and health. In fiscal year 2003, these programs represented estimated budget authority of $1.87 billion. In addition, an income tax preference for clean-fuel burning vehicles amounted to an estimated $90 million outlay equivalent in fiscal year 2003. Major program focuses include nuclear waste cleanup and environmental science research. The largest portion of the funding in this energy policy area goes to DOE, which received an estimated $1.6 billion for energy-related programs in fiscal year 2003. The Environmental Protection Agency (EPA), with a primary mission of protecting the nation’s environment, is also a major agency involved in addressing energy’s impact on the environment and health. EPA is a major regulator of energy development and use through its implementation of environmental laws, such as the Clean Air Act. We were able to quantify an estimated $24.2 million in fiscal year 2003 that supported EPA programs addressing energy’s impact on the environment. However, EPA regulatory activities affect more than the energy sector, and, because EPA does not track costs by industry sector, the agency was not able to determine with complete certainty how much of its $8 billion annual budget is energy-related. Thus, we believe the estimate for EPA programs related to energy’s impact on the environment is understated. Finally, because energy development and use can have a significant impact on the environment and health, other programs that primarily address other areas, such as renewable supply and energy conservation, also address the environmental impacts of energy. However, within this inventory, those programs are accounted for under their primary area of energy supply and conservation and are not also included here. Table 4 summarizes fiscal year 2003 resources for energy’s impact on the environment and health, by major agency; appendix II provides more details on the agencies’ individual programs. In addition to these programs, the federal government addresses energy’s impact on the environment through policies that are difficult to quantify. For example, the federal government has set standards and offered incentives to the private sector and citizens to reduce the effects of fossil fuel use and to reduce reliance on fossil fuel for energy. These include standards for smokestack and motor vehicle emissions, home appliances, and building materials and practices. In addition, the federal government is a significant consumer of energy and, through its consumption decisions, can choose to consume energy that is less harmful to the environment. 
In the late 1990s, the federal government embarked on its "greening of the government" initiative and sought to reduce reliance on the use of fuels in its buildings and vehicles that contribute the most to pollution. Executive Order 13123, Greening of the Government Through Efficient Energy Management, signed June 3, 1999, addresses greenhouse gas emissions from federal facilities and makes energy-efficiency targets more stringent. This order requires that each agency reduce its greenhouse gas emissions by 30 percent by 2010 when compared with 1990 emissions levels.

The federal government provides funding to assist low-income consumers through two block grant programs: (1) the Low-Income Home Energy Assistance Program (LIHEAP), managed by HHS, provides grants to states to fund fuel payment assistance and home energy efficiency improvements for low-income households, and (2) DOE's Weatherization Assistance Program provides funds to make dwellings more fuel efficient in the long term for low-income households. The total estimated budget authority for these two programs in fiscal year 2003 was $2.212 billion, with the majority of the budget authority ($1.988 billion) being for LIHEAP.

LIHEAP seeks to increase the health and prosperity of communities and tribes by assisting low-income households, particularly those with the lowest incomes that pay a high proportion of household income for home energy, in meeting their immediate home energy needs. LIHEAP operates in the 50 states, the District of Columbia, Indian tribes or tribal organizations, and U.S. territories. LIHEAP offers three types of assistance: heating/cooling bill payment, energy crisis assistance, and weatherization and energy-related home repairs. Each state operates its own program, which includes taking applications, establishing eligibility, and making decisions on the kinds of assistance it will offer. In fiscal year 2003, LIHEAP received $1.988 billion in budget authority. During that fiscal year, approximately 4.4 million households received heating assistance; 494,000 households received cooling aid; 1.1 million received winter/year-round crisis aid; 71,000 received summer crisis aid; and 113,000 received weatherization assistance. Households may receive more than one kind of LIHEAP assistance. Thus, even though the precise number of households assisted is not known, 4.8 million households are estimated to have received assistance in fiscal year 2003.

DOE's Weatherization Assistance Program is part of the department's Weatherization and Intergovernmental Program (WIP). The overall goal of WIP is to develop, promote, and accelerate the adoption of energy efficiency, renewable energy, and oil displacement technologies and practices by a wide range of customers—including state and local governments, weatherization agencies, communities, companies, fleet managers, building code officials, technology developers, tribal governments, and international agencies. In fiscal year 2003, DOE received about $224 million in budget authority for the Weatherization Assistance Program to provide weatherization assistance for low-income residences. The weatherization program also provides technical assistance and formula grants to state and local weatherization agencies to help low-income residents with weatherization services. Also, the weatherization program, as part of WIP, addresses energy conservation areas, as it helps to reduce demand for fuels and peak loads on constrained electricity systems and modernizes conservation technologies and practices.
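Because households may receive more than one kind of LIHEAP assistance, the individual assistance counts cited above sum to more than the estimated number of households served. The small sketch below makes that distinction explicit using the report's rounded fiscal year 2003 figures; the variable names and the overlap arithmetic are ours, for illustration only.

```python
# Illustration of why LIHEAP assistance counts exceed unique households served
# (figures are the report's FY2003 estimates; the overlap arithmetic is illustrative).
assistance_counts = {
    "heating": 4_400_000,
    "cooling": 494_000,
    "winter/year-round crisis": 1_100_000,
    "summer crisis": 71_000,
    "weatherization": 113_000,
}
estimated_unique_households = 4_800_000  # households may receive more than one kind of aid

total_instances = sum(assistance_counts.values())
overlap = total_instances - estimated_unique_households
print(f"Assistance instances:          {total_instances:,}")            # 6,178,000
print(f"Estimated unique households:   {estimated_unique_households:,}")
print(f"Implied duplicate assistance:  {overlap:,}")                     # about 1.4 million
```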
Basic energy sciences consist of general energy-related research within DOE’s Office of Science. The Office of Science’s Basic Energy Science (BES) Program (fiscal year 2003 estimated budget authority of $1.0 billion) and its Advanced Scientific Computing Research (ASCR) Program (fiscal year 2003 estimated budget authority of $163 million) encompass the basic energy science research programs we identified. The BES program is a multipurpose, scientific research effort aimed at expanding the foundation for new and improved energy technologies and for understanding and mitigating the environmental impacts of energy use. BES touches virtually every aspect of energy resources—that is, production, conversion, efficiency, and waste mitigation. Energy-related research includes (1) advancing hydrogen production, storage, and use and developing new concepts and (2) improving existing models for solar energy conversion and for other energy sources. BES states that it provided the basic knowledge that resulted in an array of energy-related advances, including high-energy and high-power lithium batteries, highly efficient photovoltaic solar cells, and solutions for nuclear fuel purification/reprocessing and for cleanup of radioactive waste. Also, the BES research for the Hydrogen Fuel Initiative is based on the BES workshop report entitled Basic Research Needs for the Hydrogen Economy. The ASCR program supports DOE’s strategy to ensure the security of the nation and succeed in its science, energy, and environmental quality missions. ASCR provides the fundamental mathematical and computer science research that enables the simulation and prediction of complex physical and biological systems. Its energy-related objectives include providing the science base to enable the development of bioenergy sources and laying the groundwork for DOE's Fusion Simulation Project. The primary purpose of energy delivery infrastructure programs is to facilitate the development, maintenance, and improvement of the comprehensive energy delivery system—for example, electricity transmission and distribution systems, oil refining and gas processing, and oil and gas pipelines. We identified 13 program activities at 6 federal agencies that accounted for estimated budget authority of $882 million in fiscal year 2003 that addressed energy delivery infrastructure. The largest investment of program dollars in energy infrastructure that we identified in fiscal year 2003 involved international infrastructure funded by the U.S. Agency for International Development (USAID) in its programs in Iraq and Afghanistan. The total USAID infrastructure effort amounted to about $561 million—or 64 percent of the total energy infrastructure funding—with the great majority of the effort in Iraq ($558 million). Domestically, several programs involve the regulation of energy infrastructure on federal lands by DOI. In addition, Federal Energy Regulatory Commission (FERC) activities related to energy infrastructure include pipeline certification, hydropower licenses, and dam safety inspections, while the Department of Transportation (DOT) conducts regulatory work on pipeline safety. Table 5 provides a listing of infrastructure estimated budget authority for fiscal year 2003, by agency, while appendix II offers more details on specific programs. Energy conservation programs include those efforts to increase energy efficiency and reduce the amount of energy used in all sectors, such as buildings and transportation. 
We identified 27 program activities related to energy conservation at 5 federal agencies that accounted for about $788 million in estimated budget authority for fiscal year 2003. Energy conservation programs at DOE represent the bulk of the conservation efforts, accounting for about $657 million of the $788 million. In general, the program activities at DOE and the other major agencies, particularly EPA, DOT, and the National Science Foundation (NSF), involve research and development efforts aimed at improving energy conservation. In addition, an income tax preference provides $110 million in exclusions from income of conservation subsidies provided by public utilities. Table 6 provides a listing of energy conservation resources for fiscal year 2003, by agency, while appendix II provides program details. In addition to these programs, the federal government has addressed energy conservation through policies that seek to minimize the federal government’s own energy use. The federal government is the largest institutional user of energy in the world and can influence the amount of energy used in the marketplace. The National Energy Conservation Policy Act, as amended, requires federal agencies to achieve reductions in energy use. The legislation also contains provisions concerning energy management requirements and incentives, life-cycle cost methods for energy management decisions, and new technology requirements. In addition, Executive Order 13123, June 3, 1999, is one of a series of executive orders over recent years directing federal agencies to demonstrate leadership in energy and environmental management, including energy efficient building design, construction and operation, and the reduction of petroleum use through improvements in fleet fuel efficiency. Chartered in 1973, the Federal Energy Management Program, administered by DOE, is charged with coordinating federal government energy management efforts. DOE’s most recent Annual Report to the Congress on Federal Government Energy Management and Conservation Programs for Fiscal Year 2002, dated September 29, 2004, provides information on federal energy consumption and costs submitted to DOE by 29 federal agencies. Specifically, the report provides information on (1) consumption and costs of energy by fuel type for buildings, vehicles, and equipment and (2) agency appropriations for energy conservation retrofits and capital equipment. In summary, the report noted that fiscal year 2002 federal consumption costs were $9.7 billion, with 92 percent spent on two categories—62 percent on vehicles and equipment and 30 percent on standard buildings. DOD, through such energy uses as jet fuel and diesel, was by far the largest federal energy consumer—DOD spent $7.1 billion of the $9.7 billion and accounted for 73 percent of the total federal government energy use. In addition, the report provides information on progress toward energy conservation goals. For example, Executive Order 13123 requires a 30 percent reduction by 2005 in energy consumption per square foot for buildings and a 35 percent reduction by 2010 from the base year of 1985. The report indicates that energy consumption per square foot for buildings in fiscal year 2002 was about 24 percent less than the fiscal year 1985 base year. Energy assurance and physical security activities incorporate federal programs designed to respond to or prevent energy emergencies and major reliability and supply disruptions. 
This includes energy supply reserves, such as the Strategic Petroleum Reserve, and protection of energy production and delivery infrastructure from natural events, accidents, equipment failures, or deliberate sabotage. DOE has two programs to provide oil reserves to offset supply disruptions: the Strategic Petroleum Reserve and the Northeast Heating Oil Reserve. In addition, DOE’s Energy Security and Assurance Program supports the national security of the United States by working in close collaboration with state and local governments and the private sector to protect the nation against severe energy supply disruptions. The Department of Homeland Security (DHS) is responsible for coordinating the national effort to enhance critical infrastructure protection, including energy-related infrastructure. However, DOE is the sector-specific agency for the energy sector. DOE’s Office of Energy Assurance is responsible for fulfilling the roles of critical infrastructure identification, prioritization, and protection for the energy sector, which includes the production, refining, and distribution of oil and gas and electric power—except for commercial nuclear power facilities. NRC has programs that address security for commercial nuclear power facilities. Table 7 lists all of the energy assurance and physical security-related programs that we identified and provides estimated program funding for fiscal year 2003. The issue of energy market competition and education includes efforts to ensure that competitive domestic and international energy markets are functioning, as well as efforts in energy education and consumer protection and awareness. We identified 14 program activities implemented by 11 different agencies that play some role in facilitating competitive and informed energy markets. For those programs for which we could obtain estimates, these programs’ estimated budget authority was at least $238 million in fiscal year 2003. Major program focuses include providing federal oversight of the domestic natural gas, petroleum, and propane markets; providing energy information and education; and facilitating secure, stable, and competitive international energy markets that support investment in developing countries. DOE’s EIA represented the largest program in this area with estimated budget authority of $80 million. While most of EIA’s budget goes for domestic data collection and analysis activities, these activities serve to enhance competitive domestic and, to a lesser extent, international energy markets. EIA is responsible for providing energy information that promotes sound policy making, efficient markets, and public understanding. In addition, FERC, through its competitive market and market oversight programs, was the next significant program, with estimated budget authority of about $73 million. FERC has responsibility for ensuring “just and reasonable rates” for the interstate transportation of natural gas and the wholesale price of electricity sold in interstate commerce. Internationally, the U.S. Trade and Development Agency (USTDA), Commerce, State, and USAID promote economic development and/or U.S. commercial interests in the energy sector. It was difficult to quantify the funding specifically associated with energy-related aspects of various programs in this energy activity area, and some agencies were not able to provide us with funding information for their energy-related programs or activities. 
Significant among these programs were those agencies—the Commodity Futures Trading Commission (CFTC), the Department of Justice (DOJ), the Securities and Exchange Commission (SEC), and the Federal Trade Commission (FTC)—that can play a role in market oversight, including oversight of energy markets. Table 8 provides a summary of the major federal agencies that play a role in energy market competition and education and the available estimates of budget authority for fiscal year 2003. Appendix II provides additional details on individual programs.

While the federal government has a limited role in setting energy prices or dictating buyer purchasing strategies, it has an interest in promoting a competitive and informed energy marketplace that protects the public from unnecessary price volatility. Recent investigations of market manipulation by companies such as Enron have heightened the relevance of the federal government's role in ensuring that a lack of competition or of reliable market information does not exacerbate energy prices. Tools available to federal agencies to promote a competitive energy marketplace and protect the public from price volatility include monitoring for anticompetitive behavior; taking appropriate enforcement actions when necessary; and providing decision makers with sound, up-to-date energy marketplace information, such as short-term price movements and long-term demand and supply trends.

In addressing this area of market oversight, we attempted to quantify the level of effort devoted to energy-related activities at 4 relevant agencies—CFTC, FTC, SEC, and DOJ. However, these 4 agencies, with fiscal year 2003 overall budgets of $85 million for CFTC, $177 million for FTC, $717 million for SEC, and $22 billion for DOJ, were not able to develop reliable estimates of the amount of effort devoted to energy-related activities. CFTC officials roughly estimated that about 20 percent of CFTC's annual budget of $85 million, or $17 million, could be associated with energy-related activities. They noted that their work has increased in recent years because of concerns about energy markets, but they were not able to quantify the increase. DOJ officials told us that the majority of DOJ's energy-related work falls within its Antitrust Division and its Environment and Natural Resources Division (ENRD). The Antitrust Division was able to provide us with an estimate for energy-related work, which totaled almost $4 million in fiscal year 2003, but ENRD was not able to provide us with a similar estimate of its energy-related work.

Although we were not able to quantify energy-related funding for these 4 agencies, we were able to gather some basic information on major energy-related activities. For example: CFTC resolved its natural gas manipulation case against Enron in fiscal year 2004. CFTC also undertook a broader energy investigation that focused on energy trading firms that allegedly engaged in (1) the reporting of false, misleading, or knowingly inaccurate market information, including price and volume information; (2) manipulation or attempted manipulation; and/or (3) "round tripping," which is a risk-free trading practice that produces "wash" results and the reporting of non-bona fide prices, in violation of the Commodity Exchange Act. As a result of its efforts in this area, as of February 1, 2005, enforcement actions commenced by the commission had resulted in civil monetary penalties totaling over $297 million, among other sanctions, imposed against approximately 27 entities and individuals.
FTC, from 1981 to 2004, alleged that 15 proposed petroleum mergers would have resulted in significant reductions in competition and harmed consumers in one or more relevant markets. Four of the mergers were abandoned or blocked as a result of FTC or court action. In the other 11 cases, FTC required the merging companies to divest substantial assets in the markets where competitive harm was likely to occur. FTC has, since 2000, brought seven energy-related law enforcement actions to prevent consumer injury from unsubstantiated, false, or deceptive claims concerning energy or energy-related products.

SEC officials reported that in 2003, SEC brought 23 energy-related cases or enforcement actions. In addition, SEC issued about 100 orders under the Public Utility Holding Company Act in fiscal year 2003. Also, SEC's Division of Corporation Finance performed 4,088 full reviews and full financial reviews of filings from all types of companies; of these, 619 were for energy-related companies. The division also performed 190 targeted reviews related to those energy-related companies.

DOJ's Antitrust Division has energy-related responsibilities that include promoting competition and enforcing antitrust laws in the energy industries. DOJ's energy-related activities within ENRD include (1) defending EPA's more stringent clean air standards for heavy-duty trucks and diesel fuel; (2) safety standards for the Yucca Mountain nuclear waste repository in Nevada; and (3) administrative enforcement actions, such as a major clean air enforcement action against coal-fired power plants.

The federal government collects about $10.1 billion a year through various energy-related programs and about $34.6 billion in energy-related excise taxes. Most of the program collections are royalties, rents, and bonuses from oil and gas development on federal lands or in offshore areas, while taxes on gasoline and other fuels account for most of the excise tax revenue. A number of energy-related programs, especially those dealing with the use of federal energy resources, radioactive waste, and regulation of the energy industry, involve the collection of federal revenues that are deposited into the Treasury. In fiscal year 2003, these collections amounted to about $10.1 billion. The majority of these collections are associated with the production of energy resources on federal lands and in offshore areas. DOI's Minerals Management Service (MMS) collected about $8.0 billion in royalties, rents, and bonuses in fiscal year 2003 for the development of energy resources on federal lands and in offshore areas. The remainder of these collections generally consists of fees to pay for energy-related programs. In some cases, federal agencies are authorized to use these collections to offset program costs. For example, the Office of Civilian Radioactive Waste Management in DOE collected over $1 billion from generators of nuclear waste in fiscal year 2003 to manage and dispose of high-level radioactive waste and spent nuclear fuel. FERC collected fees from the entities it regulates that funded all of the cost of its regulatory activities related to energy, while NRC collected fees from the entities it regulates, including nuclear power plants, that cover about 90 percent of its costs. Table 9 provides a breakdown of federal energy-related collections for fiscal year 2003.
The Internal Revenue Code, which is administered by the Department of the Treasury, provides for federal excise taxes on energy fuels that are used in many sectors across the United States. Revenue from these energy-related taxes totaled over $34 billion in fiscal year 2003. The excise taxes, some applied at the retail level and some at the manufacturers' level, were typically applied on a unit basis, usually by the gallon, and rates varied according to the content of the fuel. In general, these excise taxes fund certain trust funds. The largest of these, the excise tax on gasoline and gasohol, resulted in $24.2 billion in collections in fiscal year 2003 that support the Highway Trust Fund. The next largest revenue raiser was the excise tax on diesel fuel, which amounted to $8.6 billion in the same fiscal year. Most of the excise taxes on liquid fuels include 0.1 cent per gallon to finance the Leaking Underground Storage Tank Trust Fund.

In addition to funding various trust funds, excise taxes can be used as a tool to achieve federal energy-related objectives. For example, alcohol fuels and fuels containing a portion of alcohol are generally taxed at a lower rate. The standard rate for gasoline is 18.4 cents per gallon. However, a partial exemption of 5.4 cents per gallon from the federal excise tax is provided for ethanol that is derived from renewable sources and used as fuel. The exemption encourages the substitution of alcohol fuels produced from renewable sources for gasoline and diesel to reduce reliance on imported petroleum and to contribute to energy independence. In addition, dyed diesel fuel and kerosene meant for use in trains, school buses, and local and mass transit buses are exempt from the 24.3 cents per gallon excise tax on the normal varieties of these fuels. Another excise tax, the "gas guzzler" levy on certain vehicles that do not meet fuel economy standards, raised $127 million in fiscal year 2003. Table 10 provides a listing of fiscal year 2003 energy-related excise tax collections and the associated trust funds.

It is difficult to fully assess the status of progress made in implementing the NEP recommendations because the information DOE has reported has been limited. Moreover, some of the recommendations are open-ended and lack measurable goals, which contributes to the difficulty in assessing implementation progress. Finally, because the NEP recommendations do not reflect all federal energy-related efforts, understanding the overall status of federal efforts to address energy issues is challenging.

Since the May 2001 NEP report, publicly reported information on the status of the recommendations has been limited. For example, on the first anniversary of the NEP report, in May 2002, DOE issued a press release highlighting progress made in implementing the NEP recommendations. According to DOE, at that time all but 1 of the 22 recommendations that it reported required legislative action had either been enacted into law or were contained in House or Senate energy bills. However, DOE provided no detail on what the 22 recommendations requiring legislation were or on the status of the other 84 recommendations. On the second anniversary of the NEP report, in May 2003, DOE again issued a press release that described progress in implementing the NEP recommendations.
This document provided the first status information on each of the 106 recommendations in the form of an NEP scorecard that characterized each recommendation as either under way or complete. The scorecard reported that 96 of the 106 recommendations were complete, although it noted that 16 of the “complete” recommendations involved legislation that was then being considered by Congress. However, DOE did not provide information on the progress cited specifically related to the 96 recommendations the scorecard reported as complete or on what actions were planned or then under way to complete the remaining 10 recommendations. DOE’s next report on the NEP recommendations was its January 2005 report. In contrast to the May 2003 scorecard that characterized most of the recommendations as complete (but had provided no specific information pertinent to each), DOE’s January 2005 report (1) characterized most recommendations as implemented but involving ongoing activities or requiring legislation and (2) provided the first information on specific actions taken to implement each recommendation. Although DOE’s January 2005 report represents an improvement in the level of information DOE has provided on the status of NEP recommendation implementation, the information is still incomplete. For example, the NEP report recommended the development of energy educational programs, including possible legislation to create education programs funded by the energy industry. However, the January 2005 status report provided only an overview of federal energy education efforts, and it made no mention of creating education programs through legislation. Similarly, the 2001 NEP report made a recommendation to the Secretary of Transportation to work with Congress to enact legislation to implement congestion mitigation strategies. However, while the reported status outlined various DOT congestion mitigation efforts, it did not address the legislative aspect of the recommendation nor did it reflect DOT efforts to propose legislation to address this recommendation. In addition, another recommendation was made to DOE and DOI to promote new oil and gas well technology, but the status report addressed only DOE’s efforts to implement the recommendation. Appendix IV provides a complete list of the 106 NEP recommendations, DOE’s reported status of the recommendations, and our observations. DOE’s ability to provide consistent and complete information on the status of NEP implementation may have been limited by a lack of sustained, centralized efforts to monitor and report on the ongoing implementation of the NEP recommendations. For example, one of the first recommendations in the NEP report was that the National Energy Policy Development Group (NEPDG) continue to work and meet on the implementation of the NEP. However, the NEPDG was terminated on September 30, 2001, and did not meet or work on the implementation of the NEP recommendations after that time. Nevertheless, according to DOE, individual agencies have continued to coordinate implementation efforts and to measure and track implementation progress. Also, according to DOE, an interagency working group led by DOE was established to coordinate agencies’ implementation of the NEP recommendations. According to DOE officials, the agency’s Office of National Energy Policy is responsible for coordinating, and providing strategic direction for, the implementation of the NEP report recommendations. 
However, additional information we obtained in our review raises questions about the extent to which centralized monitoring of recommendation implementation has been sustained. For example, according to DOE, its NEP Office did not assume leadership of the interagency working group until the fall of 2003. Also, DOE officials told us in November 2003 that the NEP Office had not been fully staffed because of budget constraints. Finally, at that time, DOE officials also told us that implementing the NEP recommendations was the responsibility of individual federal agencies, and that there was no centralized, formal system to monitor implementation and report on the status of the NEP recommendations. The nature of some of the NEP recommendations also makes it difficult to assess the progress made in implementing them. Specifically, some of the recommendations are open-ended and lack measurable goals. For example, a NEP report recommendation is that the President make energy security a priority of our trade and foreign policy. In reporting on the status of this recommendation, DOE states that the recommendation has been implemented, with activities ongoing, because energy security has been made a priority of our trade and foreign policy through various bilateral and multilateral activities, such as the U.S.-China Oil and Gas Industry Forum and the International Partnership for the Hydrogen Economy. However, this recommendation is open-ended and does not contain a specific, measurable goal, thereby making it difficult to understand how or to what extent the activities described have helped to implement the recommendation. In contrast, another NEP report recommendation directs the Secretary of Energy to authorize the Western Area Power Administration to explore relieving an electricity transmission bottleneck in the western United States. The DOE status report noted that a new transmission line to relieve this bottleneck was completed on December 14, 2004. This recommendation sets a measurable infrastructure-related goal, and the status report demonstrated progress toward that goal. (See app. IV.) Finally, some federal energy-related programs that address the same issues as some of the NEP recommendations are not mentioned in either the NEP recommendations or the status report, making it difficult to assess the overall status of federal efforts to address energy issues. For example, one NEP recommendation calls for the Secretary of Energy to conduct a review of current funding and historic performance of energy-efficiency research and development programs. In response, the status report noted that DOE completed a detailed review of its programs. However, at least one other federal agency, NSF, funds energy-efficiency research and development activities as part of its overall science program. These activities were not specified in the recommendation or recognized in the status report. Other federal energy efforts that relate to some of the same issues that the NEP recommendations addressed, but were not specifically addressed in the recommendations or the status report, include some NRC programs and most USTDA and USAID programs. (See app. IV.) These agencies are not represented on DOE’s NEP interagency task force. When we spoke with representatives from these agencies, they said that even though their programs address some of the same issues as the NEP recommendations, they were not involved in the development of the NEP, nor were they charged with implementation of the recommendations. 
Additionally, we found that the NEP report recommendations omit discussion of some federal energy-related efforts and the issues they address. Such omissions preclude a full accounting of the results of federal energy efforts in any NEP status report. For example, the NEP report recommendations do not address all energy-related excise taxes and energy-related income tax preferences. Regarding programs, our review of the NEP report did not find that it addressed basic energy science research; DOE nondefense nuclear waste cleanup; federal electricity support; FERC energy market oversight; and the overall market oversight roles of agencies such as CFTC, FTC, DOJ, and SEC. Federal energy-related program resources have grown since the release of the NEP report as programs continue to address the major energy activity areas. For example, compared with fiscal year 2000 estimated budget authority, fiscal year 2003 estimated budget authority funding grew by about 30 percent, from $7.3 billion to $9.6 billion for those programs where we could identify estimated budget authority for both years. In addition, over the same time period, outlay equivalent estimates for energy-related income tax preferences grew by over 60 percent, from $2.7 billion to about $4.4 billion. While we did not review changes within individual programs and tax policies, federal efforts have continued to address the eight major energy activities of supply, environment and health, low-income assistance, basic science, infrastructure, conservation, assurance and security, and competition and education. Energy supply continues to be a major emphasis of the federal efforts, accounting for a majority of the growth. For example, income tax preferences associated with energy supply have represented almost all of the $1.7 billion growth in income tax preferences. Within energy supply income tax preferences, growth has occurred primarily with efforts targeting fossil and renewable energy supplies. Table 11 shows changes in program estimated budget authority, by major energy issue, in fiscal years 2000 and 2003. Appendix V provides a breakdown of the change in estimated budget authority for each program addressing the major energy issues. Income tax preferences do not compete in the budget process and do not have to seek budget authority—they are already “fully funded” as long as they remain in effect. However, as has been demonstrated, they can represent significant resources. Current fiscal year 2005 projected estimates indicate energy-related income tax preferences have continued to grow—to $5.15 billion in outlay equivalent estimates. Table 12 provides a profile of changes in energy-related income tax preferences in outlay equivalent estimates between fiscal years 2000 and 2003. Along with the growth in energy-related federal resources, budget requests for federal energy-related programs have also grown since 2000. However, budget request information is not available for all of the programs identified in our inventory for which we have obtained estimates because many energy-related programs are part of larger programs and separate, distinct budget requests are not made for them. For those programs that had specific, energy-related budget requests, budget requests grew between fiscal years 2000 and 2003 by about 27 percent—from $5.9 billion to $7.5 billion. This growth continued into fiscal year 2005, when requests reached $8.4 billion. Table 13 shows budget requests in fiscal years 2000, 2003, and 2005 by major energy activity area. 
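The growth rates cited above can be reproduced from the underlying dollar figures. The short sketch below shows the arithmetic; the figures are the report's rounded estimates, and the helper function is ours, for illustration only.

```python
# Reproduce the approximate growth rates cited above from the report's rounded figures.
def pct_growth(base: float, later: float) -> float:
    """Percentage change from a base-year value to a later-year value."""
    return (later - base) / base * 100

print(f"Program budget authority, FY2000-FY2003: {pct_growth(7.3, 9.6):.0f}%")          # report: about 30 percent
print(f"Tax preference outlay equivalents, FY2000-FY2003: {pct_growth(2.7, 4.4):.0f}%")  # report: over 60 percent
print(f"Energy-related budget requests, FY2000-FY2003: {pct_growth(5.9, 7.5):.0f}%")     # report: about 27 percent
```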
Appendix VI provides a breakdown of requests for each program that has a budget request under the major energy areas.

The nation's energy problems are not new. In the 1970s, we issued a series of reports to Congress on the need for both a focal point for dealing with energy problems and a coherent set of energy policies that would stand the tests of the future. While the United States does have, and has had, a series of energy-related programs and tax policies, calls for a "national energy policy" persist. Currently, the federal government's role in energy policy encompasses hundreds of federally funded energy-related programs, energy-related income tax preferences, and federal regulatory requirements that affect energy. At the federal level, development and implementation of our national energy policy is a shared responsibility of the executive and legislative branches of government. Any progress toward understanding the role that the federal government plays in energy policy, and improving upon it, must start with a comprehensive inventory of these federal energy-related programs, tax policies, and regulatory activities. The NEP report, as other national energy policies have in the past, offers such a start toward the development of this inventory. Furthermore, although we are not making recommendations in this report, we have noted a lack of information on the results of federal energy-related efforts. DOE's Office of National Energy Policy has an opportunity to serve as a key focal point in improving the measurement of the results of federal energy-related efforts. Establishing clear and measurable goals and having the ability to track, measure, and transparently report on results achieved toward those goals will give policy makers the information they need to provide continually improving direction to the federal government's energy-related efforts.

We provided DOE with a draft of this report for review and comment and asked DOE to coordinate any formal written comments from the other federal agencies included in this report. In addition, we provided a draft of this report to the other federal agencies in order to obtain comments on specific information about particular agencies' energy-related activities. In summary, DOE responded in its written comments that it did not believe our report accurately reflected the goals or intent of the NEP, its implementation, or the Administration's ongoing energy security efforts. Overall, we believe DOE's comments reflect a basic misunderstanding of the report's objectives and the approaches we used to address those objectives. Specifically, with respect to our first objective (an inventory of major federal energy programs and their cost), DOE commented that our presentation of estimated budget authority for programs and outlay equivalent estimates for tax preferences represented a quantitative approach to evaluating the NEP report that is not consistent with its purpose. However, our first objective and the resulting inventory of major federal energy programs laid out in our report do not in any way reflect an evaluation of the NEP report. We prepared this inventory independent of the NEP report and did not intend to suggest that the NEP report was meant to reflect an inventory and accounting of resources comparable to the one we prepared.
Our second and third objectives, which deal with the results of NEP report recommendation implementation and with changes in resources since the NEP report's issuance, do have obvious connections to the NEP report. Here too, however, we believe DOE's comments confuse the issue by suggesting that our report is somehow an evaluation of the NEP report rather than simply a presentation of observations on actions taken and reported results achieved since the report's issuance. In this connection, DOE defends the NEP report "as an overall blueprint" and states that it "is not sufficient to look at the President's energy policies through specific NEP recommendations alone." We agree and note that our report suggests nothing to the contrary. However, our report does focus on the reported results achieved in implementing these important NEP recommendations that, as the NEP report states, "taken together, offer the thorough and responsible energy plan our nation has long needed." Moreover, DOE implies that, in pointing out that many of the NEP recommendations are open-ended in nature, we were being critical of the recommendations. This is not our intent. We were simply stating as a matter of fact that the open-ended, nonspecific nature of many of the NEP recommendations complicated our reporting on recommendation implementation status. With respect to NEP report recommendation implementation, DOE further commented that DOE's own NEP status report was not intended to be comprehensive and that supplementary material could be found in unidentified "budget documents and other means." We recognize that status information may be available from a variety of sources, and we explored those sources in performing our analysis. However, in reviewing the status of efforts to implement the recommendations, we believe it was appropriate to focus on DOE's most recent report on the status of these recommendations. In our view, it does not seem unreasonable to expect that Congress and the American people could find relatively complete information on NEP implementation status in a direct format through one centralized source, especially if that source is entitled NEP Status Report. DOE and other federal agencies provided numerous technical clarifications, observations, and editorial comments, and we have made changes to this report as appropriate. DOE's written comments are reproduced in appendix VII.

As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies to the Secretary of Energy and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge at GAO's Web site at http://www.gao.gov. Questions about this report should be directed to me at (202) 512-3841. Key contributors to this report are James Cooksey, Nancy Crothers, Doreen Feldman, Mark Gaffigan, Michael Gilbert, Erica Haley, Elisabeth Helmer, Chir Huang, Arthur James, Alan Kasdan, Frank Rusco, John Scott, Karla Springer, Anne Stevens, Jena Whitley, and Monica Wolford.

We were asked to (1) identify the federal government's major energy-related efforts, (2) review the status of efforts to implement the May 2001 National Energy Policy (NEP) report recommendations, and (3) determine the extent to which resources associated with federal energy-related efforts have changed since the release of the NEP report.
To identify the federal government's major energy-related efforts, we reviewed the federal agencies that have the most responsibility for implementing the recommendations of the NEP report—the Departments of Energy (DOE), the Interior (DOI), Commerce, Transportation (DOT), State, and Agriculture (USDA), and the Environmental Protection Agency (EPA). We asked these key agencies, and other agencies as time allowed, to identify their energy-related work, and we developed an inventory of the energy-related programs that we identified. Other agencies we included were the Commodity Futures Trading Commission, Department of Justice (DOJ), Federal Energy Regulatory Commission, Federal Trade Commission, Department of Health and Human Services, Nuclear Regulatory Commission, National Science Foundation (NSF), Securities and Exchange Commission, U.S. Army Corps of Engineers, U.S. Trade and Development Agency (USTDA), and U.S. Agency for International Development (USAID). In addition to identifying energy-related programs, we relied on the list of energy-related tax expenditures published in the President's annual budget to identify income tax preferences. We also obtained data from the agencies on energy-related federal collections, including revenue from royalties and user fees. We also attempted to identify collections from energy-related excise taxes. Although the Department of the Treasury does not provide a specific listing of energy-related excise taxes, we used information on the collection of excise taxes that was published by Treasury's Internal Revenue Service to identify these taxes. While this information is updated quarterly, the last full fiscal year available is 2003.

We collected and analyzed agency-reported program and tax preference descriptions and budget request and funding information at these key agencies. Based on our review of the NEP report and the program and tax preference descriptions and our discussions with applicable program officials, we identified eight categories of energy-related activities and grouped the programs and tax preferences by these eight areas: (1) energy supply, (2) energy's impact on the environment and health, (3) low-income energy consumer assistance, (4) basic energy science research, (5) energy delivery infrastructure, (6) energy conservation, (7) energy assurance and physical security, and (8) energy market competition and education.

Because it was often difficult to quantify the resources associated with the energy-related aspects of various programs (agencies often could not provide specific estimates), we relied, where possible, on agency estimates of budget authority for fiscal year 2003, the most recent completed fiscal year for which data were readily available for most of the programs during our review, which began in late 2003. We used the following method to arrive at an estimate of the magnitude of federal energy resources for fiscal year 2003 and for fiscal year 2000. For many programs, we obtained budget request, budget authority, outlay, and obligation information from agency officials and documents to the extent that these numbers were available.
To ensure the accuracy of the financial information provided by the agencies, we attempted to obtain documentation and agency verification, but we could not independently verify the estimates for energy-related programs or activities. In obtaining information on resources associated with most programs, we were able to obtain actual budget authority or estimated budget authority from agency officials. However, some programs do not have readily available estimates of budget authority for their energy-related activities because they are part of a larger appropriation that addresses both energy-related and nonenergy-related activities. For such programs, we asked knowledgeable agency officials to estimate the portion of budget authority dedicated to the energy-related program activities. In some cases, agencies provided estimates of energy-related outlays or obligations. For the following agencies, in consultation with agency officials, we used these agency outlay or obligation estimates as estimates for budget authority: State, U.S. Army Corps of Engineers, NSF, USAID, USTDA, and some USDA, DOT, and EPA programs. On the basis of our examination of the supporting information, we believe that the estimates of budget authority we gathered for federal energy-related programs are sufficiently reliable for the purpose of this report, which is to provide the best available estimate of federal resources for energy-related programs.

In addition to obtaining budget authority estimates for energy-related programs, we also obtained outlay equivalent estimates for energy-related income tax preferences—federal income tax provisions that provide preferential tax treatment related to energy supply and use. Revenue losses resulting from these tax preferences—also called tax expenditures—may, in effect, be viewed as spending channeled through the tax system. The Congressional Budget and Impoundment Control Act of 1974 requires that the budget include a list of tax expenditures. Each year, revenue loss estimates for tax expenditures are prepared by Treasury and the Joint Committee on Taxation. Treasury also produces outlay equivalent estimates—the amount of budget outlays that would be required to provide the taxpayer with the same after-tax income as would be received through the tax expenditure. We used the outlay equivalent measure in quantifying the energy-related tax preferences because it allows the tax preference programs to be compared with federal outlay programs on a more even footing. While the aggregate value for energy-related tax preferences is useful for gauging their general magnitude, summing does not take into account interactions between individual provisions. In addition, tax preferences below $5 million annually are not reported on Treasury's list and, therefore, are not included in this report.

We focused on federal resources associated with key federal agencies that have direct responsibility for the issues addressed in the NEP report and for implementing its recommendations. We attempted to address other agencies as time allowed, but the inventory did not evaluate the efforts of every federal agency. Principally, in this review, we did not attempt to inventory DOD spending and activities. However, DOD is a large user of energy and engages in a wide range of activities that may impact the energy sector.
For example, DOD installations have about 2,600 electric, water, wastewater, and natural gas utility systems valued at about $50 billion. These systems include the equipment, fixtures, and structures used in the distribution of electric power and natural gas; the treatment and distribution of water; and the collection and treatment of wastewater. Because we did not evaluate DOD spending, or every federal agency that may have energy-related activities, this report reflects a significant, but minimum, amount of resources associated with federal programs that may play a role in energy.

In addition, although the federal government has a major impact on the energy industry through regulatory actions, this review did not attempt to inventory the federal regulatory actions that affect energy, but rather focused on federal energy-related programs and tax policies. Federal regulatory actions that affect energy impose costs on the industry, but these costs may be offset by benefits accruing to the population at large or to targeted groups. For example, in its report entitled Progress in Regulatory Reform: 2004 Report to Congress on the Costs and Benefits of Federal Regulations and Unfunded Mandates on State, Local, and Tribal Entities, the Office of Management and Budget (OMB) estimated the annual costs of all major federal rules implemented between fiscal years 1994 and 2003 at about $35 billion to $40 billion and the annual benefits of these rules at between $63 billion and $169 billion. A large fraction of these costs and benefits may be related to energy in that (1) they have come about as the result of regulations to reduce public exposure to fine particulate matter, such as some emissions from burning fuels, or (2) they pertain to regulations promulgated by DOE, in part to address energy efficiency and renewable energy. In this report, we have primarily focused on direct federal programs and tax policies, rather than trying to assess the total economic impact of the federal government on the energy sector. However, the magnitude of the OMB estimates of the costs and benefits of regulation indicates that the federal impact on energy issues may be greater than the sum of resources associated with direct programs and tax preferences.

To review the status of federal efforts to implement the recommendations contained in the May 2001 NEP report, we reviewed publicly reported status information on the implementation of the NEP recommendations, focusing on DOE's most recent January 2005 report on the status of the 106 NEP recommendations. We discussed efforts to monitor and report on the status of these recommendations with DOE's Office of National Energy Policy and other federal agencies involved in energy-related efforts. We also discussed the energy-related programs with the appropriate agency personnel and, when possible, determined whether and how the programs were related to the NEP report recommendations.

To determine the extent to which resources associated with federal energy-related efforts have changed since the release of the NEP report, we compared fiscal year 2000 (shortly before the NEP report) federal programs and budget authority estimates with fiscal year 2003 programs and budget authority estimates. However, we were not able to identify estimates of budget authority for every program for both fiscal years 2000 and 2003. Thus, we compared only those programs for which we could identify an estimate for both years.
As a result, three FERC programs that were included in the inventory of fiscal year 2003 programs and resources were not included in the fiscal years 2000 to 2003 comparison. In addition, we compared outlay equivalents for energy-related tax preferences between fiscal years 2000 and 2003. We were able to obtain outlay equivalent estimates for all 11 energy-related tax preferences for both years as well as projections for fiscal year 2005. Finally, we compared fiscal years 2000, 2003, and 2005 Presidential budget requests for those major energy-related programs that have specific budget requests. However, many of the smaller programs we identified in our inventory do not have specific budget requests. Thus, those programs are not included in the comparison of energy-related budget requests and cannot be compared with the estimates of budget authority provided for all energy-related programs we identified in our inventory. Due to the constraints of developing an inventory of federal energy-related efforts and associated resources within the review time frame, we did not assess the changes within the objectives of the individual program activities within our inventory. Instead, we compared the resources and budget requests associated with federal energy-related efforts in the eight major activity areas. We conducted our review between December 2003 and May 2005 in accordance with generally accepted government auditing standards.

The four federal Power Marketing Administrations (PMAs)—Bonneville Power Administration, Southeastern Power Administration, Southwestern Power Administration, and Western Area Power Administration—market power produced primarily at federal hydroelectric dams and projects. These facilities are owned and operated by either DOI's Bureau of Reclamation, the U.S. Army Corps of Engineers, or the International Boundary and Water Commission. In contrast, the Tennessee Valley Authority (TVA) markets electricity produced at its own fossil, nuclear, and hydroelectric energy facilities. Most electricity marketed by the PMAs is generated from facilities built with federal funding through appropriations or Treasury financing. Sales of this electricity are intended to pay back these appropriated funds or financing as well as offset any ongoing expenses associated with operating or upgrading the facilities, including the construction, operation, and maintenance of hydroelectric facilities by the Bureau of Reclamation, the Corps, or the International Boundary and Water Commission.

The Corps has developed hydroelectric power as part of many of its multipurpose water resources projects. The Corps reports that it has an $18 billion investment in hydropower facilities, which include 75 plants and 350 generating units. Hydropower represents 13 percent of the electrical power generated in the United States, and the Corps reports that its facilities generate 24 percent of it. The Corps is the largest owner/operator of hydroelectric power plants in the United States. The Corps reports that its objective is to keep the plants operating at peak efficiency and reliability by replacing aging turbines, generators, and control systems with state-of-the-art equipment. In fiscal year 2003, the Corps received budget authority of $414 million to fund a portion of these activities. The revenues collected from power sales were either deposited in the Treasury by the PMAs or, as in the case of the Bonneville Power Administration, used directly to fund the Corps' activities.
In fiscal year 2003, Bonneville provided $336 million directly for the Corps' hydroelectric power program in Bonneville's region. The Bureau of Reclamation's central mission is to manage water resources for multiple benefits, including the generation of electricity, at its multipurpose water projects in the western United States. Electricity produced at Reclamation facilities is either used internally at projects or sold as surplus power. Surplus power marketed by the PMAs produces revenues used to repay project costs. In fiscal year 2003, Reclamation received $58.6 million in budget authority for operations of hydroelectric facilities in three of its five regions. In the other two regions, PMAs directly fund the hydroelectric facilities. Finally, the International Boundary and Water Commission operates the Falcon-Amistad Project. The project consists of two dams on the Rio Grande between Texas and Mexico; the United States and Mexico share the dams and operate separate power plants on their respective sides of the river.

By law, the federal utilities are nonprofit and provide selected classes of customers with preference in purchasing their power. These "preference customers" include municipal utilities; cooperatives; state utilities; irrigation districts; and, in some instances, state governments and federal agencies. According to the Energy Information Administration (EIA), in 2003, federal utilities sold about 300 million megawatt-hours of wholesale and retail electricity, a volume equivalent to about 8 percent of total U.S. electricity consumption. In 2000, EIA published a report on federal financial interventions and subsidies in energy markets that included an assessment of subsidies to PMA and TVA customers. In its 2000 report, EIA presented three different methodologies for estimating the value of the implicit support to these customers, measured by the extent to which (1) electric power was sold by federal utilities at below-market prices, (2) federal utilities paid below-market rates on debt they had incurred, or (3) federal utilities' rates of return were below those of their private utility counterparts. The estimated value of the implicit support varied significantly, depending on which methodology was used. Further, EIA's report noted that each of the methodologies has potential problems that make it impossible to choose the best one or to conclude that any one of the three is likely to give a "most accurate" estimate of the actual value. In table 17, we present EIA's measures of implicit support for 1998 for the PMAs and TVA.

The Rural Utilities Service (RUS) is an agency of USDA that provides support to rural communities, including loans and loan guarantees for the development and improvement of electricity services. Under the authority of the Rural Electrification Act of 1936 and its amendments, RUS loans and loan guarantees are (1) to finance the construction of electric transmission and generation facilities, as well as electric system improvements and replacements in rural areas, and (2) to be used for energy conservation programs and renewable energy systems. As of September 30, 2003, RUS had approximately $28 billion in outstanding loans and about $520 million in outstanding loan guarantees. Some RUS loans and loan guarantees provide access to financing at below-market rates, which amounts to a subsidy for some rural users of electricity.
The size of this subsidy depends on the interest rates at which RUS loans are made, as well as on prevailing market interest rates; therefore, the amount of support varies from year to year and according to which measure of market interest is used. In its May 2000 report on federal financial interventions in energy markets, EIA estimated that the value of the subsidy provided by RUS loans and loan guarantees was between $144 million and $1.557 billion in 1998. We asked RUS to estimate the value of the subsidy associated with these loans and loan guarantees for fiscal year 2003, but RUS does not estimate such values. However, RUS did provide us with a figure derived by OMB of about $5 million that reflects the net cost to the government of the program, which is the amount of direct appropriations to the program that is not recaptured by loan repayments. This figure does not reflect the implied interest rate support as measured by the EIA report.

This appendix provides a complete list of the 106 recommendations contained in the May 2001 National Energy Policy report, DOE's January 2005 reported status of these recommendations, and our observations on the reported status. For each of the 106 NEP recommendations, table 18 contains the following information: The first column contains the number and text of the NEP recommendation as printed in the May 2001 NEP report. This number is used to track the recommendations and refers to the chapter in which the recommendation appears in the NEP and its order within that chapter. Thus, "4-3" refers to the fourth chapter of the NEP and the third recommendation within that chapter. The second column contains DOE's overall assessment of the recommendation, such as "Implemented, Activities Ongoing, or Legislation Proposed," and DOE's description of the actions taken to implement the recommendation. This status information was reported by DOE in its January 2005 National Energy Policy Status Report on Implementation of NEP Recommendations. The third column contains our observations on the status of the recommendation provided by DOE as reported in the second column. Our observations may address the reported status of a recommendation, such as noting a lack of specific goals and measures that makes it difficult to assess the progress of federal energy-related efforts to implement the recommendation. In some cases, we include additional information in this column from (1) DOE's responses to questions we raised about its status report or (2) agency comments on a draft of this report. Our observations on the status report should not be viewed as either an endorsement or a critique of the NEP recommendations.
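As an illustration of the record layout just described, the following minimal sketch (ours, not part of the NEP status reporting process or the GAO report itself) shows one way the three columns of table 18 and the chapter-order numbering could be represented; the status and observation strings are hypothetical placeholders.

    # Minimal sketch of the table 18 record layout described above: each of the
    # 106 NEP recommendations carries an identifier such as "4-3" (chapter 4,
    # third recommendation in that chapter), DOE's reported status and actions,
    # and GAO's observation. All field values below are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class NepRecommendation:
        rec_id: str           # e.g., "4-3"
        doe_status: str       # DOE's overall assessment
        doe_actions: str      # DOE's description of actions taken
        gao_observation: str  # GAO's observation on the reported status

        @property
        def chapter(self) -> int:
            # "4-3" -> NEP chapter 4
            return int(self.rec_id.split("-")[0])

        @property
        def order_in_chapter(self) -> int:
            # "4-3" -> third recommendation within chapter 4
            return int(self.rec_id.split("-")[1])

    # Hypothetical example entry:
    example = NepRecommendation(
        rec_id="4-3",
        doe_status="Implemented, Activities Ongoing",
        doe_actions="Summary of actions reported in the January 2005 status report",
        gao_observation="Open-ended wording makes progress difficult to assess",
    )
    assert example.chapter == 4 and example.order_in_chapter == 3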
[Appendix table excerpt: USDA and DOE energy supply programs, including the Cooperative State Research, Education, and Extension Service bioenergy and energy-related programs I through IV; the Farm Service Agency-Commodity Credit Corporation's Bioenergy Program; Forest Service Research and Development bioenergy, energy efficiency, and conservation research; the Office of the Chief Economist's Office of Energy Policy and New Uses; Rural Development Business Programs for renewable energy and energy efficiency; and Energy Supply-Biomass and Biorefinery Systems Research and Development (R&D), with associated estimated budget authority figures shown in parentheses.]

The lives of most Americans are affected by energy. Increased energy demand and higher energy prices have led to concerns about dependable, affordable, and environmentally sound energy. The federal government has adopted energy policies and implemented programs over the years that have focused on the appropriate role of the federal government in energy, attempting to achieve balance between supply and conservation. The May 2001 National Energy Policy (NEP) report contained over 100 recommendations that, it stated, taken together provide a national energy plan that addresses the energy challenges facing the nation. As Congress considers existing federal energy programs and proposed energy legislation in support of the May 2001 report, GAO was asked to (1) identify major federal energy-related efforts, (2) review the status of efforts to implement the recommendations in the May 2001 NEP report, and (3) determine the extent to which resources associated with federal energy-related efforts have changed since the release of the NEP report. Over 150 energy-related program activities and 11 tax preferences address eight major energy activity areas: (1) energy supply, (2) energy's impact on the environment and health, (3) low-income energy consumer assistance, (4) basic energy science research, (5) energy delivery infrastructure, (6) energy conservation, (7) energy assurance and physical security, and (8) energy market competition and education. At least 18 federal agencies, from the Department of Energy (DOE) to the Department of Health and Human Services, have energy-related activities.
Based on fiscal year 2003 data (the most complete data available), the federal government provided a minimum of $9.8 billion in estimated budget authority for the energy-related programs we identified. In addition, various federal energy-related income tax preferences provided another estimated $4.4 billion in outlay equivalent value, primarily for energy supply objectives. On the revenue side, the federal government collected about $10.1 billion in fiscal year 2003 through various energy-related programs and about $34.6 billion in energy-related excise taxes. Significant collections involve royalties from the sale of oil and gas resources on federal lands, while taxes on gasoline and other fuels account for most of the excise taxes.

While DOE reports that most of the 2001 NEP report recommendations are implemented, it is difficult to independently assess the status of efforts made to implement these recommendations because of limited information and the open-ended nature of some of the recommendations themselves. For example, the NEP report recommended the development of energy educational programs, including possible legislation to create education programs funded by the energy industry. However, DOE's January 2005 status report on NEP implementation provided only an overview of federal energy education efforts and made no mention of possible legislation to create such programs. In addition, some of the recommendations are open-ended and lack a specific, measurable goal, which makes it difficult to assess progress. Without a specific, measurable goal, it can be difficult to understand how and to what extent activities are helping to fulfill a recommendation. While this report does not make recommendations, it provides observations on the lack of information on the status of the NEP recommendations, which may hinder policy makers in assessing progress and determining future energy policies.

Resources devoted to energy-related programs have grown since the release of the NEP report. For example, compared with fiscal year 2000, just prior to the 2001 NEP report, fiscal year 2003 estimated budget authority for energy-related programs grew by about 30 percent, from $7.3 billion to $9.6 billion. In addition, over the same period, estimated outlay equivalents for energy-related income tax preferences grew by over 60 percent, from $2.7 billion to $4.4 billion. Federal efforts have continued to address the eight major energy activities. Energy supply continues to be a major emphasis of the federal efforts, accounting for a majority of the growth.
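The percentage changes cited above follow directly from the reported dollar figures; the short Python sketch below is our illustration of that arithmetic and is not drawn from the report itself.

    # Illustrative arithmetic for the growth figures cited above (dollar amounts
    # in billions, as reported; percentages are rounded as in the text).
    def percent_change(old: float, new: float) -> float:
        return (new - old) / old * 100

    # Estimated budget authority for energy-related programs (FY2000 -> FY2003)
    program_growth = percent_change(7.3, 9.6)    # ~31.5%, "about 30 percent"

    # Outlay equivalent estimates for energy-related income tax preferences
    tax_pref_growth = percent_change(2.7, 4.4)   # ~63%, "over 60 percent"

    # Specific energy-related budget requests (FY2000 -> FY2003)
    request_growth = percent_change(5.9, 7.5)    # ~27%, "about 27 percent"

    print(f"Programs: {program_growth:.1f}%  Tax preferences: {tax_pref_growth:.1f}%  "
          f"Requests: {request_growth:.1f}%")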
The National Institute of Justice (NIJ) is the principal research, development, and evaluation agency within the Department of Justice's Office of Justice Programs (OJP). It was created under the 1968 Omnibus Crime Control and Safe Streets Act, and is authorized to enter into grants, cooperative agreements, or contracts with public or private agencies to carry out evaluations of the effectiveness of criminal justice programs and identify promising new programs. NIJ's Office of Research and Evaluation oversees evaluations by outside researchers of a wide range of criminal justice programs, including ones addressing violence against women, drugs and crime, policing and law enforcement, sentencing, and corrections.

According to NIJ officials, the agency initiates a specific criminal justice program evaluation in one of three ways. First, congressional legislation may mandate evaluation of specific programs. For example, the Departments of Commerce, Justice, and State, the Judiciary, and Related Agencies Appropriations Act, 2002, requires DOJ to conduct independent evaluations of selected programs funded by OJP's Bureau of Justice Assistance and selected projects funded by OJP's Office of Juvenile Justice and Delinquency Prevention. DOJ determined that NIJ would be responsible for overseeing these evaluations. Second, NIJ may enter into an evaluation partnership with another OJP or DOJ office, or another federal agency, to evaluate specific programs or issues of interest to both organizations. In these cases, NIJ, in partnership with the program offices, develops a solicitation for proposals and oversees the resulting evaluation. Third, NIJ periodically solicits proposals for evaluation of criminal justice programs directly from the research community, through an open competition for grants. These solicitations ask evaluators to propose research of many kinds in any area of criminal justice, or in broad conceptual areas such as violence against women, policing research and evaluation, research and evaluation on corrections and sentencing, or building safer public housing communities through research partnerships.

According to NIJ officials, once the decision has been made to evaluate a particular program, or to conduct other research in a specific area of criminal justice, the process of awarding an evaluation grant involves the following steps. First, NIJ issues a solicitation and receives proposals from potential evaluators. Next, proposals are reviewed by an external peer review panel, as well as by NIJ professional staff. The external review panels are composed of members of the research and practitioner communities, and reviewers are asked to identify, among other things, the strengths and weaknesses of the competing proposals. External peer review panels are to consider the quality and technical merit of the proposal; the likelihood that grant objectives will be met; the capabilities, demonstrated productivity, and experience of the evaluators; and budget constraints. Reviews are to include constructive comments about the proposal, useful recommendations for change and improvement, and recommendations as to whether the proposal merits further consideration by NIJ. NIJ professional staff are to review all proposals and all written external peer reviews, considering the same factors as the peer review panels. NIJ professional staff are also to consider the performance of potential grantees on any other previous research grants with NIJ.
Next, the results of the peer and NIJ staff reviews are discussed in a meeting of NIJ managers, led by NIJ’s Director of the Office of Research and Evaluation. Then, NIJ’s Office of Research and Evaluation staff meet with the NIJ Director to present their recommendations. Finally, the NIJ Director makes the funding decision based on peer reviews, staff recommendations, other internal NIJ discussions that may have taken place, and consideration of what proposals may have the greatest impact and contribute the most knowledge. NIJ generally funds outcome evaluations through grants, rather than with contracts. NIJ officials told us that there are several reasons for awarding grants as opposed to contracts. Contracts can give NIJ greater control over the work of funded researchers, and hold them more accountable for results. However, NIJ officials said that NIJ most often uses grants for research and evaluation because they believe that grants better ensure the independence of the evaluators and the integrity of the study results. Under a grant, NIJ allows the principal investigator a great deal of freedom to propose the most appropriate methodology and carry out the data collection and analysis, without undue influence from NIJ or the agency funding the program. Grants also require fewer bureaucratic steps than do contracts, resulting in a process whereby a researcher can be selected in a shorter amount of time. NIJ officials told us that NIJ tends to make use of contracts for smaller and more time-limited tasks—such as literature reviews or assessments of whether specific programs have sufficient data to allow for more extensive process or outcome evaluations—rather than for conducting outcome evaluations. NIJ also occasionally makes use of cooperative agreements, which entail a greater level of interaction between NIJ and the evaluators during the course of the evaluation. According to NIJ officials, cooperative agreements between NIJ and its evaluators tend to be slight variations of grants, with the addition of a few more specific requirements for grantees. NIJ officials told us that they might use a cooperative agreement when NIJ wants to play a significant role in the selection of an advisory panel, in setting specific milestones, or aiding in the design of specific data collection instruments. NIJ is to monitor outcome evaluation grantees in accordance with policies and procedures outlined in the OJP Grant Management Policies and Procedures Manual. In general, this includes monitoring grantee progress through regular contact with grantees (site visits, cluster conferences, other meetings); required interim reports (semiannual progress and quarterly financial reports); and a review of final substantive evaluation reports. In some cases, NIJ will require specific milestone reports, especially on larger studies. Grant monitoring for all types of studies is carried out by approximately 20 full-time NIJ grant managers, each responsible for approximately 17 ongoing grants at any one time. From 1992 through 2002, NIJ awarded about $36.6 million for 96 evaluations that NIJ identified as focusing on measuring the outcomes of programs, policies, and interventions, among other things. The 15 outcome evaluations that we selected for review varied in terms of completion status (8 were completed, 7 were ongoing) and the size of the award (ranging between about $150,000 and about $2.8 million), and covered a wide range of criminal justice programs and issues (see table 1). 
All evaluations were funded by NIJ through grants or cooperative agreements. Seven of the 15 evaluations focused on programs designed to reduce domestic violence and child maltreatment, 4 focused on programs addressing the behavior of law enforcement officers (including community policing), 2 focused on programs addressing drug abuse, and 2 focused on programs to deal with juvenile justice issues.

Our review found that 10 of the 15 evaluations were unlikely to produce sufficiently sound information about program outcomes. Six evaluations began with sufficiently sound designs, but encountered implementation problems that would render their results inconclusive. An additional 4 studies had serious methodological problems that from the start limited their ability to produce reliable and valid results. Five studies appeared to be methodologically rigorous in both their design and implementation. (Appendix II provides additional information on the funding, objectives, and methodology of the 15 outcome evaluation studies.)

Our review found that 5 evaluations had both sufficiently sound designs and implementation plans or procedures, thereby maximizing the likelihood that the studies could meaningfully measure program effects. Funding for these methodologically sound studies totaled about $7.5 million, or nearly 50 percent of the approximately $15.4 million spent on the studies we reviewed. Six evaluations were well designed, but they encountered problems implementing the design as planned during the data collection phase of the study. Funding for these studies with implementation problems totaled about $3.3 million, or about 21 percent of the approximately $15.4 million spent on the studies we reviewed.

Five of the evaluations we reviewed were well designed and their implementation was sufficiently sound at the time of our review. Two of these evaluations had been completed and 3 were ongoing. All 5 evaluations met generally accepted social science standards for sound design, including measurement of key outcomes after a follow-up period to measure change over time; use of comparison groups or appropriate statistical controls to account for the influence of external factors on the results; random sampling of participants and/or sites, or other purposeful sampling methods, to ensure generalizable samples and procedures to ensure sufficient sample sizes; and appropriate data collection and analytic procedures to ensure the reliability and validity of measures (see table 2).

All 5 evaluations measured, or included plans to measure, specified outcomes after a sufficient follow-up period. Some designs provided for collecting baseline data at or before program entry, and outcome data several months or years following completion of the program. Such designs allowed evaluators to compare outcome data against a baseline measurement to facilitate drawing conclusions about the program's effects, and to gauge whether the effects persisted or were transitory. For example, the National Evaluation of the Gang Resistance Education and Training Program examined the effectiveness of a 9-week, school-based education program that sought to prevent youth crime and violence by reducing student involvement in gangs. Students were surveyed regarding attitudes toward gangs, crime, and police; self-reported gang activity; and risk-seeking behaviors 2 weeks before the program began, and then again at yearly intervals for 4 years following the program's completion. Measuring change in specific outcome variables at both baseline and after a follow-up period may not always be feasible.
When the outcome of interest is "recidivism," such as whether drug-involved criminal defendants continue to commit criminal offenses after participating in a drug treatment program, the outcome can only be measured after the program is delivered. In this case, it is important that the follow-up period be long enough to enable the program's effects to be discerned. For example, the ongoing evaluation of the Culturally Focused Batterer Counseling for African-American Men seeks to test the relative effectiveness of counseling that recognizes and responds to cultural issues versus conventional batterer counseling in reducing batterer recidivism. All participants in the study had been referred by the court system to counseling after committing domestic violence violations. The evaluators planned to measure re-arrests and re-assaults 1 year after program intake, approximately 8 months after the end of counseling. The study cited prior research literature noting that two-thirds of first-time re-assaults were found to occur within 6 months of program intake, and that over 80 percent of first-time re-assaults over a 2-1/2 year period occurred within 12 months of program intake.

All 5 evaluations used or planned to use comparison groups to isolate and minimize external factors that could influence the results of the study. Use of comparison groups is a practice employed by evaluators to help determine whether differences between baseline and follow-up results are due to the program under consideration or to other programs or external factors. In 3 of the 5 studies, research participants were randomly assigned to a group that received services from the program or to a comparison group that did not receive services. In constructing comparison groups, random assignment is an effective technique for minimizing differences between participants who receive the program and those who do not on variables that might affect the outcomes of the study. For example, in the previously mentioned ongoing evaluation of Culturally Focused Batterer Counseling for African-American Men, participants who were referred to counseling by a domestic violence court are randomly assigned to one of three groups: (1) a culturally focused group composed of only African-Americans, (2) a conventional counseling group composed of only African-Americans, or (3) a mixed-race conventional counseling group. The randomized design allows the investigators to determine the effect of the culturally focused counseling over and above the effect of participating in a same-race group situation.

In the remaining two evaluation studies, a randomized design was not used and the comparison group was chosen to match the program group as closely as possible on a number of characteristics, in an attempt to ensure that the comparison and program groups would be similar in virtually all respects aside from the intervention. For example, the ongoing Evaluation of a Multi-Site Demonstration for Enhanced Judicial Oversight of Domestic Violence Cases seeks to examine the effects of a coordinated community response to domestic violence (including advocacy, provision of victim services, and enhanced judicial oversight) on victim safety and offender accountability. To ensure that the comparison and program groups were similar, comparison sites were selected based on having court caseload and population demographic characteristics similar to the demonstration sites.
Only the program group is to receive the intervention, and neither comparison site has a specialized court docket, enhanced judicial oversight, or a county-wide, coordinated system for handling domestic violence cases.

All 5 evaluations employed or planned to employ sufficiently sound sampling procedures for selecting program and comparison participants. This was intended to ensure that study participants were representative of the population being examined so that conclusions about program effects could be generalized to that population. For example, in the previously mentioned Judicial Oversight Demonstration evaluation, offenders in program and comparison sites are being chosen from court records. In each site, equal numbers of eligible participants are being chosen consecutively over a 12-month period until a monthly quota is reached. Although this technique falls short of random sampling, the optimal method for ensuring comparability across groups, use of the 12-month sampling period takes into consideration and controls for possible seasonal variation in domestic violence cases.

The 5 evaluations also had adequate plans to achieve, or succeeded in achieving, reasonable response rates from participants in their samples. Failure to achieve adequate response rates threatens the validity of conclusions about program effects, as it is possible that selected individuals who do not respond or participate are substantially different on the outcome variable of interest from those who do respond or participate. The previously mentioned National Evaluation of the Gang Resistance Education and Training Program sought to survey students annually for up to 4 years after program participation ended. The grantee made considerable efforts in years 2, 3, and 4 to follow up with students who had moved from middle school to high school and were later enrolled in a large number of different schools; in some cases, in different school districts. The grantee achieved a completion rate on the student surveys of 76 percent after 2 years, 69 percent after 3 years, and 67 percent after 4 years. The grantee also presented analyses that statistically controlled for differential attrition among the treatment and comparison groups, and across sites, and showed that the program effects that were found persisted in these specialized analyses.

All 5 well-designed evaluations employed or had adequate plans to employ careful data collection and analysis procedures. These included procedures to ensure that the comparison group does not receive services or treatment received by the program group, that response rates are documented, and that statistical analyses are used to adjust for the effects of selection bias or differential attrition on the measured results. For example, the Breaking the Cycle evaluation examined the effectiveness of a comprehensive effort to reduce substance abuse and criminal activity among arrestees with a history of drug involvement. The program group consisted of felons who tested positive for drug use, reported drug use in the past, or were charged specifically with drug-related felonies. The comparison group consisted of persons arrested a year before the implementation of the Breaking the Cycle intervention who tested positive for at least one drug. Both groups agreed to participate in the study. Although groups selected at different times and using different criteria may differ in systematic ways, the evaluators made efforts to control for differences in the samples at baseline.
Where selection bias was found, a correction factor was used in the analyses, and corrected results were presented in the report. Six of the 11 studies that were well-designed encountered problems in implementation during the data collection phase, and thus were unable to or are unlikely to produce definitive results about the outcomes of the programs being evaluated. Such problems included the use of program and comparison groups that differed on outcome-related characteristics at the beginning of the program or became different due to differential attrition, failure of the program sponsors to implement the program as originally planned, and low response rates among program participants (see table 3). Five of the studies had been completed and 1 was ongoing. Three of the 6 studies used a comparison group that differed from the program group in terms of characteristics likely to be related to program outcomes—either due to preexisting differences or to differential attrition—even though the investigators may have made efforts to minimize the occurrence of these problems. As a result, a finding that program and comparison group participants differed in outcomes could not be attributed solely to the program. For example, the Comprehensive Service-Based Intervention Strategy in Public Housing evaluation sought to reduce drug activity and promote family self-sufficiency among tenants of a public housing complex in one city through on-site comprehensive services and high profile police involvement. The intervention site was a housing project in one section of the city; the comparison site was another public housing complex on the opposite side of town, chosen for its similarities to the intervention site in terms of race, family composition, crime statistics, and the number of women who were welfare recipients. However, when baseline data from the two sites were examined, important preexisting differences between the two sites became apparent. These differences included a higher proportion of residents at the comparison site who were employed, which could have differentially affected intervention and comparison residents’ propensity to utilize and benefit from available services. Additionally, since there was considerable attrition at both the intervention and comparison sites, it is possible that the intervention and comparison group respondents who remained differed on some factors related to the program outcomes. Although it may have been possible to statistically control for these differences when analyzing program outcomes, the evaluator did not do so in the analyses presented in the final report. In 5 of the 6 studies, evaluators ran into methodological problems because the program under evaluation was not implemented as planned, and the investigators could not test the hypotheses that they had outlined in their grant proposals. For the most part, this particular implementation problem was beyond the evaluators’ control. It resulted from decisions made by agencies providing program services that had agreed to cooperate with the evaluators but, for a number of reasons, made changes in the programs or did not cooperate as fully as expected after the studies were underway. This occurred in the evaluation of the Juvenile Justice Mental Health Initiative with Randomized Design, a study that is ongoing and expected to be completed in September 2003. 
The investigators had proposed to test whether two interventions provided within an interagency collaborative setting were effective in treating youths with serious emotional disturbances referred to the juvenile justice system for delinquency. Juveniles were to be randomly assigned to one of two treatment programs, depending on age and offense history (one for youth under the age of 14 without serious, violent, or chronic offense history, and one for youth ages 14 and older with serious, violent, or chronic delinquencies) or to a comparison group that received preexisting court affiliated service programs. The evaluators themselves had no power to develop or modify programs. The funding agencies contracted with a local parent support agency and with a nonprofit community-based agency to implement the programs, but the program for youth under the age of 14 was never implemented. In addition, partway through the study, the funding agencies decided to terminate random assignment of juveniles, and shortly thereafter ended the program. As a result, the evaluators had complete data on 45 juveniles who had been in the treatment program, rather than on the 100 juveniles they had proposed to study. Although the study continued to collect data on juveniles eligible for the study (who were then assigned to the comparison group, since a treatment option was no longer available), the evaluators proposed to analyze the data from the random experiment separately, examining only those treatment and comparison youths assigned when program slots were available. Because of the smaller number of participants than anticipated, detailed analyses of certain variables (such as the type, or amount of service received, or the effects of race and gender) are likely to be unreliable. Low response rates were a problem in 2 of the 6 studies, potentially reducing the reliability and validity of the findings. In a third study, response rates were not reported, making it impossible for us to determine whether this was a problem or not. In one study where the response rate was a problem, the evaluators attempted to survey victims of domestic abuse, a population that NIJ officials acknowledged was difficult to reach. In An Evaluation of Victim Advocacy With a Team Approach, the evaluators attempted to contact by telephone women who were victims of domestic violence, to inquire about victims’ experiences with subsequent violence and their perceptions of safety. Response rates were only about 23 percent, and the victims who were interviewed differed from those who were not interviewed in terms of the nature and seriousness of the abuse to which they had been subjected. NIJ’s program manager told us that when she became aware of low response rates on the telephone survey, she and the principal investigator discussed a variety of strategies to increase response rates. She said the grantee expended additional time and effort to increase the response rate, but had limited success. In the other study with low response rates—Reducing Non-Emergency Calls to 911: An Assessment of Four Approaches to Handling Citizen Calls for Service—investigators attempted to survey police officers in one city regarding their attitudes about the city’s new non-emergency phone system. Only 20 percent of the police officers completed the survey. Four of the evaluation studies began with serious design problems that diminished their ability to produce reliable or valid findings about program outcomes. 
One of the studies was completed, and 3 were ongoing. The studies' design problems included the lack of comparison groups, failure to measure the intended outcomes of the program, and failure to collect preprogram data as a baseline for the outcomes of interest (see table 4). Funding for these studies that began with serious methodological problems totaled about $4.7 million, or about 30 percent of the approximately $15.4 million spent on the studies we reviewed.

None of the 4 outcome evaluation studies had a comparison group built into the design—a factor that hindered the evaluators' ability to isolate and minimize external factors that could influence the results of the study. The completed National Evaluation of the Rural Domestic Violence and Child Victimization Enforcement Grant Program did not make use of comparison groups to study the effectiveness of the federal grant program that supports projects designed to prevent and respond to domestic violence, dating violence, and child victimization in rural communities. Instead, evaluators collected case study data from multiday site visits to 9 selected sites. The other three funded grant proposals submitted to NIJ indicated that they anticipated difficulty in locating and forming appropriate comparison groups. However, they proposed to explore the feasibility of using comparison groups in the design phase following funding of the grant. At the time of our review, when each of these studies was well into implementation, none was found to be using a comparison group. For example, the Evaluation of a Multi-Site Demonstration of Collaborations to Address Domestic Violence and Child Maltreatment proposed to examine whether steps taken to improve collaboration among dependency courts, child protective services, and domestic violence service providers in addressing the problems faced by families with co-occurring instances of domestic violence and child maltreatment resulted in improvements in how service providers dealt with domestic violence and child maltreatment cases. Although NIJ stated that the evaluators planned to collect individual case record data from similar communities, at the time of our review these sites had not yet been identified, nor had a methodology for identifying the sites been proposed. Our review was conducted during the evaluation's third year of funding.

Although they were funded as outcome evaluations, 2 of the 4 studies were not designed to provide information on intended outcomes for individuals served by the programs. Both the Rural Domestic Violence and the Multi-Site Demonstration of Collaborations programs had as their objectives the enhanced safety of victims, among other goals. However, neither of the evaluations of these programs collected data on individual women victims and their families in order to examine whether the programs achieved this objective. Most of the data collected in the Rural Domestic Violence evaluation were indicators of intermediary results, such as increases in the knowledge and training of various rural service providers. While such intermediary results may be necessary precursors to achieving the program's objectives of victim safety, they are not themselves indicators of victim safety. The Multi-Site Demonstration of Collaborations evaluation originally proposed to collect data on the safety of women and children as well as perpetrator recidivism, but in the second year of the evaluation project, the evaluators filed a request to change the scope of the study.
Specifically, they noted that the original outcome indicators proposed for victim safety were not appropriate given the evaluation's time frame relative to the progress of the demonstration project itself. The modified scope, which was approved by NIJ, focused on system-level rather than individual-level outcomes. The new "effectiveness" indicators included such things as changes in the policies and procedures of agencies participating in the collaboration, and how agency personnel identify, process, and manage families with co-occurring domestic violence and child maltreatment. Such a design precludes conclusions about whether the programs improved the lives of victims of domestic violence or their children.

As discussed in our March 2002 report, the Rural Domestic Violence evaluation team did not collect baseline data prior to the start of the program, making it difficult to identify change resulting from the program. In addition, at the time of our review, in the third year of the multiyear National Evaluation of the Domestic Violence Victims' Civil Legal Assistance Program, the evaluator did not know whether baseline data would be available to examine changes resulting from the program. This evaluation, of the federal Civil Legal Assistance program, proposed to measure whether there had been a decrease in pro se representation (or self-representation) in domestic violence protective order cases. A decrease in pro se representation would indicate successful assistance to clients by Civil Legal Assistance grantees. In May 2003, NIJ reported that the evaluator was still in the process of contacting the court systems at the study sites to see which ones had available data on pro se cases. The evaluator also proposed to ask a sample of domestic violence victims about their access to civil legal assistance services prior to the program, the outcomes of their cases, and their satisfaction with services. Respondents were to be selected from a list of domestic violence clients served by Civil Legal Assistance grantees within a specified time period, possibly 3 to 9 months prior to the start of the outcome portion of the study. Such retrospective data on experiences that may have occurred more than 9 months earlier must be interpreted with caution, given the possibility of recall errors or respondents' lack of knowledge about services that were available in the past.

Outcome evaluations are inherently difficult to conduct because in real-world settings program results can be affected by factors other than the intervention being studied. In addition, grantees' ability to conduct such evaluations can depend on the extent to which information is available up front about what data are available to answer the research questions, where such data can be obtained, and how the data can be collected for both the intervention and comparison groups. We found that in 3 of the 15 NIJ evaluations we reviewed, NIJ lacked sufficient information about these issues to assure itself that the proposals it funded were feasible to carry out. These 3 studies totaled about $3.7 million. For the Evaluation of Non-Emergency Calls to 911, NIJ and DOJ's Office of Community Oriented Policing Services jointly solicited grant proposals to evaluate strategies taken by 4 cities to decrease non-emergency calls to the emergency 911 system. NIJ officials told us that they had conducted 3-day site visits to the 4 sites, and that discussions with local officials included questions about the availability of data in each jurisdiction.
The NIJ solicitation for proposals contained descriptions of how non-emergency calls were processed at all 4 sites, but no information on the availability of outcome data to assess changes in the volume, type, and nature of emergency and non-emergency calls before and after the advent of the non-emergency systems. Evaluators were asked to conduct both a process analysis and an assessment analysis. The assessment analysis was to include "compiling and/or developing data" on a number of outcome questions. Once the study was funded, however, the grantee learned that only 1 of the 4 cities had both a system designed specifically to reduce non-emergency calls to 911 and reliable data for evaluation purposes.

In the case of the Multi-Site Demonstration of Collaborations to Address Domestic Violence and Child Maltreatment, NIJ funded the proposal without knowing whether the grantee would be able to form comparison groups. NIJ officials stated that one of the reasons for uncertainty about the study design was that at the time the evaluator was selected, the 6 demonstration sites had not yet been selected. The proposal stated that the grantee would explore the "potential for incorporating comparison communities or comparison groups at the site level, and assess the feasibility, costs, and contributions and limitations of a design that incorporates comparison groups or communities." NIJ continued to fund the grantee for 3 additional years, although the second-year proposal for supplemental funding made no mention of comparison groups and the third-year proposal stated that the grantee would search for comparison sites, but did not describe how such sites would be located. In response to our questions about whether comparison groups would be used in the study, NIJ officials said that the plan was for the grantee to compare a random sample of case records from before program implementation to those after implementation at each of the demonstration sites. Designs that rely on pre-post comparisons within a single group are not considered as rigorous as pre-post designs that include a comparison group, because they do not allow evaluators to determine whether the results are due to the program under consideration or to other programs or external factors (a simplified numerical illustration follows below). NIJ also approved the Multi-Site Demonstration of Collaborations proposal without knowing whether data on individual victims of domestic violence and child maltreatment would be available during the time frame of the evaluation. The first-year proposal stated that the grantee would examine outcomes for individuals and families, although it also noted that there are challenges to assessing such outcomes and that system outcomes should be examined first. Our review found that in the third year of the evaluation, data collection was focused solely on "system" outcomes, such as changes in policies and procedures and how agency personnel identify, process, and manage families with co-occurring domestic violence and child maltreatment. Thus, although the original design called for answering questions about the outcomes of the program for individuals and families, NIJ could not expect answers to such questions.
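The limitation of a single-group pre-post comparison noted above can be illustrated with a simple difference-in-differences calculation. The sketch below uses hypothetical repeat-maltreatment rates (no such figures appear in the grant documents) to show how a pre-post change measured only at the demonstration sites can misstate a program's effect when outcomes are also changing for reasons unrelated to the program.

# Sketch contrasting a single-group pre-post estimate with a
# difference-in-differences estimate. All rates are hypothetical.

# Share of families with a repeat report within 12 months.
treatment_pre, treatment_post = 0.40, 0.30     # demonstration sites
comparison_pre, comparison_post = 0.38, 0.33   # similar sites without the program

# Single-group pre-post: attributes the entire change to the program.
pre_post_only = treatment_post - treatment_pre

# Difference-in-differences: nets out the change that occurred anyway in
# comparable sites (secular trends, other initiatives, and so on).
diff_in_diff = (treatment_post - treatment_pre) - (comparison_post - comparison_pre)

print(f"Single-group pre-post estimate:      {pre_post_only:+.2f}")
print(f"Difference-in-differences estimate:  {diff_in_diff:+.2f}")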
In the case of the Civil Legal Assistance study, NIJ officials told us that they have held discussions with the grantee about the feasibility of adding comparison groups to the design. According to these officials, the grantee said that a comparison group design would force it to reduce the process sites to be studied from 20 to somewhere between 6 and 8. NIJ concluded that so large a reduction in sites would be too high a price to pay to obtain comparison groups and advised the grantee to stay with the design as originally proposed. Consequently, NIJ cannot expect a rigorous assessment of outcomes from this evaluation.

In the 5 completed NIJ studies that focused on issues of interest to DOJ program offices, findings related to program effectiveness were not sufficiently reliable or conclusive. However, DOJ program administrators told us that they found some of the process and implementation findings from the completed studies to be useful. Program administrators from DOJ's Office on Violence Against Women said that although they did not obtain useful outcome results from the Rural Domestic Violence evaluation, they identified two "lessons learned" from the process and implementation components of the study. First, the evaluation found that very little information was available to grantees regarding how to create collaborative programs. Thus, DOJ engaged a technical assistance organization to develop a training program on how to create collaborative projects, based on the experiences of some of the grantees examined by the Rural evaluation. Second, program administrators told us that the evaluation found that because Rural grants were funded on an 18-month schedule, programs did not have adequate time to structure program services and also collect useful program information. As a result, Rural programs are now funded for at least 24 months.

While shortcomings in NIJ's outcome evaluations of law enforcement programs leave questions about whether the programs are effective and whether they should continue to be funded, program administrators in DOJ's Office of Community Oriented Policing Services said that the studies helped identify implementation problems that assisted them in developing and disseminating information in ways useful to the law enforcement community. These included curriculum development, leadership conferences, and fact sheets and other research publications. For example, as a result of the NIJ-managed study Responding to the Problem Police Officer: An Evaluation of Early Warning Systems, DOJ officials developed a draft command-level guidebook that focuses on the factors to be considered in developing an early warning system, developed an early warning intervention training curriculum that is being taught by the 31 Regional Community Policing Institutes located across the country, and convened a "state-of-the-art" conference for five top law enforcement agencies that were developing early warning systems. DOJ officials also said the studies showed that the various systems evaluated had been well received by citizens and law enforcement officials. For example, they said that citizens like the 311 non-emergency number that was established in several cities to serve as an alternative to calling the 911 emergency number. The system allows law enforcement officers to identify hot spots or trouble areas in the city by looking at various patterns in the citizen call data. Officials may also be able to monitor overall conditions in the city, such as the presence of potholes.
Similarly, Chicago’s City-Wide Community Policing program resulted in the development of a crime mapping system, enabling officers to track crime in particular areas of the city. Like the non-emergency telephone systems, DOJ officials believe that crime mapping helps inform citizens, police, and policy makers about potential problem areas. NIJ officials told us that they have begun to take several steps to try to increase the likelihood that outcome evaluations will produce more definitive results. We recommended in our March 2002 report on selected NIJ-managed outcome evaluations that NIJ assess its evaluation process to help ensure that future outcome evaluations produce definitive results. In November 2002, Congress amended the relevant statute to include cost- effectiveness evaluation where practical as part of NIJ’s charge to conduct evaluations.Since that time NIJ has established an Evaluation Division within NIJ’s Office of Research and Evaluation. NIJ officials told us that they have also placed greater emphasis on funding cost-benefit studies, funded feasibility studies prior to soliciting outcome evaluations, and placed greater emphasis on applicants’ prior performance in awarding grants. In January 2003, NIJ established an Evaluation Division within NIJ’s Office of Research and Evaluation, as part of a broader reorganization of NIJ programs. According to NIJ, the Division will “oversee NIJ’s evaluations of other agency’s programs and…develop policies and procedures that establish standards for assuring quality and utility of evaluations.” NIJ officials told us that among other things, the Division will be responsible for recommending to the NIJ Director which evaluations should be undertaken, assigning NIJ staff to evaluation grants and overseeing their work, and maintaining oversight responsibility for ongoing evaluation grants. In addition, NIJ officials told us that one of the NIJ Director’s priorities is to put greater emphasis on evaluations that examine the costs and benefits of programs or interventions. To support this priority, NIJ officials told us that the Evaluation Division had recently developed training for NIJ staff on cost-benefit and cost-effectiveness analysis. NIJ recently undertook 37 “evaluability assessments” to assess the feasibility of conducting outcome evaluations of congressionally earmarked programs prior to soliciting proposals for evaluation. In 2002 and 2003, these assessments were conducted to examine each project’s scope, activities, and potential for rigorous evaluation. The effort included telephone interviews and site visits to gather information regarding such things as what outcomes could be measured, what kinds of data were being collected by program staff, and the probability of using a comparison group or random assignment in the evaluation. Based on the review, NIJ solicited proposals from the research community to evaluate a subset of the earmarked programs that NIJ believed were ready for outcome evaluation. NIJ officials also stated that in an effort to improve the performance of its grantees, it has begun to pay greater attention to the quality and timeliness of their performance on previous NIJ grants when reviewing funding proposals. As part of NIJ’s internal review of grant applications, NIJ staff check that applicants’ reports are complete and accurate and evaluate past work conducted by the applicant using performance related measures. 
Although this is not a new activity, NIJ officials told us that NIJ was now placing more emphasis on reviewing applicants' prior performance than it had in the past. NIJ officials told us that NIJ staff may also contact staff in other OJP offices, where the applicant may have received grant funding, to assess applicant performance on those grants.

Our in-depth review of 15 outcome evaluations managed by NIJ during the past 10 years indicated that the majority were beset with methodological and/or implementation problems that limited the ability to draw meaningful conclusions about the programs' effectiveness. Although our sample is not representative of all NIJ outcome evaluations conducted during the last 10 years, it includes those that have received a large proportion of the total funding for this type of research, and it is drawn largely from the most recent work. The findings from this review, coupled with similar findings we reported in other reviews of NIJ outcome evaluations, raise concerns about the level of attention NIJ is focusing on ensuring that funded outcome evaluations produce credible results. We recognize that it is very difficult to design and execute outcome evaluations that produce meaningful and definitive results. Real-world evaluations of complex social programs inevitably pose methodological challenges that can be difficult to control and overcome. Nonetheless, we believe it is possible to conduct outcome evaluations in real-world settings that produce meaningful results. Indeed, 5 of NIJ's outcome evaluations can be characterized in this way, and these 5 accounted for about 48 percent of the $15.4 million spent on the studies we reviewed. We also believe that NIJ could do more to help ensure that the millions of dollars it spends annually to evaluate criminal justice programs are well spent. Indeed, poor evaluations can have substantial costs if they result in continued funding for ineffective programs or the curtailing of funding for effective programs. NIJ officials told us that they recognize the need to improve their evaluation efforts and have begun to take several steps in an effort to increase the likelihood that outcome evaluations will produce more conclusive results. These steps include determining whether a program is ready for evaluation and monitoring evaluators' work more closely. We support NIJ's efforts to improve the rigor of its evaluations. However, it is too soon to tell whether and to what extent these efforts will lead to NIJ funding more rigorous effectiveness evaluations and result in NIJ obtaining evaluative information that can better assist policy makers in making decisions about criminal justice funding priorities. In addition to the steps that NIJ is taking, we believe that NIJ can benefit from reviewing problematic studies it has already funded in order to determine the underlying causes of the problems and to identify ways to avoid them in the future.

Recommendations for Executive Action

We recommend that the Attorney General instruct the Director of NIJ to conduct a review of NIJ's ongoing outcome evaluation grants—including those discussed in this report—and develop appropriate strategies and corrective measures to ensure that methodological design and implementation problems are overcome so the evaluations can produce more conclusive results. Such a review should consider the design and implementation issues we identified in our assessment in order to decide whether and what type of intervention may be appropriate.
If, based on NIJ’s review, it appears that the methodological problems cannot be overcome, NIJ should consider refocusing the studies’ objectives and/or limiting funding. Continue efforts to respond to our March 2002 recommendation that NIJ assess its evaluation process with the purpose of developing approaches to ensure that future outcome evaluation studies are funded only when they are effectively designed and implemented. The assessment could consider the feasibility of such steps as: obtain more information about the availability of outcome data prior to developing a solicitation for research; require that outcome evaluation proposals contain more detailed design specifications before funding decisions are made regarding these proposals; and more carefully calibrate NIJ monitoring procedures to the cost of the grant, the risks inherent in the proposed methodology, and the extent of knowledge in the area under investigation. We provided a copy of a draft of this report to the Attorney General for review and comment. In a September 4, 2003, letter, DOJ’s Assistant Attorney General for the Office of Justice Programs commented on the draft. Her comments are summarized below and presented in their entirety in appendix III. The Assistant Attorney General stated that NIJ agreed with our recommendations. She also highlighted NIJ’s current and planned activities to improve its evaluation program. For example, as we note in the report, NIJ has established an Evaluation Division and initiated a new strategy of evaluability assessments. Evaluability assessments are intended to be quick, low cost initial assessments of criminal or juvenile justice programs to help NIJ determine if the necessary conditions exist to warrant sponsoring a full-scale outcome evaluation. To improve its grantmaking process, the Assistant Attorney General stated that NIJ is developing a new grant “special conditions” that will require grantees to document all changes in the scope and components of evaluation designs. In response to our concerns, NIJ also plans, in fiscal year 2004, to review its grant monitoring procedures for evaluation grants in order to more intensively monitor the larger or more complex grants. NIJ also plans to conduct periodic reviews of its evaluation research portfolio to assess the progress of ongoing grants. This procedure is to include documenting any changes in evaluation design that may have occurred and reassessing the expected benefits of ongoing projects. In her letter, the Assistant Attorney General made two substantive comments—both concerning our underlying assumptions in conducting the review—with which we disagree. In her first comment, the Assistant Attorney General noted that our report implies that conclusive evaluation results can always be achieved if studies are rigorously designed and carefully monitored. We disagree with this characterization of the implication of our report. While sound research design and careful monitoring of program implementation are factors that can significantly affect the extent to which outcome evaluation results are conclusive, they are not the only factors. We believe that difficulties associated with conducting outcome evaluations in real world settings can give rise to situations in which programs are not implemented as planned or requisite data turn out not to be available. In such instances, even a well-designed and carefully monitored evaluation will not produce conclusive findings about program effectiveness. 
Our view is that when such problems occur, NIJ should respond and take appropriate action. NIJ could (1) take steps to improve the methodological adequacy of the studies if it is feasible to do so, (2) reconsider the purpose and scope of the evaluation if there is interest in aspects of the program other than its effectiveness, or (3) decide to end the evaluation project if it is not likely to produce useful information on program outcomes.

In her second comment, the Assistant Attorney General expressed the view that our work excluded consideration of valid, high-quality evaluation methods other than experimental and quasi-experimental design. We believe that our assessment of NIJ's outcome evaluations was both appropriate and comprehensive. We examined a variety of methodological attributes of NIJ's studies in trying to assess whether they would produce sufficiently sound information on program outcomes. Among other things, we systematically examined such factors as the type of evaluation design used; how program effects were isolated (that is, whether comparison groups or statistical controls were utilized); the size of study samples and the appropriateness of sampling procedures; the reliability, validity, and appropriateness of outcome measures; the length of follow-up periods on program participants; the extent to which program attrition or program participant nonresponse may have been an issue; the appropriateness of the analytic techniques that were employed; and the reported results. Therefore, we made determinations about the cause-and-effect linkages between programs and outcomes using a broad range of methodological information. In discussing the methodological strengths of experimental and quasi-experimental designs, we did not intend to be dismissive of other potential approaches to isolating the effects of program interventions. For example, if statistical controls can be employed to adequately compensate for a methodological weakness, such as the existence of a comparison group that is not comparable on characteristics that could affect the study's outcome, then we endorse the use of such a technique. However, in those instances where our review found that NIJ's studies could not produce sufficiently sound information about program outcomes, we saw no evidence that program effects had been isolated using alternative, compensatory, or supplemental methods. In addition to these comments, the Assistant Attorney General also provided us with a number of technical comments, which we incorporated in the report as appropriate.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from its date. At that time, we will send copies to the Attorney General, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov.

In response to your request, we undertook a review of the outcome evaluation work performed under the direction of the National Institute of Justice (NIJ) during the last 10 years. We are reporting on (1) the methodological quality of a sample of completed and ongoing NIJ outcome evaluation grants and (2) the usefulness of the evaluations in producing information on program outcomes. Our review covered outcome evaluation grants managed by NIJ from 1992 through 2002.
Outcome evaluations are defined as those efforts designed to determine whether a program, project, or intervention produced its intended effects. These kinds of studies can be distinguished from process evaluations, which are designed to assess the extent to which a program is operating as intended. To determine the methodological quality of a sample of NIJ-managed outcome evaluations, we asked NIJ, in June 2002, to identify and give us a list of all outcome evaluations managed by NIJ that were initiated during the last 10 years, or initiated at an earlier date but completed during the last 5 years. NIJ identified 96 evaluation studies that contained outcome evaluation components that had been awarded during this period. A number of these studies included both process and outcome components. We did not independently verify the accuracy or completeness of the data NIJ provided. These 96 evaluations were funded for a total of about $36.6 million. Individual grant awards ranged in size from $22,374 to about $2.8 million. Twenty grants were awarded for $500,000 or more, for a total of about $22.8 million (accounting for about 62 percent of all funding for NIJ outcome evaluations during the 10-year review period); 51 grants for less than $500,000, but more than $100,000, for a total of about $11.7 million (accounting for about 32 percent of all NIJ outcome evaluation funding); and 25 grants for $100,000 or less, for a total of about $2.1 million (accounting for about 6 percent of all NIJ outcome evaluation funding). Fifty-one of the 96 evaluations had been completed at the time of our review; 45 were ongoing. From the list of 96 outcome evaluation grants, we selected a judgmental sample of 16 grants for an in-depth methodological review. Our sample selection criteria were constructed so as to sample both large and medium-sized grants (in terms of award size), and both completed and ongoing studies. We selected 8 large evaluations—funded at $500,000 or above—and 8 medium-sized evaluations—funded at between $101,000 and $499,000. Within each group of 8 we selected the 4 most recently completed evaluations, and the 4 most recently initiated evaluations that were still ongoing, in an effort to ensure that the majority of the grants reviewed were subject to the most recent NIJ grant management policies and procedures. One of the medium-sized ongoing evaluations was dropped from our review when we determined that the evaluation was in the formative stage of development; that is, the application had been awarded but the methodological design had not yet been fully developed. As a result, our in-depth methodological review covered 15 NIJ-managed outcome evaluations accounting for about 42 percent of the total spent on outcome evaluation grants between 1992 and 2002 (see tables 5 and 6). These studies are not necessarily representative of all outcome evaluations managed by NIJ during this period. The evaluations we selected comprised a broad representation of issues in the criminal justice field and of program delivery methods. In terms of criminal justice issues, 7 of the 15 evaluations focused on programs designed to reduce domestic violence, 4 focused on programs addressing the behavior of law enforcement officers, 2 focused on programs addressing drug abuse, and 2 focused on programs to deal with juvenile justice issues. 
In terms of program delivery methods, 3 evaluations examined national discretionary grant programs or nationwide cooperative agreements, 4 examined multisite demonstration programs, and 8 examined local programs or innovations.

For the 15 outcome evaluations we reviewed, we asked NIJ to provide any documentation relevant to the design and implementation of the outcome evaluation methodologies, such as the application solicitation, the grantee's initial and supplemental applications, progress notes, interim reports, requested methodological changes, and any final reports that may have become available. We used a data collection instrument to obtain information systematically about each program being evaluated and about the features of the evaluation methodology. We based our data collection and assessments on generally accepted social science standards. We examined such factors as whether evaluation data were collected before and after program implementation; how program effects were isolated (i.e., the use of comparison groups of nonprogram participants or statistical controls); and the appropriateness of sampling, outcome measures, statistical analyses, and any reported results. A senior social scientist with training and experience in evaluation research and methodology read and coded the documentation for each evaluation. A second senior social scientist reviewed each completed data collection instrument and the relevant documentation for the outcome evaluation to verify the accuracy of every coded item. We relied on documents NIJ provided to us between October 2002 and May 2003 in assessing the evaluation methodologies and reporting on each evaluation's status. We grouped the studies into 3 categories based on our judgment of their methodological soundness. Although we recognize that the stronger studies may have had some weaknesses, and that the weaker studies may have had some strengths, our categorization of the studies was a summary judgment based on the totality of the information provided to us by NIJ. Following our review, we interviewed NIJ officials regarding NIJ's role in soliciting, selecting, and monitoring these grants, and spoke to NIJ grant managers regarding issues raised about each of the grants during the course of our methodological review. In the course of our discussions with NIJ officials, we learned of changes NIJ has underway to improve its administration of outcome evaluation studies. To document these changes, we interviewed responsible NIJ officials and requested and reviewed relevant documents. We are providing information in this report about these changes.

To assess the usefulness of the evaluations in producing information on program outcomes, we reviewed reported findings from completed NIJ-managed outcome evaluations that either evaluated programs administered or funded by the Department of Justice (DOJ) or had been conducted with funding contributed by DOJ program offices (see table 7). Of the 8 completed evaluations that we reviewed for methodological adequacy, 5 had been conducted with funding contributed in part by DOJ program offices, including 2 evaluations funded in part by DOJ's Office on Violence Against Women (OVW) and 3 evaluations funded in part by DOJ's Office of Community Oriented Policing Services (COPS). Of the 2 evaluations funded by OVW, 1 was a review of a national program administered by DOJ, and the other was a review of a locally administered program funded partially by an OVW grant.
Of the 3 evaluations funded by COPS, 2 were evaluations of programs funded at least in part with COPS funding, and the other was an evaluation of a program operating at several local law enforcement agencies, supported with local funding. Because of our interest in the effectiveness of criminal justice programs, we limited our review of the usefulness of NIJ outcome evaluations to evaluations of DOJ programs, or evaluations funded by DOJ program offices, and did not examine the 3 other completed NIJ outcome evaluations that focused on programs funded by agencies other than DOJ. We interviewed NIJ officials and relevant DOJ program administrators regarding whether these findings were used to implement improvements in the evaluated programs. At OVW and COPS, we asked officials the extent to which they (1) were involved in soliciting and developing the evaluation grant, and monitoring the evaluation; (2) were aware of the evaluation results; and (3) had made any changes to the programs they administered based on evaluation findings about the effectiveness of the evaluated programs. We conducted our work at NIJ headquarters in Washington, D.C., between May 2002 and August 2003 in accordance with generally accepted government auditing standards.

The National Evaluation of the Gang Resistance Education and Training (GREAT) Program (University of Nebraska at Omaha)

The GREAT program began in 1991 with the goal of using federal, state, and local law enforcement agents to educate elementary school students in areas prone to gang activity about the destructive consequences of gang membership. The program seeks to prevent youth crime and violence by reducing involvement in gangs. According to the evaluator's proposal, as of April 1994, 507 officers in 37 states (150 sites) had completed GREAT training. GREAT targets middle school students (with an optional curriculum for third and fourth graders) and consists of 8 lessons taught over a 9-week period. Process and outcome evaluations began in 1994 and were completed in 2001. Total evaluation funding was $1,568,323.

The outcome evaluation involved a cross-sectional and longitudinal design. For the cross-sectional component, 5,935 eighth grade students in 11 different cities were surveyed to assess the effectiveness of GREAT. Schools that had offered GREAT within the last 2 years were selected, and questionnaires were administered to all eighth graders in attendance on a single day. This sample constituted a 1-year follow-up of 2 ex post facto groups: students who had been through GREAT and those who had not. A 5-year longitudinal, quasi-experimental component was conducted in 6 different cities. Schools in the 6 cities were selected purposively, to allow for random assignment where possible. Classrooms in 15 of 22 schools were randomly assigned to receive GREAT or not, whereas assignment in the remaining schools was purposive. A total of more than 3,500 students initially participated, and active consent was obtained for 2,045 participants. Students were surveyed 2 weeks before the program, 2 weeks after completion, and at 1-, 2-, 3-, and 4-year intervals after completion. Significant follow-up efforts were employed to maintain reasonable response rates. Concepts measured included attitudinal measures regarding crime, gangs, and police; delinquency; drug sales and use; and involvement in gangs, gang activities, and risk-seeking behaviors.
In addition, surveys were conducted with parents of the students participating in the longitudinal component, administrative and teaching staff at the schools in the longitudinal design, and officers who had completed GREAT training prior to July 1999.

Assessment of evaluation: Although conclusions from the cross-sectional component may be limited because of possible pre-existing differences between students who had been exposed to GREAT and those who had not, and because of a lack of detail about the statistical controls employed, the design and analyses for the longitudinal component are generally sound, including random assignment of classrooms to the intervention in 15 of the 22 schools, collection of baseline and extensive follow-up data, and statistical controls for differential attrition rates of participant and comparison groups.

Evaluation of Breaking the Cycle

A consortium of federal agencies, led by the Office of National Drug Control Policy and NIJ, developed the Breaking the Cycle (BTC) demonstration program in 3 sites to test the effectiveness of a comprehensive, coordinated endeavor to reduce substance abuse and criminal activity and improve the health and social functioning of drug-involved offenders. The first site, Birmingham, Ala., received funding in 1997, and the next 2 sites, Tacoma, Wash., and Jacksonville, Fla., received funding in 1998. Participants were adult arrestees (for any type of crime) who tested positive for drug use and had a history of drug involvement. The program was based on the recognition that there was a link between drug use and crime, and it had the support of many criminal justice system officials who were willing to use the authority of the criminal justice system to reduce drug use among offenders. BTC intended to expand the scope of earlier programs such as drug courts and Treatment Alternatives to Street Crime by incorporating drug reduction activities as part of handling felony cases. BTC included early intervention; a continuum of treatment options tailored to participants' needs, including treatment readiness programs in jails; regular judicial monitoring and graduated sanctions; and collaboration among justice and treatment agencies. Begun in 1997, with the final report completed in 2003, the evaluation was funded for $2,419,344 and included both outcome and process components.

Comparison groups were selected in each of the 3 sites and were composed of defendants similar to the BTC participants who were arrested in the year before BTC was implemented. The evaluation examined program success in (1) reducing drug use and criminal activity, as measured by self-reported drug use in the 6 months prior to follow-up interviews and officially recorded arrests in the 12 months after baseline; (2) improving the physical and mental health and family/social well-being of participants, as measured by self-reported interview data on problems experienced in these 3 areas during the 30 days before follow-up; and (3) improving labor market outcomes for participating offenders, as measured by self-reported interview data on employment and social difficulties in the 30 days before follow-up. Survey data were collected at baseline and again at two intervals between 9 and 15 months after baseline. At baseline, the sample sizes for the treatment and comparison groups were, respectively, 374 and 192 in Birmingham, 335 and 444 in Jacksonville, and 382 and 351 in Tacoma.
Response rates for the follow-up interviews varied across the 3 sites from 65 to 75 percent for the treatment groups and from 71 to 73 percent for the comparison groups. The method of assessment varied across sites and across samples, with some participants in both the comparison and treatment groups interviewed in person while others were interviewed by telephone. Multiple statistical analyses, including logistic regression, with controls for differences in demographics, offense history, substance abuse history, and work history between treatment and comparison groups, were used. BTC's effect on the larger judicial environment was also assessed, using official records on the number of hearings, case closure rates, and other factors.

Cost-benefit analyses of the BTC interventions were conducted at the three locations. The costs attributable to the BTC program were derived from budgetary information provided by program staff. The BTC program benefits were conceptualized as "costs avoided" arising from the social and economic costs associated with crime. The estimates of costs avoided in the study were based on (1) the costs (to society) associated with the commission of particular crimes and (2) the costs (to the criminal justice system) associated with arrests. Estimates of these components from the economic and criminal justice literature were applied to self-reported arrest data from the program and comparison group subjects. The derived estimates of benefits were compared to program costs to form cost-benefit ratios for the interventions. An earlier effort to incorporate estimates of savings in service utilization from BTC (as a program benefit) was not included in the final report analysis due to inconclusive results.

Assessment of evaluation: The evaluation was well designed and implemented. The study used comparison groups to isolate and minimize external factors that could have influenced the results. While the comparison groups were selected and baseline data collected 1 year before the treatment groups were selected, the study corrected for selection bias and attrition, using multivariate models that incorporated control variables to measure observed sample differences. The study appears to have successfully handled other potential threats to the reliability and validity of results by using appropriate statistical analyses to make adjustments. For example, the study relied on both self-reported measures of drug use and arrest histories and official records of arrests to assess the effects of the program. Self-report measures are subject to errors in memory or self-presentational biases, while official records can be inaccurate and/or incomplete. The evaluators made use of both the self-report and official measures to attempt to control for these biases. The methodological approach used in the cost-benefit analysis was generally sound. The report specified the assumptions underlying the cost and benefit estimates and appropriately discussed the limitations of the analysis for policymaking.
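The benefit-cost calculation used in the BTC study can be sketched in simplified form. The unit costs and arrest counts below are hypothetical and are not drawn from the BTC final report; the sketch only illustrates the general approach of applying per-crime and per-arrest cost estimates from the literature to a difference in arrests and comparing the resulting "costs avoided" with program costs.

# Simplified costs-avoided benefit-cost calculation, with hypothetical inputs.

cost_per_crime_to_society = 15000    # assumed average social cost per offense
cost_per_arrest_to_system = 5000     # assumed criminal justice processing cost

arrests_comparison_group = 220       # hypothetical 12-month arrest totals for
arrests_treatment_group = 170        # groups of roughly equal size

arrests_avoided = arrests_comparison_group - arrests_treatment_group
costs_avoided = arrests_avoided * (cost_per_crime_to_society + cost_per_arrest_to_system)

program_cost = 600_000               # hypothetical annual program cost
benefit_cost_ratio = costs_avoided / program_cost

print(f"Estimated costs avoided: ${costs_avoided:,}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")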
Evaluation of a Multi-Site Demonstration for Enhanced Judicial Oversight of Domestic Violence Cases

The Judicial Oversight Demonstration (JOD) initiative is a multiyear program being implemented at 3 sites (City of Boston/Dorchester District Court, Mass.; Washtenaw County, Ann Arbor, Mich.; and Milwaukee County, Wis.) to address the problem of domestic violence. JOD tests the idea that a coordinated community, focused judicial, and systemic criminal justice response can improve victim safety and service provision, as well as offender accountability. JOD emphasizes uniform and consistent responses to domestic violence offenses, including coordinated victim advocacy and services; strong offender accountability and oversight; rigorous research and evaluation components; and centralized technical assistance. Demonstration sites have developed partnerships with a variety of public and private entities, including victim advocacy organizations, local law enforcement agencies, courts, and other social service providers. The program began in fiscal year 2000, and demonstration sites are expected to receive funding for 5 years. A process evaluation began in January 2000. The outcome component of the evaluation began in October 2002 and is to be completed by October 2005. At the time of our review, the evaluation grant amount was $2,839,954.

Plans call for a full outcome assessment to be conducted in 2 sites and, because no appropriate comparison site could be identified, a partial assessment in the third site. The 2 sites with a full assessment were matched with comparison sites having similar court caseloads and population demographics; neither comparison site had a specialized court docket, enhanced judicial oversight, or a countywide coordinated system for handling domestic violence cases. Over 12 months, all domestic violence cases in each site, up to monthly size quotas, will be selected into the following groups: cases where the offender was found guilty and sentenced to jail for 6 months or less and probation or probation only, cases that were dismissed or diverted from prosecution, and cases where the offender received more than 6 months of incarceration. Victims and offenders in the first group will be interviewed, and in the second group, victims only will be interviewed. Offender recidivism in both groups will be tracked for 1 year following the intervention using police and court records. For the third group, only offender recidivism will be tracked. In the partial assessment site, subject to data availability, the plan is to compare a sample of domestic violence cases in which the offender was placed on probation in the period before JOD implementation with a sample of cases in which the offender was placed on probation and scheduled for judicial review in the period after JOD implementation. Data about incidents, victims, and offenders are to be obtained from official records, and offender recidivism will be tracked using police and court records. Overall, short-term outcomes for the study are planned to include various measures of offender compliance and victim and offender perceptions of JOD, and long-term outcomes are planned to include various measures of offender recidivism, victim well-being, and case processing changes. In addition, to discern any system-level changes due to JOD, aggregate annual data on all domestic violence cases for the 2 years prior to and 3 years after JOD implementation in all sites will be collected and analyzed.

Assessment of evaluation: The evaluation plan appears to be ambitious and well designed. A quasi-experimental design is planned, and data will be collected from multiple sources, including victims, offenders, and agencies. While lack of sustained cooperation, uneven response rates, and missing data could become problems, detailed plans seem to have been made to minimize these occurrences.
The planned approach of selecting cases (choosing equal numbers of cases consecutively until a monthly quota is reached, over a 12-month period) may be nearly as good as random sampling and takes into consideration seasonal variation. However, it could introduce biases, should there be variation as to the time each month when case selection begins.

Culturally Focused Batterer Counseling for African-American Men

The purpose of this study is to test the relative effectiveness of culturally focused versus conventional batterer counseling for African-American men. It is based on research indicating that conventional counseling dropout and partner re-assault rates are higher for African-American men than they are for white men, and on clinical literature in related fields that recommends culturally focused counseling to improve the effectiveness of counseling with African-American men. Culturally focused counseling refers to the counselor recognizing and responding to cultural issues that emerge in group sessions (including such topics as African-American men's perceptions of the police, relationships with women, sense of African-American manhood, past and recent experiences of violence, and reactions to discrimination and prejudice), and to a curriculum that includes the major cultural issues facing a particular group of participants. The setting for the evaluation is a counseling center in Pittsburgh, Pennsylvania. The evaluation began in September 2001, and the expected completion date is February 2005. At the time of our review, the grant amount was $356,321.

A clinical trial will be conducted to test the effect of culturally focused counseling on the extent to which African-American men drop out of counseling, are accused of re-assaults, and are re-arrested for domestic violence. Plans are for 600 African-American men referred by the Pittsburgh Domestic Violence Court over a 12-month period to batterer counseling at the counseling center to be randomly assigned to one of three groups: (1) a culturally focused counseling group of only African-Americans, (2) conventional batterer counseling in an African-American-only group, or (3) conventional counseling in a racially mixed group. Before assignment, however, the counseling center must recommend the men for participation in the study. Men included in the study will be administered a background questionnaire and two tests of culturally specific attitudes (i.e., racial acculturation and identity) at program intake. The men's female partners will be interviewed by phone 3 months, 6 months, and 12 months after program intake. These structured interviews will collect information on the woman's relationship with the man, the man's behavior, and the woman's help-seeking. Clinical records of program attendance and police records of re-arrests will be obtained for each man. Planned analyses are to include (1) verification of equivalent culturally focused and conventional counseling sub-samples at intake and during the follow-up; (2) comparison of the program dropouts, re-assaults, and re-arrests for the three counseling options at each follow-up interval and cumulatively; and (3) a predictive model of the re-assault outcome based on characteristics, cultural attitudes, and situational factors. Additionally, interviews with a sub-sample of 100 men about their counseling experience are to be conducted.

Assessment of evaluation: This is a well-designed experiment to test the effect of a new approach to providing counseling to perpetrators of domestic violence.
The researchers have plans to (1) adjust for any selection bias in group assignment and participant attrition through statistical analysis; (2) prevent "contamination" from counselors introducing intervention characteristics to control groups, or the reverse; and (3) monitor the response rates on the interviews with female partners. The evaluation is ongoing. The most recent progress report we reviewed indicated that the evaluation is proceeding as planned, with the recruitment of batterers behind schedule by 1 month, the series of female partner interviews on schedule and very close to expected response rates, and the interviews with the sub-sample of batterers about three-quarters complete. One potential concern we have is that because not all men referred by the domestic violence court to the counseling center may be recommended to participate in the study, any bias in recommending study participants will determine the population to which the study's results can be generalized.

Testing the Impact of Court Monitoring and Batterer Intervention Programs at the Bronx Misdemeanor Domestic Violence Court (Fund for the City of New York)

Operating since 1998, the Bronx Misdemeanor Domestic Violence Court handles spousal abuse misdemeanor cases. The court has the power to prescribe various conditions of discharge for batterers, including participation in group counseling and/or court monitoring. Given concerns about the effectiveness of these options, it was decided to test the efficacy of batterer counseling programs and court monitoring, alone and in combination with each other. Furthermore, court monitoring was tested based on the frequency of its administration—either monthly or on a graduated basis (less monitoring for fewer incidents of abuse). This was to ascertain whether graduated monitoring might give batterers more incentive to change. The evaluation began in September 2001 and is expected to be completed in August 2003. At the time of our review, this evaluation was funded for $294,129.

The proposed study is an outcome evaluation of 4 different treatment alternatives for conditional discharge defendants in domestic violence cases. The treatment options are (1) counseling program and monthly court monitoring, (2) counseling program and graduated court monitoring, (3) monthly court monitoring only, and (4) graduated court monitoring only. Participants in the evaluation (800 total) are to be assigned randomly to 1 of the 4 treatments at the time of sentencing, and incidents of new crimes are to be measured 6 and 12 months after sentencing. Official crime records at both intervals and interviews with victims at the 12-month interval are the sources of data. The planned analysis involves looking at the groups as a whole and at subgroups related to age, criminal history, and current charge. Outcome measures are (1) completion of the conditional discharge or imposition of the jail alternative, (2) new arrests for domestic violence, and (3) new reports from victims of domestic violence incidents.

Assessment of evaluation: This is a well-designed approach to measuring the comparative efficacy of combinations of program counseling and variations in monitoring. However, at the time of our review, we had some concerns about how well implementation will proceed. One concern is that if one or more of the treatments is less effective, it could result in participants spending time in jail, reducing the possibility of further incidents.
This difficulty can be addressed in the analysis, but neither the proposal nor subsequent progress reports discuss this or other differential attrition issues. Also, although the evaluators have a plan to try to ensure good response rates for the victims' survey, it is uncertain how effective they will be. Other surveys of similar populations have been problematic.

An Evaluation of Chicago's Citywide Community Policing Program

Chicago's community policing program, known as Chicago's Alternative Policing Strategy (CAPS), began in April 1993. The program reorganizes policing around small geographical areas where officers assigned to beat teams meet with community residents to identify and address a broad range of neighborhood problems. There were 2 evaluation efforts in this study, 1 examining the prototype project and the second examining citywide program implementation. The combined evaluations were completed in August 2001, at a total cost of $2,157,859.

The prototype evaluation, conducted between April 1993 and September 1994, compared five areas that implemented CAPS with four areas that did not. Data from the 1990 Census were used to select four sections of the city that closely matched the demographics of the five prototype areas. Residents of all areas were first surveyed in the spring of 1993 regarding the quality of police service and its impact on neighborhood problems. Follow-up interviews occurred in either June or September of 1994 (14- to 17-month time lags). Interviews were conducted by telephone in English and Spanish. The re-interview rate was about 60 percent. A total of 1,506 people were interviewed both times, an average of 180 in each prototype area and 150 in each comparison area.

The CAPS citywide evaluation began after the conclusion of the prototype evaluation in July 1994. The purpose of this evaluation was to assess how changing from a traditional policing approach to a community-centered approach would affect citizens' perceptions of the police, neighborhood problems, and crime rates. The researchers administered annual citywide public opinion surveys between 1993 and 2001 (excluding 2000). The surveys covered topics such as police demeanor, responsiveness, and task performance. Surveys were also administered to officers at CAPS orientation sessions to obtain, among other things, aggregate indicators of changes in officers' attitudes toward CAPS. Changes in levels of recorded crimes were analyzed. Direct observations of police meetings, surveys of residents, and interviews with community activists were used to measure community involvement in problem solving and the capacity of neighborhoods to help themselves.

Assessment of evaluation: The 1992 crime rates were reported to be similar between prototype districts and their matched comparison areas, and the baseline demographic measures used to match the two groups were basically similar. The initial and follow-up response rates of about 60 percent seem reasonable considering the likelihood of community mobility in these areas; however, attrition rates differed for various demographic characteristics, such as home ownership, race, age, and education, raising some concerns about whether the results are generalizable to the intended population. The follow-up time (14-17 months) was the maximum period allowed by the planned citywide implementation of CAPS. A single follow-up survey and the citywide implementation precluded drawing firm conclusions about longer-term impacts of the prototype program.
Because CAPS was implemented throughout the city of Chicago in 1995, the CAPS citywide evaluation was not able to include appropriate comparison groups and could not obtain a measure of what would have happened without the benefits of the program. The authors used a variety of methods to examine the implementation and outcomes of the CAPS program, and stated that there was no elaborate research design involved because their focus was on organizational change. However, because the trends over time from resident surveys and crime data were presented without controls or comparison groups, and some declines in crime began before the program was implemented, changes cannot be attributed solely to the program.

Evaluation of a Comprehensive Service-Based Intervention Strategy in Public Housing (Yale University School of Medicine)

The program was an intervention strategy designed to reduce drug activity and foster family self-sufficiency in families living in a public housing complex in the city of New Haven, Conn. The key elements of the intervention were (1) an on-site comprehensive services model that included both clinical (substance abuse treatment and family support services) and nonclinical components (e.g., extensive outreach and community organizing as well as job training and placement and GED high school equivalency certification) and (2) high-profile police involvement. The goals of the program were (1) increases in the proportion of residents entering and completing intervention services and (2) a reduction in substance-related activities and crime. The evaluation began in 1998 and was completed in 2000. The total evaluation funding was $187,412.

The intervention site was a public housing complex composed primarily of female head-of-household tenants and additional family members; the control site was another public housing complex on the opposite side of town, chosen for its similarities to the intervention site. The evaluation design was both process and outcome oriented and involved the collection of both qualitative and quantitative data. At baseline, a needs assessment survey was completed (n=175 at the intervention site and n=80 at the control site), and follow-up surveys with residents took place at 12 and 18 months post-intervention (no response rates reported). All heads of household at the sites were the target population for the surveys. The follow-up surveys, while administered in the same two sites, did not track the same respondents that were surveyed at baseline. Survey measures included access to social services; knowledge and reported use of social services; and residents' perceptions of the extent of drug and alcohol abuse, drug selling, violence, safety, and unsupervised youth in the community. The study also examined crime statistics obtained from the New Haven police department, at baseline and during the intervention.

Assessment of evaluation: The study had several limitations, the first of which was potential selection bias due to pre-existing differences between the sites, as well as considerable (and possibly differential) attrition in both groups, with no statistical control for such differences. Second, respondents may not have been representative of the populations at the housing sites. No statistical comparisons of respondents to nonrespondents on selected variables were presented. In addition, on the baseline survey, the response rates of the intervention and control sites differed substantially (70 vs. 44 percent, respectively).
Overall response rates were not reported for the follow-up surveys. Furthermore, implementation did not work smoothly (e.g., the control site received additional unanticipated attention from the police). Finally, the grantee proposed to track data on individuals over time (e.g., completion of services), but this goal was not achieved, in part because of the limited capability of project staff in the areas of case monitoring, tracking, and data management. Thus, although the intervention may have produced changes in the intervention site “environment” over time (aggregate level changes), it is not clear that the intervention successfully impacted the lives of individuals and families at the site. An Evaluation of Victim Advocacy with a Team Approach The program provides assistance to domestic violence victims in some police precincts in the city of Detroit. The domestic violence teams studied included specially trained police officers, police department advocates, legal advocates, and in one police precinct, an on-site prosecutor. The advocates assisted victims by offering information about the legal system, referrals, and safety planning. The outcome evaluation began in January of 1998 and the final report was completed in January of 2001. The grant amount was $153,491. The objectives of the study were to address the relationships between advocacy and victim safety and between advocacy and victims’ responses to the criminal justice system, using a quasi-experimental design to compare domestic violence cases originating in police precincts with and without special police domestic violence teams that included advocates. The study focused on assistance provided in 3 police precincts. Precincts not served by in-precinct domestic violence teams, but resembling the precincts with such teams in terms of ethnic representation and median income, were selected as comparisons. Data were collected using police records, county prosecutor’s office records, advocate contact forms, and telephone interviews with victims. Cases that met Michigan’s legal definition of domestic violence, had adult female victims, and were received in the selected precincts over a 4-month period in 1998 were eligible for the study. The cases were first identified by the police department through police reports and then reviewed for qualification by a member of the research team. A weekly quota of cases was selected from each precinct. If the number of qualified cases for a precinct exceeded the quota, then cases were selected randomly using a random numbers table. Outcomes included rates of completed prosecution of batterers, rate of guilty findings against batterers, subsequent violence against victims, victims’ perceptions of safety, and victims’ views of advocacy and the criminal justice process. Assessment of evaluation The study was severely affected by numerous problems, many of which the researchers acknowledged. First, the sample selection was based on incomplete or unreliable data, since police officers in writing reports often did not fully describe incidents, and precinct staff inconsistently provided complete case information about incidents to the researchers. Second, evaluators were not able to secure cooperation from domestic violence advocates and their supervisors at all service levels in providing reliable reports on service recipients and the type, number, and length of services. 
Additionally, most domestic violence team members were moved out of the precincts and into a centralized location during the period victims in the study were receiving services, thereby potentially affecting the service(s) provided to them. Further, the researchers were uncertain as to whether women from the comparison precincts received any advocacy services, thereby potentially contaminating comparisons between the precincts with the domestic violence teams and the comparison precincts. Finally, low response rates and response bias for data collected from victims were problems. The overall response rate for the initial round of telephone interviews was only about 23 percent, and the response rates for follow-up interviews were lower. Response rates were not provided separately for victims from the precincts with the domestic violence teams and the comparison precincts. As a result of the low response rates, the interviewed victims were identified as being less likely to have experienced severe physical abuse, less likely to be living with the abuser, and more likely to have a child in common with the abuser, compared to the victims in the sample who were not interviewed. Reducing Non-Emergency Calls to 911: An Assessment of Four Approaches to Handling Citizen Calls for Service DOJ's COPS office has worked with police agencies, the Federal Communications Commission, and the telecommunications industry to find ways to relieve the substantial demand on the current 911 emergency number. Many police chiefs and sheriffs have expressed concern that non-emergency calls represent a large portion of the 911 overload problem. Four cities have implemented strategies to decrease non-emergency 911 calls and have agreed to participate in the research. Those cities, each implementing a different type of approach, were Baltimore, Md.; Dallas, Tex.; Buffalo, N.Y.; and Phoenix, Ariz. A process and outcome evaluation was conducted between July of 1998 and June of 2000. The grant amount was $399,919. For the outcome component, the grantee examined whether (1) the volume of 911 calls declined following the introduction of the non-emergency call system; (2) there was a corresponding decline in radio dispatches, thus enhancing officer time; and (3) this additional time was directed to community-oriented policing strategies. The bulk of the design and analysis focused on Baltimore, with a limited amount of analysis of outcomes in Dallas and no examination of outcomes in the other two sites. The study compared rates of 911 calls before implementation of the new 311 system to rates of 911 and 311 calls after the system was implemented in both cities. In Baltimore, time series analysis was used to analyze the call data; police officers and sergeants were surveyed; the flow of 311 and 911 calls to Neighborhood Service Centers was examined; researchers accompanied police officers during randomly selected shifts in 3 sectors of Baltimore for 2 weeks; and citizens who made 311 calls during a certain 1-month time frame were surveyed. Assessment of evaluation The crux of the outcome analysis relies on the study of pre- and post-311 system comparisons, and the time series analysis done in Baltimore is sound. The rigor of several other parts of this study is questionable (e.g., poor response rates to surveys and short time frames for data from accompanying police officers on randomly selected shifts).
In addition, the choice of sites that NIJ required the grantee to examine, other than Baltimore, did not allow for a test of the study’s objectives. Although NIJ conducted pre-solicitation site visits to all 4 sites, at the time of the solicitation it still did not clearly know whether outcome data would be available at all the sites. As it turned out, outcome data were not available in Phoenix and Buffalo. Further, since the 311 system in Dallas was not implemented with the goal of reducing or changing call volume, it does not appear to be a good case with which to test the study’s objectives. Responding to the Problem Police Officer: An Evaluation of Early Warning Systems University of Nebraska – Omaha An Early Warning (EW) system is a data based police management tool designed to identify officers whose behavior is problematic, as indicated by high rates of citizen complaints, use of force incidents, or other evidence of behavior problems, and to provide some form of intervention, such as counseling or training to correct that performance. According to the current study’s national survey of local law enforcement agencies (LEA) serving populations of 50,000 or more, about one-quarter of LEAs surveyed had an EW system, with another 12 percent indicating that one was planned. One-half of existing EW systems have been created since 1994. Begun in 1998, the study was completed in 1999 and included process and outcome components, as well as a national survey. The total evaluation funding was $174,643. The outcome portion of the study was composed of case studies of EW systems in 3 large urban police departments (Miami-Dade, Fla.; Minneapolis, Minn.; and New Orleans, La.). Sites were selected judgmentally; each had functioning EW systems in place for a period of 4 or more years and had agreed to participate in the study. Both Miami-Dade and Minneapolis case studies examined official performance records (including citizen complaints in both sites and use of force reports in Miami-Dade) for officers identified by the department’s EW system, for 2 years prior to and after departmental intervention, compared to records for officers not identified. The participant groups included officers hired between 1990 and 1992 and later identified by the EW system (n=28 in Miami-Dade; n=29 in Minneapolis); the comparison groups included officers hired during the same period and not identified (n=267 in Miami-Dade; n=78 in Minneapolis). In New Orleans, official records were not organized in a way that permitted analysis of performance of officers subject to EW and a comparison group. The New Orleans case study, therefore, examined citizen complaint data for a group of officers identified by the EW system 2 years or more prior to the study, and for whom full performance data were available for 2 years prior to and 2 years following intervention (n=27). Assessment of evaluation The study had a number of limitations, many of them acknowledged by the grantee. First, it is not possible to disentangle the effect of EW systems per se from the general climate of rising standards of accountability in all 3 sites. Second, use of nonequivalent comparison groups (officers identified for intervention are likely to differ from those not identified), without statistical adjustments for differences between groups creates difficulties in presenting outcome results. 
Only in Minneapolis did the evaluators explicitly compare changes in performance of the EW group with changes in performance of the comparison group, again without presenting tests of statistical significance. Furthermore, the content of the intervention was not specifically measured, raising questions about the nature of the intervention that was actually delivered, and whether it was consistent over time in the 3 sites, or across officers subject to the intervention. Moreover, it was not possible to determine which aspects of the intervention were most effective overall (e.g., differences in EW selection criteria, intervention services for officers, and post- intervention monitoring), since the intervention was reportedly effective in all 3 departments despite differences in the nature of their EW systems. Also, no data were available to examine whether the EW systems had a deterrent effect on desirable officer behavior (e.g., arrests or other officer-initiated activity). Finally, generalizability of the findings in Miami-Dade and Minneapolis may also be limited, since those case studies examined cohorts of officers recruited in the early 1990s, and it is not clear whether officers with greater or fewer years of police experience in these departments would respond similarly to EW intervention. Evaluation of the Juvenile Justice Mental Health Initiative with Randomized Design University of Missouri - St. Louis. The Juvenile Justice Mental Health Initiative (JJMI) is a collaborative multi-agency demonstration project funded under an Office of Juvenile Justice and Delinquency Prevention grant, and administered by the St. Louis Mental Health Board, the St. Louis Family Court, and the Missouri Department of Health. The initiative provides mental health services to families of youths referred to the juvenile justice system for delinquency who have serious emotional disturbances (SED). The initiative involves parents and families in juvenile justice interventions, providing coordinated services and sanctions for youths who otherwise might shuttle between criminal justice and mental health agencies. Two new mental health programs were established under JJMI. The first, the Child Conduct and Support Program, was designed for families in which youths under the age of 14 do not have a history of serious, violent, or chronic offending. The second, Multi-systemic Therapy (MST), was designed for families in which youths aged 14 and above have prior serious, violent, or chronic delinquency referrals. The evaluation began in October 2001 and is expected to be completed in September 2003. At the time of our review, the evaluation was funded for $200,000. The study proposed to evaluate the two mental health programs using a random experimental design. Youths referred to the Juvenile Court are first screened for SED. Those who test positive or have prior diagnoses of SED (anxiety, depressed mood, somatic complaints, suicidal ideation, thought disturbance, or traumatic experience) are eligible for the JJMI programs. Eligible youth are randomly assigned to either one of the two treatment programs (depending on age) or to a control group. The evaluation includes a comparison of police contact data, court data, self-reported delinquency, and standardized measures of psychological and parental functioning. Potentially important demographic and social context variables, including measures of school involvement and performance, will be obtained from court records. 
Assessment of evaluation This is an ongoing, well-designed study. However, as implementation has proceeded, several problems that may affect the utility of the results have emerged. First, the researchers proposed to sample a total of 200 youths, with random assignment expected to result in approximately 100 juveniles in the treatment and comparison groups. The treatment group turned out to be much smaller than anticipated, however, because the randomization protocol and, subsequently, the MST program itself, were discontinued by the St. Louis Mental Health Board. At the time of termination, only 45 youths had been randomly assigned to the treatment group. The small number of subjects limits the extent of the analyses that can be conducted on this population. The Child Conduct and Support Program, designed to address the mental health needs of youth under the age of 14 without a history of serious offending, was never implemented by the providers contracted to develop the program. Eligible youth, of all ages, were instead assigned to the MST program. Thus, the evaluation will not be able to compare the relative effectiveness of programs specifically designed for younger and older juvenile offenders with SED. National Evaluation of the Rural Domestic Violence and Child Victimization Enforcement Grant Program The National Rural Domestic Violence and Child Victimization Enforcement Grant program, begun in fiscal year 1996, has funded 92 grants through September 2001 to promote the early identification, intervention, and prevention of woman battering and child victimization; increase victims' safety and access to services; enhance the investigation and prosecution of crimes of domestic violence and child abuse; and develop innovative, comprehensive strategies for fostering community awareness and prevention of domestic abuse. The program seeks to maximize rural resources and capacity by encouraging greater collaboration between Indian tribal governments, rural local governments, and public and private rural service organizations. The evaluation began in October 1998 and was completed in July 2002. This evaluation was funded at $719,949, and included both process and outcome components. Initially, 10 grantees (comprising 11 percent of the total number of program grantees) were selected to participate in the outcome evaluation; 1 was unable to obtain continuation funding and was dropped from the outcome portion of the study. Two criteria were used in the selection of grant participants: the "feasibility" of grantees visited in the process phase of the evaluation (n=16) to conduct an outcome evaluation; and recommendations from OVW, which were based on knowledge of grantee program activities and an interest in representing the range of organizational structures, activities, and targeted groups served by the grantees. Logic models were developed, as part of the case study approach, to show the logical or plausible links between a grantee's activities and desired outcomes. The specified outcome data were collected from multiple sources, using a variety of methodologies, during 2-3 day site visits (e.g., multi-year criminal justice, medical, and shelter statistics were collected from archival records where available; community stakeholders were interviewed; and grantee and victim service agency staff participated in focus groups). Assessment of evaluation This evaluation has several limitations.
First, the choice of the 10 outcome sites was skewed toward the technically developed evaluation sites and was not representative of all Rural Domestic Violence program grantees, particular project types, or delivery styles. Second, the lack of comparison groups makes it difficult to exclude the effect of external factors, such as victim safety and improved access to services, on perceived change. Furthermore, several so-called short-term outcome variables were in fact process variables (e.g., number of clients served, number of services provided, number of workshops conducted, and service capacity of community agencies). Moreover, it is not clear how interview and focus group participants were selected. Finally, pre- and post- survey data were not collected at multiple points in time to assess change, except at 1 site, where pre- and post-tests were used to assess increased knowledge of domestic violence among site staff as a result of receiving training. National Evaluation of the Domestic Violence Victims’ Civil Legal Assistance Program Institute for Law and Justice The Civil Legal Assistance (CLA) program is one of seven OJP grants (through OVW) dedicated to enhancing victim safety and ensuring offender accountability. The CLA program awards grants to nonprofit, nongovernmental organizations that provide legal services to victims of domestic violence or that work with victims of domestic violence who have civil legal needs. The CLA grant program was created by Congress in 1998. In fiscal year 1998, 54 programs were funded, with an additional 94 new grantees in fiscal year 1999. Approximately 85-100 new and continuation grants were anticipated in fiscal year 2000. The study began in November 2000 and was expected to be completed in October 2003. The proposed evaluation consisted of process and outcome components and the total evaluation funding at the time of our review was $800,154. The objective of the outcome evaluation was to determine the effectiveness of the programs in meeting the needs of the women served. The researchers proposed to study 8 sites with CLA programs. At each site at least 75 cases will be tracked to see if there is an increase in pro se (self) representation in domestic violence protective order cases, and a total of 240 victims receiving services will be surveyed (about 30 at each site). Focus groups of service providers will be used to identify potential program impacts on the justice system and wider community. Outcomes to be assessed include change in pro se representation in domestic violence protective order cases, satisfaction with services, and legal outcomes resulting from civil assistance. Assessment of evaluation The evaluation has several limitations. First, NIJ and the grantee agreed in 2002 not to utilize a comparison group approach whereby data would be collected from a set of comparison sites, due to concerns that investment in that approach would limit the amount of information that could be derived from the process component of the evaluation and from within-site and cross-site analyses of the selected outcome sites. Thus, the study will be limited in its ability to isolate and minimize the potential effects of external factors that could influence the results of the study, in part because it did not include comparison groups in the study design. At the time of our review, it was not yet clear whether sufficient data will be available from the court systems at each outcome site in order to examine changes in pro se representation. 
In addition, since victims would be selected for the surveys partially on the basis of willingness to be interviewed, it is not clear how representative the survey respondents at each site will be and how the researchers will handle response bias. It also appears that the victim interviews will rely to a great extent on measures that will primarily consist of subjective, retrospective reports. Multi-Site Demonstration of Collaborations to Address Domestic Violence and Child Maltreatment The Department of Health and Human Services and DOJ’s Office of Justice Programs are jointly funding 6 demonstration sites for up to 3 years to improve how 3 systems (dependency courts, child protective services, and domestic violence service providers) work with their broader communities to address families with co-occurring domestic violence (DV) and child maltreatment (CM). Funded sites must agree to implement key recommendations of the National Council of Juvenile and Family Courts Judges’ publication, “Effective Interventions in Domestic Violence and Child Maltreatment: Guidelines for Policy and Practice” (aka, the “Greenbook”). At a minimum, the sites need to implement changes in policies and procedures regarding screening and assessment; confidentiality and information sharing; safety; service provision; advocacy; cross-training; and case collaboration. The goals of the demonstration are to generate more coordinated, comprehensive, and consistent responses to families faced with DV and CM, resulting in increased safety and well-being for women and their children. The evaluation began in September 2000, and is expected to be completed around September 2004. At the time of our review, this evaluation was funded at $2,498,638, for both process and outcome components. The original evaluation proposal focused on various process elements as well as the effects of the intervention on perpetrator recidivism and the safety of women and children. In the second year, the evaluator realized that no site considered itself to be in the implementation phase and many of the original outcome indicators for children and families were not appropriate given the initiative time frame. The revised design in the funded third year proposal is therefore a systems-level evaluation. The analytic focus is now on how the 3 systems identify, process, and manage families with co-occurrence of DV and CM. A random sample of case records from before and after the introduction of the intervention will be used to document trends in identification of co-occurring cases of DV and CM over the course of the intervention. Stakeholder interviews conducted during site visits in fall 2001 and later during implementation, and analysis of agency documents, will be used to measure changes in policies and procedures. “Network analysis” of responses on the stakeholder interviews will be performed to measure changes in how key stakeholders work with others within and across systems. Supervisors and workers will also be asked, early in the implementation period and at the end of the initiative, to respond to vignettes describing hypothetical situations involving co-occurrence of DV and CM to see how they might respond to clients. Assessment of evaluation This evaluation has several limitations. First, the study objectives changed substantially from year 1 to year 3. The study is no longer examining outcomes for individuals, precluding conclusions about whether the implementation improved the lives of victims of domestic violence or their children. 
Second, it is not clear whether the evaluator will locate appropriate comparison data at this late stage, and without a comparison group, the study will not be able to determine (a) whether collaboration between systems improved (or weakened) because of the intervention or some extraneous factors and (b) whether collaboration resulted in increased capacity in the 3 systems to identify the co-occurrence of DV and CM, or whether these kinds of cases increased for reasons other than collaboration (e.g., perhaps identification of these cases is improving all over the country). Questions remain about the extent of data available for examining co-occurrence of DV and CM at the 6 sites. Corrections and Law Enforcement Family Support (CLEFS) Law Enforcement Field Test Since 1996 NIJ has funded, as part of the CLEFS program, 32 grants totaling over $2.8 million to law enforcement agencies, correctional agencies, and organizations representing officers (unions and membership associations) to support the development of research, demonstration, and evaluation projects on stress intervention methods. The stress intervention methods developed and studied have included stress debriefing and management techniques, peer support services, referral networks, police chaplaincy services, stress management training methods, spouse academies, and stress education programs. While NIJ purports to have developed state-of-practice stress reduction methods through these efforts, it acknowledges that very little outcome data have been generated. The evaluation began in June 2000 and is expected to be completed in June 2004. At the time of our review, the grant amount was $649,990. The study proposes to develop and field test a model to allow for the systematic evaluation of selected program components. The grantee worked with NIJ to identify the test sites and services to be evaluated, based on grant application reviews, telephone interviews, and site visits. Three police departments in Duluth, Minn.; North Miami Beach, Fla.; and Knoxville, Tenn. were selected. Baseline stress correlate data were collected during visits to the 3 sites between January 2002 and March 2002, and baseline officer and spouse/partner surveys were conducted during the same visits. Outcome data were to be collected at baseline (prior to actual program implementation), midway through the implementation, and toward the end of the evaluation. While the original proposal did not specify exactly what stress correlate or outcome data were to be collected, the grantee was considering looking at rates of absenteeism and tardiness, citizen complaints, rule and regulation violations, disciplinary actions, and premature retirements and disability pensions, as stress correlates. These were to be obtained from official agency records. Surveys included questions about program impacts on physical health, emotional health, job performance, job satisfaction, job-related stress, and family related stress. The evaluation also included baseline health screenings. It appears the evaluation plan has been modified to add supervisor surveys (there were none at baseline), and to incorporate group data collection efforts with officers, spouses, supervisors, and administrators. Assessment of evaluation The study has several limitations. First, the 3 study sites were chosen on the basis of merits in their proposal to implement a stress reduction or wellness program for officers, from 4 sites that submitted applications. 
There was no attempt to make the chosen sites representative of other sites with stress reduction programs or of police departments more generally. Second, the study will not make use of comparison groups consisting of similar agencies that did not implement stress reduction programs. It is unclear how effects of the interventions in these 3 sites over time will be disentangled from the effects of other factors that might occur concurrently. Third, the grantee will not collect individually identified data, and thus will only be able to analyze and compare aggregated data across time, limiting the extent of analysis of program effects that can be accomplished. Fourth, response rates to the first wave of officer surveys were quite low in 2 of the 3 sites (16 percent and 27 percent). In addition to the above, Tom Jessor, Anthony Hill, Stacy Reinstein, David Alexander, Michele Fejfar, Douglas Sloane, Shana Wallace, Judy Pagano, Kenneth Bombara, Scott Farrow, Ann H. Finley, Katherine Davis, and Leo Barbour made key contributions to this report.

Policy makers need valid, reliable, and timely information on the outcomes of criminal justice programs to help them decide how to set criminal justice funding priorities. In view of previously reported problems with selected outcome evaluations managed by the National Institute of Justice (NIJ), GAO assessed the methodological quality of a sample of completed and ongoing NIJ outcome evaluation grants. From 1992 through 2002, NIJ managed 96 evaluation studies that sought to measure the outcomes of criminal justice programs. Spending on these evaluations totaled about $37 million. Our methodological review of 15 of the 96 studies, totaling about $15 million and covering a broad range of criminal justice issues, showed that sufficiently sound information about program effects could not be obtained from 10 of the 15. Five studies, totaling about $7.5 million (or 48 percent of the funds spent on the studies we reviewed), appeared to be methodologically rigorous in both design and implementation, enabling meaningful conclusions to be drawn about program effects. Six studies, totaling about $3.3 million (or 21 percent of the funds spent on the studies we reviewed), began with sound designs but encountered implementation problems that would render their results inconclusive. An additional 4 studies, totaling about $4.7 million (or 30 percent of the funds spent on the studies we reviewed), had serious methodological limitations that from the start limited their ability to produce reliable and valid results. Although results from 5 completed studies were inconclusive, DOJ program administrators said that they found some of the process and implementation findings from them to be useful. We recognize that optimal conditions for the scientific study of complex social programs almost never exist, making it difficult to design and execute outcome evaluations that produce definitive results. However, the methodological adequacy of NIJ studies can be improved, and NIJ has taken several steps--including the formation of an evaluation division and funding feasibility studies--in this direction. It is too soon to tell whether these changes will lead to evaluations that will better inform policy makers about the effectiveness of criminal justice programs.
Closing out contracts involves a number of tasks, such as verifying that goods or services were provided, making final payment to the contractor, and deobligating excess funds. A contract is generally eligible to be closed once all option provisions have expired, the contractor has completed performance, and the government has accepted the final delivery of goods or services, or when the government has provided the contractor a notice of complete contract termination. From this point, contracts are considered physically complete, and should be closed within time frames set by the FAR—6 months for firm-fixed-price contracts and 36 months for flexibly priced contracts. The FAR prohibits the closing of contract files if the contract is in litigation, under appeal, or where the contract is being terminated and all termination actions have not been completed. Contract documents can be stored and retained after the contracting officer signs and files the contract completion statement. Additional time is allowed for the closeout of flexibly priced contracts because there are additional steps necessary to close out these types of contracts (see figure 1). Specifically, closing these contracts generally requires an audit and settlement of the contractor’s final indirect cost rates. Contracting officers and DCAA need to ensure all costs incurred by the contractor and charged to the government are allowable, allocable, and reasonable. Contracting officers also need to establish final indirect cost rates based on the contractor’s incurred costs, which determine, in part, the contractor’s final payment on flexibly priced contracts. Contractors are required by the FAR to submit proposals that include information on all of their flexibly priced contracts in a fiscal year. Once submitted, DCAA audits the proposal to determine if the costs incurred are reasonable, allowable and allocable to government contracts (see figure 2 for incurred cost audit process). There is not a one-to-one relationship between an incurred cost audit and an individual contract. In a single fiscal year, a contractor may incur costs on multiple flexibly priced contracts, and all of these contracts would be included in the proposal. The total value of the proposal, called the auditable dollar value (ADV), is the sum of all the costs on flexibly priced contracts for that contractor during the fiscal year. Additionally, since the period of performance on an individual contract may span several fiscal years, several audits may need to be conducted to provide the information necessary to close one flexibly priced contract. Further, DCAA may assess a contractor’s incurred cost proposal as inadequate for a variety of reasons, for example if the proposal is not certified or contains math errors, and request that the contractor review, revise, and resubmit its incurred cost proposal. The responsibility for closing out a contract resides with the DOD contracting officer within the military department or other defense agency that awarded the contract. The contracting officer may, however, delegate certain administrative responsibilities, including contract closeout, to DCMA, which provides contract administration services to DOD. DFAS also plays a role in activities related to contract closeout, such as paying final vouchers, and, when needed, resolving unreconciled balances on a contract. 
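To make the relationship described above concrete—one incurred cost proposal per contractor fiscal year, with its auditable dollar value (ADV) equal to the sum of the contractor's flexibly priced contract costs for that year, and with a contract spanning several fiscal years depending on the settlement of each of those years' proposals—the following is a minimal illustrative sketch in Python. The contractor names, contract numbers, dollar amounts, and data layout are hypothetical and are not drawn from DOD systems or this report's data.

from collections import defaultdict

# Hypothetical incurred-cost records: (contractor, fiscal year, contract id, flexibly priced cost).
records = [
    ("Contractor A", 2009, "W91-0001", 4_000_000),
    ("Contractor A", 2009, "W91-0002", 2_500_000),
    ("Contractor A", 2010, "W91-0001", 3_000_000),
    ("Contractor A", 2011, "W91-0001", 1_000_000),
]

# ADV: the sum of all flexibly priced contract costs a contractor incurred in a given
# fiscal year, i.e., the total value of that year's incurred cost proposal.
adv = defaultdict(int)
for contractor, fiscal_year, _, cost in records:
    adv[(contractor, fiscal_year)] += cost

def proposals_needed_to_close(contract_id):
    """Return the (contractor, fiscal year) proposals that must be settled before the
    contract can be closed, because its period of performance spans those years."""
    return sorted({(c, fy) for c, fy, cid, _ in records if cid == contract_id})

print(dict(adv))
# {('Contractor A', 2009): 6500000, ('Contractor A', 2010): 3000000, ('Contractor A', 2011): 1000000}
print(proposals_needed_to_close("W91-0001"))
# [('Contractor A', 2009), ('Contractor A', 2010), ('Contractor A', 2011)]

In this sketch, the hypothetical contract W91-0001 could not be closed until the proposals for fiscal years 2009 through 2011 were all settled, which illustrates why several audits may be needed to close a single flexibly priced contract.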
To facilitate contract closeout of flexibly priced contracts, federal regulations authorize the use of quick closeout procedures, by which a contracting officer can negotiate the settlement of direct and indirect costs on a specific contract, task order, or delivery order without waiting for the determination of final indirect cost rates for the contractor's fiscal year. To use the quick closeout procedure, several conditions must be met. For example, the amount of unsettled direct and indirect costs to be allocated to the contract, task order, or delivery order must be relatively insignificant—which is defined as costs that do not exceed the lesser of $1,000,000 or 10 percent of the total contract, task order, or delivery order amount. The contracting officer also must perform a risk assessment, determine that the use of the quick closeout procedure is appropriate, and consider such factors as the contractor's accounting, estimating, and purchasing systems, and any concerns of cognizant DCAA auditors. When the quick closeout procedure is used for a contract, determinations of final indirect costs are considered binding for the specific contract covered, but the rates used during this process are not considered binding when establishing the final indirect cost rates for other contracts. DCMA has issued a memorandum authorizing its contracting officers to close specific contracts prior to the establishment of indirect cost rates regardless of the dollar value of the contract or the percent of unsettled direct and indirect costs allocable to the contract. This memorandum—known as a class deviation from the FAR because it allows for actions that are inconsistent with the regulation—was signed by the agency director in October 2011 and extends through September 2013. According to DCMA officials, DCMA has had similar deviations in place since 1999. The policy allows a DCMA contracting officer to waive the requirement for an incurred cost audit, in consultation with DCAA, when a compelling reason exists. DCMA guidance indicates that compelling reasons may include contracts with funds at risk of canceling, contracts that have been over-age for 6 or more years, and contracts where a contractor's historical final indirect cost rates have been fairly consistent with proposed certified final indirect cost rates. Our prior work has highlighted challenges at DCAA as well as some of DOD's challenges in closing out contracts, particularly those awarded to support operations in Iraq and Afghanistan. In 2009, we found problems with DCAA's audit quality nationwide, including insufficient testing of contractors' support for claimed costs. We recommended that DCAA develop a risk-based contract audit approach across the agency that included identification of resource requirements. DCAA officials reported that, as a result of our findings, the agency now requires more testing and stricter compliance with government auditing standards, which adds to the amount of staff time required to complete each audit. In September 2011, we reported that DOD's ability to close the contracts it awarded to support efforts in Iraq was hindered by several factors, including the failure to plan for or emphasize the need to close these contracts until reconstruction efforts were well under way. We also found that DOD commands prioritized contract awards over other activities and that DOD does not have visibility into the number of Iraq contracts eligible for closeout.
We also reported that DOD’s efforts to close its large, cost-type contracts was hindered by staffing shortages at DCAA and unresolved issues with contractors’ cost accounting practices. We made recommendations to ensure DOD has sufficient resources to close its Iraq and Afghanistan We also found that DOD commands contracts and to better plan for and improve visibility of closeout efforts in future contingencies. In May 2012, DOD amended the Defense Federal Acquisition Regulation Supplement to require heads of contracting activities to monitor and assess on a regular basis the progress of contingency contract closeout activities and take appropriate steps if a backlog occurs. Our prior work has also identified some challenges at DCMA in relation to establishing indirect cost rates and workload. In November 2011, we reported that DCMA has been increasing its workforce and rebuilding key skills sets that had atrophied in recent years, such as cost and pricing capabilities. According to DCMA, loss of this skill set meant that many of the agency’s pricing-related contract administration responsibilities, such as establishing final indirect cost rates, were no longer performed to the same level of discipline and consistency as in prior years. As a result, DCMA stated that DOD’s acquisitions were subjected to unacceptable levels of cost risks. Both DCMA and DCAA have been increasing their workforce in recent years to address some of the challenges faced by the agencies. The challenges faced by DOD in closing out contracts are not recent phenomena. For example, in 2001, the DOD Inspector General issued a report that found weaknesses in the closeout process, including inadequate monitoring of contracts that could be closed, inattention to closure requirements, erroneous data about contracts available for closure, lack of coordination, lack of sufficient funding, a shortage of personnel, and untimely contractor input. reported that DOD made progress by closing about 30,000 contracts from February 2000 to March 2001, though over 26,000 became over-age during that same period. DOD Office of the Inspector General, Closing Overage Contracts Prior to Fielding a New DOD Contractor Payment System, D-2002-027 (Arlington, Va.: Dec. 19, 2001). To address the backlog of incurred cost audits, DCAA implemented a new, risk-based initiative in 2012 which focuses DCAA’s resources on incurred cost proposals that have high dollar values or are determined by auditors to be high risk. In doing so, DCAA will significantly reduce the number of audits performed on incurred cost proposals that are determined to be low risk. Under its risk-based initiative, DCAA raised the threshold by which an incurred cost audit is automatically performed on a contractor’s incurred cost proposal, revised the criteria used to determine a proposal low risk, and decreased the percentages of low risk proposals that will be randomly selected for audit. DCAA plans to track certain data to help assess progress in eliminating the backlog, but DCAA has not fully developed measures to determine whether key features of the initiative, such as revised criteria for determining a proposal is low risk and revised sampling percentages, should be adjusted in the future. 
Further, DCAA estimates it will reach a steady state of audits, which DCAA defines as two fiscal years of proposals awaiting review, by 2016, but whether DCAA will achieve its goals will depend on a number of factors, including the number of proposals determined to be high risk and the completion of subsequent audits. In 2012, DCAA began implementing a new, risk-based approach that is expected to shift DCAA’s resources to focus on incurred cost audits involving high-dollar value and high risk proposals. DCAA officials told us that in 2011, DCAA recognized that the number of audits that needed to be conducted exceeded the capacity of DCAA’s staff to do so. Accordingly, DCAA reported that incurred cost audits were not prioritized in fiscal year 2011 since they did not provide as many financial benefits as other audits, such as forward pricing audits, which are used to determine fair and reasonable rates for the award or modification of a contract. In developing the initiative, DCAA performed an analysis of which audits provided financial benefits by comparing how much money was saved or recovered by various types of audits (such as incurred cost and forward pricing audits) to how much money was invested in performing those audits. As a part of this analysis, DCAA officials found that the agency spent more in terms of staff resources to conduct incurred cost audits on proposals valued at less than $1 million than the financial benefits derived from the audits. DCAA’s analysis also determined that proposals valued over $1 million have provided more benefits than the cost to conduct them, with the benefits generally increasing as the value of the proposal increased. DCAA’s risk-based initiative includes key changes to its criteria and procedures that will decrease the number of audits conducted. These changes include (1) raising the threshold by which proposals automatically qualify for audit, (2) revising the criteria used to determine a proposal low risk, (3) lowering the percentage of low risk proposals to be randomly selected for audit, and (4) eliminating further review of proposals not selected for audit, and revising its adequacy review procedures to be more comprehensive. In addition to the changes to criteria and procedures, DCAA officials noted they plan to increase their staffing levels from 4,900 employees in 2011 to 5,600 by 2016. DCAA provided refresher training to its staff, and created 17 dedicated teams to incurred cost audit work. DCAA raised the threshold above which an audit is required based on ADV from $15 million to $250 million, thereby decreasing the number of proposals automatically qualifying for audit from 5,194 to 659, based on the backlog as of the end of fiscal year 2011 (see table 1). Proposals under the $250 million threshold will not be audited unless they are determined to be high risk or randomly selected for audit. In conjunction with raising the threshold by which incurred cost proposals were automatically selected for audit, DCAA revised two of the three criteria that a contractor’s incurred cost proposal under the $250 million threshold must meet to be determined low risk (see table 2). For example, under DCAA’s prior criteria, DCAA must have performed an incurred cost audit within the past three years; under the new initiative, for proposals $100 million or under, the time frame for conducting the last incurred audit was eliminated—the requirement is now that the contractor has had at least one incurred cost audit. 
These changes will increase the potential number of contractor proposals that are eligible for low risk determinations. The other criterion—audit leads or other significant risks—did not change. However, DCAA did provide several examples of the types of risks that should be considered under this criterion, such as known business system deficiencies or risks identified by the contracting officer. Once risk has been determined, proposals determined to be low risk will be randomly sampled at DCAA's five regional offices based on the proposal's ADV, but now at a lower percentage than before. Currently, between 1 and 20 percent of low risk proposals are sampled depending on ADV, whereas under the previous procedure 33 percent of low risk proposals were sampled (see table 3). Under DCAA's risk-based initiative, low risk proposals that are not selected for audit are not subject to any further review, whereas previously all proposals not selected for audit were subject to desk reviews. Desk reviews included an evaluation of the proposal for unusual items and changes from prior year proposals, among other actions. Now, when a low risk proposal is not selected for audit, DCAA auditors issue memorandums to the contracting officers recommending that the contracting officer use his or her authority to determine the contractor's final indirect cost rates, which allows the contracting officer to proceed with closing the contract. However, in November 2011, DCAA issued revised guidance to determine whether a contractor's proposal is adequate, and DCAA officials explained that the revised adequacy review provides a more comprehensive determination that includes many areas previously covered in the desk review process. A summary of DCAA's revised incurred cost audit procedures is outlined in figure 3. By revising its policies and procedures and dedicating resources to incurred cost audits, DCAA estimates it will reduce its backlog and reach a steady state by 2016, which it defines as having two fiscal years of incurred cost proposals awaiting review. DCAA officials note that because low risk proposals are randomly sampled, the possibility of an audit is still present, which deters contractors from reporting inaccurate information. However, DCAA has not yet fully developed measures to evaluate the initiative's results and assess whether the changes will require further adjustments. For example, DCAA is planning to track the number of risk determinations completed, the numbers of proposals deemed high and low risk, and the number of audits completed. Further, DCAA regional offices will be responsible for monitoring risk determinations on an ongoing basis to ensure that they are completed in a timely manner and to identify any field audit offices that have a significantly higher or lower percentage of high risk determinations than other field audit offices. However, DCAA has not determined how to assess whether the revised criteria for determining a proposal's risk level or the revised sampling percentages are appropriate or should be adjusted in the future. DCAA stated that it plans to reassess the initiative in about a year, but did not provide details on what would be assessed at that time. Internal control standards require the establishment of clear, consistent objectives and the identification and analysis of what measures will be used to determine if an agency is achieving those objectives.
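As a rough illustration of the revised triage flow summarized above and in figure 3, the following sketch walks a single proposal through the $250 million automatic-audit threshold, a simplified low risk test, and random sampling of low risk proposals. The function and variable names are ours, the low risk test is a simplification of the criteria summarized in table 2, and the sampling schedule passed in is a placeholder for the 1 to 20 percent bands in table 3 rather than DCAA's actual rates.

import random

HIGH_ADV_THRESHOLD = 250_000_000  # proposals with ADV at or above $250 million are always audited

def triage_proposal(adv, has_prior_incurred_cost_audit, has_audit_leads_or_other_risks,
                    sampling_rate_for_adv, rng=random.random):
    # Automatic audit for high dollar value proposals.
    if adv >= HIGH_ADV_THRESHOLD:
        return "audit (ADV at or above $250 million)"
    # Simplified low risk test: some incurred cost audit history and no audit leads
    # or other significant risks (e.g., known business system deficiencies).
    is_low_risk = has_prior_incurred_cost_audit and not has_audit_leads_or_other_risks
    if not is_low_risk:
        return "audit (determined high risk)"
    # Low risk proposals are randomly sampled for audit at a rate tied to ADV.
    if rng() < sampling_rate_for_adv(adv):
        return "audit (low risk, randomly selected)"
    return "no audit; memorandum to contracting officer to establish final indirect cost rates"

# Placeholder sampling schedule; the actual bands and percentages appear in table 3.
def example_sampling_rate(adv):
    return 0.20 if adv > 100_000_000 else 0.05

print(triage_proposal(12_000_000, True, False, example_sampling_rate))

The sketch mirrors the order of the steps described above: the dollar threshold first, then the risk determination, then sampling; only the final branch results in the memorandum that lets the contracting officer proceed toward closing the contract.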
Additionally, it is too early to tell whether DCAA will achieve its goal of eliminating the backlog by 2016, in part because DCAA does not yet know how many proposals under $250 million ADV will be determined low or high risk and its initial estimates have proven inaccurate. DCAA reports that its auditors have completed risk assessments on 13,522 contractor proposals that had an ADV of less than $15 million—out of a universe of approximately 20,000 proposals—as of September 2012. Of 13,522 risk assessments completed, DCAA determined that 7,815 proposals were high risk, or about two-and-a-half times more than anticipated. DCAA determined that the number of high risk proposals is higher than expected because over 3,500 of those proposals belong to contractors with no incurred cost audit history. DCAA’s backlog includes multiple proposals covering several fiscal years for some contractors. DCAA officials stated the agency plans to audit older proposals first, thus contractors’ proposals for later years may become eligible for low risk status once an audit has been conducted for a single fiscal year and an audit history is established. DCAA’s ability to reach a steady state by 2016 will also depend on whether DCAA completes its audits within anticipated time frames. However, DCAA was not able to complete the number of audits it planned to in 2012. Specifically, DCAA planned to address 4,065 incurred cost proposals in fiscal year 2012 by, for example, completing an audit or desk review, but the agency reported that it addressed 2,930 as of the end of September 2012. DCAA’s efforts to reduce its incurred cost backlog will remove one factor hindering efforts to close out flexibly priced contracts; however, DOD is also hindered by limited data and performance metrics on contract closeout efforts. The military departments generally do not have data on the extent or nature of their contract closeout backlog, while DCMA is missing key information that would allow it to identify contracts on which it could take action. Additionally, the military departments generally lack performance metrics on contract closeout. For example, in November 2012, the Army announced its intent to close 475,000 over-age contracts by September 2014, but was still in the process of having its commands identify interim goals and did not yet have a final detailed implementation plan, while the Navy and Air Force have no department-wide metrics. In contrast, DCMA has established two agency-wide performance metrics related to contract closeout with regular reporting to the head of the agency on progress in meeting goals. Our work identified some efforts at local contracting offices to focus on contract closeout, but DOD has made limited use of quick closeout procedures— a tool that can be used to expedite the closeout of flexibly priced contracts. Data on the extent and nature of the contract closeout backlog can help organizations identify or tailor approaches to address the backlog. However, we found that the military departments had difficulty providing reliable data on the size of their backlogs and did not have information on where the contracts were in the closeout process. For example, we requested data from each of the military departments on the number of contracts in the backlog, type of contract, and what contracting office was administering the contract. 
The military departments took the following steps to provide the data: The Army pulled data from its centralized data repository, but there were large discrepancies between the data provided by headquarters and data reported at the commands and local contracting offices. For example, according to the data provided by headquarters, one contracting office had about 30,000 over-age contracts, yet officials at the office reported about 3,700 over-age contracts. The Air Force also pulled data from its centralized data repository for one command, but the command had to verify the data with its local contracting offices and acknowledged it needed to make adjustments based on the input from those offices before providing the data to us. Air Force officials told us that providing the data for the remaining commands would require significant effort because they would need to go through a similar verification process. Navy officials told us they did not have a centralized data repository, so the Navy requested data from its local contracting offices. At the local level, seven out of the nine contracting offices we spoke with collected some information about their over-age contracts, such as the total number of contracts in the backlog and the type of contracts, but the offices generally were unable to provide us with detailed information as to where the contracts were in the closeout process, such as the number awaiting a DCAA incurred cost audit. DCMA collects data through its Mechanization of Contract Administration Services (MOCAS) system on where its over-age contracts are in the contract closeout process, but it is missing key information that would allow it to identify contracts that it could take action on. For example, DCMA's data showed that as of July 2012 the agency had approximately 36,000 contracts awaiting closeout, including 28,000 that were awaiting establishment of final indirect cost rates. However, of these 28,000 contracts, DCMA did not know how many were awaiting a contractor's submission of an adequate incurred cost proposal, a DCAA incurred cost audit, or final negotiation of rates. This information is important because it identifies who is responsible for moving the contract forward in the closeout process—for example, DCMA contracting officers may be able to advance the closeout of contracts awaiting final negotiation of rates. DCMA officials we spoke with thought that the majority of these contracts were awaiting DCAA's audit, and thus would be addressed by DCAA's initiative. In addition, DCMA's MOCAS system was missing data on the reasons why contracts are over-age for about 1,700 contracts. DCMA officials told us the agency recently enhanced its efforts to identify and close contracts within the agency's control due to an increase in the number of over-age contracts in fiscal year 2012. DCMA officials reported that specific categories of contracts are being targeted for closure, such as firm-fixed-price contracts with no outstanding obligations. The limited visibility into the characteristics of the contract closeout backlog, particularly at the military departments, made it challenging for officials to fully assess the extent to which specific efforts to reduce the backlog would impact their over-age contracts. For example, we asked DOD officials for their views on restoring the authority of the head of an agency to close out a contract that is administratively complete, was entered into 10 or more years ago, and has an unreconciled balance under $100,000.
Many DOD officials we spoke with at various levels within the military departments and within DCMA stated they did not believe the option would affect the backlog, as they did not believe their contracting offices would have many contracts to which this option would apply. For example, based on DCMA's over-age contract data, DCMA officials estimated that as of July 2012, only 85 of the approximately 36,000 contracts in their backlog would meet the criteria of the option. However, officials at the military departments, commands, and local contracting offices that we asked generally could not provide specific numbers as to how many contracts this option would impact. Further, several officials noted that a similar temporary authority was granted in the mid-2000s that they believed was ineffective, in part because DOD established a process and administrative requirements that they termed burdensome. For example, DFAS officials reported that the authority was only used to close out 14 contracts. Similarly, when we asked how useful it would be if legislation authorized a contracting officer to waive final payment in a case where a contractor has gone out of business and cannot be reached, DOD officials reported that this situation occurs infrequently. Yet DOD officials at several locations we reviewed said that they did not have data readily available on how often they had encountered this situation. Further, some officials noted that a contracting officer can use the authority in the FAR to unilaterally close a contract for this purpose. A DOD guidebook outlines procedures that contracting officers use to close a contract where the contractor has gone out of business. Performance measures, which compare actual performance against planned results, can be an important tool in demonstrating leadership commitment and maintaining adequate internal controls. We found that the Army recently communicated a goal to its commands for closing over-age contracts, but the Navy and Air Force did not have established performance metrics for closing out contracts within their organizations. In November 2012, the Office of the Deputy Assistant Secretary of the Army (Procurement) sent an e-mail to its commands identifying a goal of closing over 475,000 over-age contracts by September 2014. The office requested that each command provide information on its over-age contracts by the end of November 2012, including identifying contracts that may be suitable to be grouped together and closed at the command level, such as low dollar contracts, and identifying contracts that fit into other priority categories, such as certain contracts with expiring funds. Further, the office directed the commands to establish monthly closeout goals and describe challenges that may impact the command's ability to meet the Army's September 2014 goal. According to an Army official, an implementation plan for this effort has been drafted, but the plan has not yet been approved. Although the Navy did not have department-wide performance metrics related to contract closeout, one Navy command established a goal of decreasing the contract closeout backlog from approximately 23,000 contracts to 13,000 contracts over fiscal year 2012, and reported it had exceeded that goal before the end of the year. Within the Air Force, headquarters officials told us in November 2012 that they plan to begin regular collection of data on contract closeout statistics starting in January 2013.
In contrast, DCMA had established two agency-wide performance measures related to contract closeout for contracts for which it has been delegated contract administration responsibilities. For example, one of DCMA’s closeout measures looks at the total number of over-age contracts in the agency, with a target of reducing the total number of over-age contracts by 10 percent during a fiscal year. The measures are reviewed at a number of levels within DCMA, including a briefing to the DCMA Director approximately twice a year, and monthly reviews at each of the contract management offices. When the targets for each measure are not met, DCMA officials conduct a root cause analysis to identify the reasons why, as well as to identify potential methods for addressing the issues.

While many DOD officials acknowledged that contract closeout is not a priority compared to other mission-critical activities, we found some contracting offices were taking actions locally to address their contract closeout backlogs. For example, one Army contracting office made its contract closeout process more centralized, added two new staff, and tracked the number of contracts it closed. Officials at this office reported closing over 14,000 of a reported 20,000 low-dollar firm-fixed-price contracts in the office’s backlog over the past year. Further, we found that four contracting offices had established contract closeout teams whose work is focused on contract closeout activities. Officials from three of these offices noted that their office prioritizes closing out contracts when funds are close to canceling to preserve funds for other uses. Further, DOD has established a contract with the AbilityOne program for contract closeout support services to help address some of the contract closeout backlog. According to DOD officials, this contract is limited to closing firm-fixed-price contracts across the department, whether over-age or not. AbilityOne representatives reported that the contractor has already provided contract closeout support services for over 50,000 contracts across DOD. DOD stated that using AbilityOne allows contracting officers to focus on other duties and mission-critical work such as awarding contracts; however, some DOD officials noted limitations, including the administrative burden on the contracting officer to locate all appropriate documentation to forward to the AbilityOne contractor, and ensuring AbilityOne staff have the proper clearance and access for the contractors. While AbilityOne’s efforts are currently limited to firm-fixed-price contracts, senior DOD officials told us they are looking into the feasibility of contracting with AbilityOne for contract closeout services on flexibly priced contracts.

Our work found that DCMA and the contracting offices we reviewed made only limited use of quick closeout procedures. For example, even though DCMA has a policy, based on a FAR deviation, that allows for broader use of quick closeout procedures than what is allowed under the FAR, and DCMA’s guidance recommends that contracting officers use quick closeout where applicable, the two DCMA contract management offices we reviewed made little or no use of quick closeout procedures. Officials at DCMA headquarters and one contract management office said they were unsure why quick closeout is not used more often.
Officials at the other DCMA contract management office we spoke with told us one reason they had not made more use of DCMA’s policy related to quick closeout procedures is that DCMA did not define what can be considered a compelling reason to waive an audit until December 2011. Further, once they started to assess eligibility, this office encountered challenges in identifying contractors that they considered to be eligible for the quick closeout procedure. Specifically, the office initially identified 1,489 contracts with 32 contractors that may be eligible for quick closeout procedures. However, after further analysis, officials reported that 463 contracts with 7 contractors were deemed potentially eligible. Contractors were excluded from eligibility due to issues such as contractor billing problems, DCAA concerns about the contractor, or contractor delays in submitting an incurred cost proposal. Similarly, none of the nine military department contracting offices reported using quick closeout procedures currently, although three reported minimal use in the past. For example, one contracting office estimated using quick closeout procedures for about 40 to 50 contracts in fiscal years 2010 and 2011. Officials at two other contracting offices reported using quick closeout on a handful of contracts in the past but are not making use of it now.

With regard to the efficacy of the May 2011 change to the FAR that was intended, in part, to increase the use of quick closeout, officials at the two contracting offices that previously used quick closeout procedures reported that the May 2011 change reduced the number of eligible contracts by including direct costs in the eligibility criteria. Officials from one of these contracting offices explained that the previous language allowed the contracting officer to waive the dollar threshold based on a risk determination. Since the current language removed the waiver, officials explained that the contracting officer is no longer able to make a business decision for contracts above the dollar threshold to determine an acceptable level of risk. DCMA and contracting officials we interviewed also noted that other challenges to the use of quick closeout procedures included a lack of audit history by which to determine what a contractor’s rates should be. DOD officials told us there was sometimes a reluctance to use quick closeout procedures because contracting officers are uncertain of the risk they are taking on and concerned that their decisions may be questioned later by others.

During our interviews with DOD officials, we asked about the advisability and feasibility of authorizing a contracting officer, in consultation with DCAA, to waive the requirement for an audit in the case of a low-risk, low-cost contract to assist in closing out contracts—one of the options we were asked to consider in our review. Officials throughout DOD—at DCMA, military commands, military department contracting offices, and others—told us they believed this option was similar to quick closeout procedures, a tool already available to them.
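The screening the DCMA office described, which narrowed 1,489 contracts with 32 contractors down to 463 contracts with 7 contractors by excluding contractors with billing problems, DCAA concerns, or late incurred cost submissions, is in effect a filter applied to a candidate list. The Python sketch below illustrates that kind of filter; the contractor names and flag fields are invented and do not reflect DCMA's actual criteria or data.

```python
# Invented contractor records; the flag names are illustrative, not DCMA's fields.
contractors = {
    "Contractor A": {"billing_problems": False, "dcaa_concerns": False, "late_submission": False},
    "Contractor B": {"billing_problems": True,  "dcaa_concerns": False, "late_submission": False},
    "Contractor C": {"billing_problems": False, "dcaa_concerns": True,  "late_submission": True},
}

DISQUALIFIERS = ("billing_problems", "dcaa_concerns", "late_submission")

# A contractor remains a quick-closeout candidate only if none of the
# disqualifying flags is set.
eligible = [name for name, flags in contractors.items()
            if not any(flags[flag] for flag in DISQUALIFIERS)]

print(eligible)  # ['Contractor A']
```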
In response, DCAA launched a risk-based approach to focus its resources on audits that are considered to be high risk or high dollar value. Such an approach appears prudent and shows promise, but its success will depend upon reducing the audit backlog in a way that protects the taxpayers’ interests. DCAA, however, has not developed a plan with measures to assess progress toward achieving these goals. DCAA must ensure that the key changes to its criteria and procedures—such as increasing thresholds for audit, revising its risk determination criteria, and decreasing sampling percentages for low risk proposals—adequately direct resources to audits that will provide the greatest benefit to the taxpayer. Without a plan that includes appropriate measures, DCAA will not be well-positioned to assess whether the initiative is achieving its goals. Reducing the incurred cost audit backlog should enable more flexibly priced contracts to be closed out, but the extent to which it will do so is uncertain. As there is not a one-to-one relationship between an incurred cost audit and a specific contract, and there are additional steps that need to be taken to close a contract, there is likely to be a lag in closing out contracts even if DCAA is successful in its efforts. Within the military departments, limited data on the extent of the backlog or the reasons why contracts are in the backlog hinders their ability to develop targeted approaches, with goals and performance metrics, to address over-age contracts. The Army is just starting to collect the information necessary to determine if it can realistically meet its goal of closing over-age contracts and has not issued an implementation plan. The Navy and the Air Force, while facing similar data issues, have no department-wide performance metrics. Even DCMA, which does have performance measures in place as well as some information on where the contract is in the closeout process, has incomplete information on who has responsibility for moving the contract forward in the closeout process. One technique—the use of quick closeout procedures—has been available for a number of years, but we found little evidence that any organization has made significant effort to use it, either before or after the May 2011 change in federal regulations. Until DOD prioritizes closing contracts in a timely fashion and underscores the need to do so by improving the availability of accurate data and establishing performance measures, it will not see a significant reduction in its contract closeout backlog. To improve DCAA’s ability to assess whether its incurred cost backlog initiative is achieving the objectives of reducing the incurred cost audit backlog while continuing to protect the taxpayer’s interests, we recommend that the Director, DCAA, develop a plan that includes time frames and measures to assess progress towards achieving its objectives, and as appropriate, to identify how it will assess whether the changes in DCAA’s procedures and criteria are appropriate or require further revisions. To increase visibility and enhance management attention on closing out contracts within their departments, we recommend that the Secretaries of the Navy and Air Force, respectively, develop baseline data and performance measures for closing out contracts, including consideration of the use of quick closeout procedures, as appropriate. 
To facilitate the closeout of contracts within the Army, we recommend the Secretary of the Army ensure that the Army’s contract closeout implementation plan includes baseline data and performance measures, and includes consideration of the use of quick closeout procedures, as appropriate. To enable DCMA to better identify contracts that may be closed out, we recommend that the Director, DCMA, take steps to ensure the data in DCMA’s contract information system on who has responsibility for moving the contract forward in the closeout process is complete.

DOD provided written comments on a draft of this report. DOD concurred with the four recommendations and identified a number of ongoing and planned actions to address them. For example, DCAA stated that by March 2013 it will develop a more detailed plan to monitor and assess its progress towards achieving the objectives of its initiative. DCAA stated that this plan will include time frames and measures to determine whether its current criteria and procedures will require future modification. Further, DCAA identified several factors that this plan will include, such as an analysis of high risk determinations and audit results to determine if some criteria are better indicators of risk than others and an analysis of DCAA’s return on investment to determine if revisions to the sampling variables are needed. Each of the military departments identified actions that they would take to increase visibility and management attention on closing out contracts within their departments. For example, the Navy plans to collect data and internal policies and procedures on closing out contracts, including the use of quick closeout procedures, from its contracting activities as an initial step in developing Navy-wide baseline data and performance measures for closing out contracts. The Air Force stated that it plans to place additional emphasis on aging contracts at a joint forum focused on high priority issues between DCMA, DCAA, and the departments. The Army concurred with our recommendation and reiterated its goal of eliminating approximately 475,000 over-age contracts by the end of fiscal year 2014, but did not provide additional details on how baseline data, performance measures, or consideration of the use of quick closeout procedures would be integrated into its detailed implementation plan. We believe that doing so would facilitate the Army’s efforts and enable it to assess progress. In response to our recommendation to better identify contracts that may be closed out, DCMA plans to begin requiring the use of a code within its MOCAS system that will clearly identify who is responsible for the next step in the closeout process, and to ensure the code is properly entered into the system. DOD’s comments are reprinted in appendix II.

We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Director, Defense Contract Audit Agency; the Director, Defense Contract Management Agency; appropriate congressional committees; and other interested parties. This report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
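The MOCAS code DCMA plans to require, identifying who is responsible for the next step in the closeout process, is what would let the agency split its backlog into actionable queues. A minimal sketch of that grouping follows; the record layout and code values are assumptions for illustration, not the actual MOCAS design.

```python
from collections import Counter

# Hypothetical backlog records; MOCAS's actual fields and code values differ.
backlog = [
    {"contract": "C-0001", "next_step": "contractor_proposal"},
    {"contract": "C-0002", "next_step": "dcaa_audit"},
    {"contract": "C-0003", "next_step": "aco_negotiation"},
    {"contract": "C-0004", "next_step": None},  # code not yet entered
]

# Count contracts by who must act next; blank codes show the remaining visibility gap.
queues = Counter(rec["next_step"] or "not coded" for rec in backlog)

for owner, count in sorted(queues.items()):
    print(f"{owner}: {count}")
# Contracts coded 'aco_negotiation' are ones a contracting officer could act on now;
# 'not coded' records need the missing entry before they can be assigned.
```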
The Senate Armed Services Committee report accompanying the National Defense Authorization Act for Fiscal Year 2012 directed us to review the Defense Contract Audit Agency’s (DCAA) criteria and procedures for conducting incurred cost audits and recommend steps DCAA could take to reduce the backlog, and to consider the feasibility and advisability of three options aimed at reducing the contract closeout backlog. The three options we were asked to consider were (1) restoring the authority of the head of an agency to close out a contract that is administratively complete, was entered into 10 or more years ago, and has an unreconciled balance of less than $100,000; (2) authorizing the contracting officer, in consultation with DCAA, to waive the requirement for an incurred cost audit in the case of a low risk, low-cost contract; and (3) authorizing the contracting officer to waive final payment in a case where the contractor has gone out of business and cannot be reached. It also asked us to assess the efficacy of a May 2011 change to the Federal Acquisition Regulation (FAR) that was intended, in part, to increase the use of quick closeout procedures. In response, this report addresses (1) DCAA’s efforts to reduce the backlog of incurred cost audits, and (2) the challenges the Department of Defense (DOD) faces in addressing the contract closeout backlog. Included within the scope of the second objective was a consideration of the three options outlined in the Committee report as well as the use of quick closeout procedures. To conduct our work for each objective, we reviewed relevant sections of the FAR, including FAR Subpart 4.804, Closeout of Contract Files and FAR Subpart 42.1, Contract Audit Services, and the Defense Federal Acquisition Regulation Supplement (DFARS), including DFARS Subpart 242, Contract Administration and Audit Services. We also reviewed DOD policies, such as the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics memorandum on Increasing Contracting Opportunities with the AbilityOne Program. We also reviewed prior GAO and DOD Inspector General reports pertaining to challenges at DCAA and the Defense Contract Management Agency (DCMA), and contract closeout at DOD, including contract closeout in a contingency environment. And finally, we reviewed GAO’s Standards for Internal Controls in the Federal Government. To assess DCAA’s efforts to reduce the backlog of incurred cost audits, we reviewed applicable sections of federal regulations and DCAA’s Contract Audit Manual to identify criteria and procedures for selecting and conducting incurred cost audits. We obtained and reviewed data from DCAA’s management information system on the agency’s incurred cost audit backlog. To assess the reliability of DCAA’s data on its incurred cost audit backlog, we reviewed related documentation, interviewed knowledgeable agency officials, looked for obvious inconsistencies in the data, and verified the accuracy of the data when necessary. From these efforts, we believe the information obtained is sufficiently reliable for this report. We also reviewed documentation on DCAA’s incurred cost audit initiative, such as DCAA guidance on its revised criteria and procedures for risk assessment, agency memorandums related to the timing and implementation of the new sampling procedures, new forms for documenting risk determinations, and data on which proposals were selected for audit. 
We reviewed analyses and projections DCAA used to establish its goal of becoming current on incurred cost audits by 2016. We interviewed senior DCAA officials responsible for the incurred cost audit initiative, and DCAA auditors from three field offices to obtain a better understanding of the process and considerations for determining risk for the contractor’s incurred cost proposals. We selected the three DCAA field offices that had completed the largest number of risk assessments under the new incurred cost audit initiative as of September 2012. We also interviewed officials at Defense Procurement and Acquisition Policy (DPAP) and DCMA to get their perspective on DCAA’s incurred cost audit initiative.

To identify the challenges DOD faces in addressing the contract closeout backlog and to assess the efficacy of the May 2011 change to the FAR pertaining to quick closeout procedures, we reviewed applicable sections of the FAR, including FAR Subpart 42.3, Contract Administration Office Functions, and FAR Section 42.708, Quick Closeout Procedures, and DFARS Subpart 242.3, Contract Administration Office Functions. We also reviewed Air Force, Navy, Army, and DCMA guidance, as well as DCMA policy on contract closeout and quick closeout procedures, such as the agency’s March 2012 closeout instructions and DCMA’s deviation to the quick closeout procedures. We obtained available data from the Army, Navy, and Air Force, as well as from five commands and nine contracting offices within the military departments. We selected the commands and contracting offices based on factors such as the total reported volume of over-age contracts and interviews with senior DOD officials (see table 4). We determined that the data reported by the military departments was sufficient for our purposes of selecting which commands and contracting offices to review, but did not take steps to assess the reliability of the data collected from the local contracting offices. We collected and analyzed available data and documentation on over-age contracts from DCMA headquarters and the two DCMA contract management offices with the largest volume of over-age contracts—located in Manassas, Virginia, and Baltimore, Maryland. To assess the reliability of DCMA’s data, we reviewed related documentation, interviewed knowledgeable agency officials, looked for obvious inconsistencies in the data, and verified the accuracy of the data when necessary. We also compared the data received from DCMA headquarters to the data that we received from Manassas and Baltimore. From these efforts, we believe the information we obtained is sufficiently reliable for this report. We also interviewed officials from DCMA headquarters and the two contract management offices. Further, we interviewed and collected documentation from officials at DPAP, DCAA, and the Defense Finance and Accounting Service; and from Army, Navy, and Air Force officials at the headquarters and command level, as well as individual contracting offices. We interviewed two contractor industry associations as well as a DOD contractor to obtain their views on the incurred cost audit and contract closeout backlogs.

To address the options outlined in the Committee report, we reviewed applicable laws, such as the National Defense Authorization Acts of Fiscal Years 2004, 2005, and 2007, which had authorities similar to the one we were asked to review, and related agency policies and guidance, such as DCMA’s contract closeout guidebook.
We also solicited input about the availability and potential usefulness of the options in our interviews with DCAA, DCMA, military departments’ headquarters, commands, and local contracting offices, military departments’ general counsel, DFAS, and contractor representatives. We conducted this performance audit from February 2012 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Tatiana Winger, Assistant Director; Arkelga Braxton, Virginia Chanley, Nicole Dery, John Krump, Janet McKelvey, Anh Nguyen, Robert Swierczek, and Omar Torres made key contributions to this report. | DOD has a large volume of contracts that have not been closed on time. Closing a contract includes tasks such as verifying that the goods and services were provided and making final payment to the contractor. Closing contracts within required time frames can limit the government's exposure to certain financial risks. One reason why some contracts are not being closed is the large backlog of incurred cost audits that must first be completed. These audits, conducted by DCAA, ensure that the costs contractors have incurred are permissible under government regulations. The Senate Armed Services Committee report accompanying the National Defense Authorization Act for Fiscal Year 2012 directed GAO to review the criteria and procedures for conducting incurred cost audits, among other things. In response, GAO assessed (1) efforts to reduce the backlog of incurred cost audits and (2) the challenges DOD faces in addressing the contract closeout backlog. GAO reviewed DCAA's policies and procedures for incurred cost audits; analyzed data on the audit and contract closeout backlogs; and interviewed officials in the military departments and agencies. To reduce the backlog of incurred cost audits, the Defense Contract Audit Agency (DCAA) implemented an initiative to focus its resources on auditing contractors' incurred costs that involve high dollar values or are otherwise determined to be high risk. Incurred cost audits are conducted on a contractor's annual proposal that includes all costs incurred on certain types of contracts in that fiscal year. Under the initiative, DCAA raised the dollar threshold that triggers an automatic audit on a contractor's incurred cost proposal from $15 million to $250 million, revised the criteria used to determine a proposal's risk level, and significantly reduced the number of low risk audits that will be randomly sampled. This initiative appears promising, and DCAA plans to track certain characteristics, such as the number of risk determinations made and audits completed. But DCAA has not fully developed the measures by which it will assess whether the initiative reduces the backlog in a manner that protects the taxpayers' interests. Specifically, DCAA does not have a plan for how it will determine whether key features of the initiative, such as the revised risk criteria and the revised sampling percentages, should be adjusted in the future. By 2016, DCAA estimates it will reduce the backlog and reach a steady state of audits, which it defines as two fiscal years of proposals awaiting review.
DCAA's ability to achieve this goal will depend on a number of factors, including the number of proposals determined to be high risk, which as of September 2012 was about two-and-a-half times more than anticipated. Reducing the backlog of incurred cost audits will ease one obstacle to closing over-age contracts, but other obstacles, such as limited data and performance metrics, must still be overcome. The military departments have limited data on the extent and nature of their contract closeout backlog, and the Defense Contract Management Agency (DCMA)--which performs contract administration services for the Department of Defense (DOD)--is missing information that would allow it to identify contracts that it could act on. Such data can cue agencies on how to identify or tailor approaches to address the backlog. Further, the military departments generally do not have performance metrics to measure progress in closing out contracts. The Army recently announced a goal of closing over 475,000 contracts by September 2014; however, it does not yet have the information necessary to know if it can reach this goal and does not have an implementation plan. The Navy and the Air Force had not established any department-wide performance metrics for contract closeout. In contrast, DCMA has established two agency-wide performance metrics related to contract closeout that are regularly monitored. While many officials said contract closeout was not a priority, GAO found some local contracting activities taking action to bring attention to contract closeout. GAO is recommending that DCAA develop a plan to assess its incurred cost audit initiative; that DCMA improve data on over-age contracts; and that the military departments develop contract closeout data and establish performance measures. DOD concurred with the recommendations and identified ongoing and planned actions to address them. |
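The DCAA initiative summarized above is, at its core, a triage rule: proposals at or above the dollar threshold are audited automatically, the remainder are risk-assessed, and only a reduced share of the low-risk proposals is sampled for audit. The Python sketch below captures that flow in schematic form; the risk flag and the sampling rate are placeholders, not DCAA's actual criteria or percentages.

```python
import random

AUDIT_THRESHOLD = 250_000_000   # proposals at or above this dollar value are always audited
LOW_RISK_SAMPLE_RATE = 0.05     # placeholder rate; DCAA's sampling percentages differ

def select_for_audit(proposal_dollars: float, is_high_risk: bool) -> bool:
    """Schematic triage: dollar threshold, then risk level, then sampling of low-risk proposals."""
    if proposal_dollars >= AUDIT_THRESHOLD:
        return True                                  # high dollar value
    if is_high_risk:
        return True                                  # judged high risk under the (placeholder) criteria
    return random.random() < LOW_RISK_SAMPLE_RATE    # low risk: only a sample is audited

print(select_for_audit(300_000_000, is_high_risk=False))   # True, over the threshold
print(select_for_audit(40_000_000, is_high_risk=True))     # True, flagged as high risk
```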
Water pollution comes from two types of sources: (1) specific, single locations, such as industrial waste pipes or sewage treatment plants, known as point sources, or (2) multiple dispersed sources over large areas, such as runoff from farms, ranches, logging operations, and urban areas, known as nonpoint sources. Federal officials believe that significant improvements in water quality can be achieved by reducing nonpoint-source pollution. The watershed-based approach to reducing nonpoint-source pollution has been receiving increasing interest. Addressing nonpoint-source pollution throughout a watershed allows consideration of the entire hydrological system, including the quantity and quality of surface water and groundwater as well as all sources of pollution. Such an approach leads to a holistic treatment, as opposed to piecemeal efforts aimed at individual pollutants or pollution sources. A number of federal agencies have primary roles in watershed projects involving agricultural sources of pollution. The U.S. Department of Agriculture (USDA), through its various agencies and programs, provides technical, financial, educational, and research support to a variety of watershed projects. These projects give farmers the knowledge and technical means they need to voluntarily improve water quality in their watersheds. The Environmental Protection Agency (EPA) also provides technical assistance and funds states’ support for watershed projects and other efforts to reduce nonpoint-source pollution. The Department of the Interior’s U.S. Geological Survey provides technical assistance to individual watershed projects, primarily in the areas of research, mapping, and water quality monitoring, while Interior’s Fish and Wildlife Service enforces the Endangered Species Act, which provides for the protection and restoration of fish and wildlife habitats—two common goals of watershed projects. The Department of Commerce, through the National Oceanic and Atmospheric Administration and the National Marine Fisheries Service, provides technical and financial assistance. Finally, the U.S. Army Corps of Engineers is responsible for issuing permits under the Clean Water Act for the discharge of dredged and fill materials into U.S. waters, including wetlands. Nationwide, federal agencies identified 618 watershed projects that had received federal funds through early 1995. The projects ranged in size from as few as 5 acres to over 150 million acres; about 60 percent covered less than 80,000 acres. Figure 1 shows the distribution of projects by size. The projects were geared toward solving various types of problems. As shown in figure 2, over half of the projects were aimed at surface water, about 7 percent at groundwater, and the remainder at both surface water and groundwater resources. More than 85 percent of the projects addressed multiple types of pollutants, while the rest addressed a single pollutant. As shown in figure 3, nutrients and sediments were the pollution problems most frequently addressed by the watershed projects. The common experiences of the nine projects we looked at suggest that achieving success in watershed-based approaches depends on (1) the flexible application of federal assistance and (2) the ability of local officials to enlist broad support and to craft solutions customized to their local needs. At the federal level, participants believed that financial and technical assistance tailored to locally established goals was more effective than prescriptive solutions. 
At the local level, they identified education, stakeholders’ involvement, and a customized approach to improving water quality as the keys to a successful project. Since each watershed project has unique local characteristics, participants emphasized that federal agencies should adopt a flexible approach, providing funding and technical assistance without prescriptive solutions. In some cases, inflexible federal rules hampered the funding and execution of solutions to watershed problems. Each project we reviewed combined and addressed such characteristics as the type and source of the pollutant, local agricultural practices, impacts, and the community’s attitudes. For example, both the Big Spring Basin and the Tar-Pamlico River Basin projects were initiated to reduce pollution resulting from nutrients. However, the Big Spring Basin project addresses groundwater contamination from agricultural sources, while the Tar-Pamlico River Basin project addresses surface water contamination from municipal and industrial sources in addition to agricultural sources. An approach to mitigating agricultural discharges that would improve water quality at Big Spring Basin would likely have to be modified in the Tar-Pamlico River Basin to mitigate municipal and industrial discharges and would have to take into account that pollutants reach surface water and groundwater in different ways. Similarly, both the Coos Bay-Coquille River and West Stanislaus County watersheds faced problems resulting from erosion. In the Coos Bay-Coquille River project, the erosion was caused by a combination of timber and agricultural practices, while in West Stanislaus the erosion was caused by the runoff of irrigation water. As a result, the two projects need different strategies to reduce erosion. In addition, the Coos Bay-Coquille River project addressed other problems, such as elevated water temperatures, that were not present in West Stanislaus. Because of these combinations of various characteristics, the projects’ participants said that flexible program implementation is crucial to achieving watershed goals. Watershed projects frequently depend on multiple sources of funding and technical assistance obtained through different state and federal programs, each with different requirements. Federal assistance to the nine projects represented about half of the projects’ resources. Agency staff involved in most of the projects we reviewed demonstrated flexibility in working with other participants to meet project goals. For example, USDA and EPA representatives involved in the Huichica Creek project emphasized that they had to find creative ways to work within their respective agencies’ regulations to devise effective strategies and encourage participation by producers. Similarly, participants in the West Stanislaus project said that a key to putting together an effective program was the willingness of federal officials to focus on the overall goal of the watershed project rather than their individual programs. However, in several of the projects we reviewed, the participants also pointed out a need for increased levels of and greater flexibility in the financial assistance provided to farmers. For example: Participants in the West Stanislaus County, Lake Champlain, and Tar-Pamlico River Basin projects said that changes are needed to (1) provide additional funding to farmers for adopting practices that reduce nonpoint-source pollution and (2) separate funding for improving water quality from funding for agricultural conservation. 
According to participants, the annual $3,500-per-farmer maximum placed on Agricultural Conservation Program funding is inadequate to support some of the structural measures needed to reduce nonpoint-source pollution. Furthermore, as currently structured, provisions for cost-sharing cover a variety of agricultural conservation practices, including those, such as leveling land for irrigation and building irrigation canals, that may be more related to water conservation and increased crop production than to efforts to improve water quality. Farmers could apply the $3,500 to practices that increase their yield rather than to practices that reduce agricultural pollution but are of no financial benefit. Efforts to reduce nonpoint pollution from agricultural sources can be hindered when competing conservation goals result in applying cost-sharing funds to practices that have little or no direct relationship to water quality. Participants in the Huichica Creek project pointed out that USDA’s Small Watershed Program, which is primarily geared toward flood control, provides funding for solutions that are unnecessarily complex. For the problems at Huichica Creek, the solutions eligible for funding would cost far more than the simpler solutions the participants used. For example, USDA staff and landowners wanted to stabilize stream banks to minimize erosion, but they believed USDA’s solution of lining stream banks with rocks was too expensive. Instead, they found that planting young saplings low on the exposed stream bank and interweaving their branches to create a living reinforcement was a much cheaper approach. Participants in the West Stanislaus project said the funds received under USDA’s Hydrologic Unit Area program cannot be used for monitoring water quality, and participants in the Coos Bay-Coquille River and the Otter Lake projects noted that funds received under section 319 of the Clean Water Act could not be used for the planning and additional demonstration activities they needed. The projects’ participants also stressed the need for flexibility in the role federal agencies play in providing technical assistance to help farmers implement pollution-reduction practices. USDA staff served on the technical advisory committee for the Otter Lake project, although the overall approach was set by the project’s resource planning committee. In the West Stanislaus County project, USDA staff took a more active role, writing the project plan that established the implementation approach as well as identifying an innovative technical solution: using polymers in irrigation systems to reduce sediment runoff. In addition, USDA provided a sociologist to help develop approaches that would maximize voluntary participation. However, inflexible federal processes were also cited by some participants. For example, in voicing concern over rigid agency procedures and operating methods, Coos Bay-Coquille River project and state government officials said it took 9 months to obtain a permit from the Army Corps of Engineers (which regulates the disposal of dredged and fill material from wetlands) to build an off-channel pond and to spread the few cubic yards of earth removed in the process over a pasture. Project officials said they could not understand why it took so long to get a permit for such a simple project. The nine innovative or successful projects we reviewed were able to adopt a locally driven approach to achieving the goals for the watersheds. 
Key elements in a local approach were educating prospective participants about how water quality improvements would benefit them; achieving consensus among these stakeholders in selecting a project’s goals and approaches; and tailoring the project’s strategy, water quality monitoring, and regulatory enforcement to local conditions. Education and public outreach played an important role in encouraging cooperation in many of the projects we reviewed. Farm demonstration projects and myriad educational activities were used to familiarize farmers and the general public with the relationship between agricultural or other activities and water quality problems and to encourage the adoption of practices designed to reduce these problems. The Big Spring Basin project in Iowa, for example, used an intensive strategy of public education and farm demonstration projects to introduce farmers to management practices that would improve their efficiency and profitability while also reducing the impacts of agriculture on water quality in the watershed. According to participating farmers, the opportunity to observe—through a manure management demonstration project on a neighbor’s farm—that nutrients could be reduced gave them the knowledge and confidence they needed to change their own manure management practices. In the Lake Champlain project, a great deal of effort is being expended to inform the public about the water quality problems affecting the lake and to encourage the community’s involvement. Educational and outreach activities include public meetings, the formation of grassroots environmental groups, videos, newsletters, school presentations, and water quality fairs. Similarly, the Coos Bay-Coquille River project, recognizing the role of future generations, has developed a high school curriculum to help students better understand the watershed they live in and the potential impact their activities have on water quality. Public education can also involve less structured efforts. For example, the Coos Bay-Coquille River and Huichica Creek project staffs noted that they spent a lot of time meeting with potential participants informally, answering questions and concerns over a cup of coffee. In addition to achieving public awareness, projects need to solicit stakeholders’ consensus on goals and approaches, according to participants. Watershed projects typically involve a variety of stakeholders, often having different views about a project’s appropriate scope, approach, and management. The stakeholders may include the government agencies responsible for environmental issues or land management; agricultural, timber, fishery, mining, or other commercial industries; recreational users; municipalities; and urban homeowners. For most of the projects we reviewed, participants agreed that broad-based participation by stakeholders is critical in breaking down barriers and building trust among groups. We noted in several projects that respected community leaders with strong interpersonal skills were instrumental in bringing the stakeholders together. One example of successful consensus building is the Coos Bay-Coquille River project, which was managed by associations comprising representatives of timber companies, private landowners, federal land management agencies, state agencies with water and habitat responsibilities, and other interested parties. 
According to association members, inclusion of the agricultural community in the watershed associations helped members of that community overcome their general distrust of government regulation and negative attitudes about “environmental” initiatives, emanating from federal activities to protect the spotted owl and salmon, which local citizens blamed for harming the local economy. Projects that impose solutions without getting stakeholders’ buy-in have a greater difficulty in achieving success, as illustrated by the experiences of the Tar-Pamlico River Basin project in North Carolina. The project used a two-phase process to address nutrient pollution. The project’s organizers used public hearings to obtain input from those in the watershed and negotiation and consensus to reach agreement on the implementation strategy and water quality goals. Although this worked well for the first phase, the process broke down during the negotiations in the second phase. The state ultimately approved phase two of the project over the objections of environmental and community groups, which disagreed with (1) the goals for reducing nutrients, (2) the allocation of most of the burden for the reduction to nonpoint sources, and (3) the revised formula used to determine the amount of funds that point-source dischargers would contribute to reduce nonpoint-source pollution in an innovative nutrient credit trading program. Under this program, point-source dischargers agreed to contribute to a nutrient credit trading fund whenever they exceeded the discharge limits. The fund would be used to finance more cost-effective actions to reduce nutrient pollution from agricultural nonpoint sources. However, environmental and community groups felt that concessions made to point-source dischargers in the agreement in phase two shifted too much of the financial burden of improving water quality to the agricultural community. Unless steps are taken to address the misgivings of these groups, a key stakeholder is contemplating a lawsuit against the state to block this phase of the project. The experiences of successful projects illustrate that strategies, water quality monitoring, and regulatory enforcement efforts vary, depending on local conditions. For instance, while all the projects generally engaged in some form of planning to ensure that stakeholders agreed on the causes of the problem and the corrective actions needed, they devoted different levels of time, effort, and funding to developing such plans. For example, the Lake Champlain project staff spent significant time identifying the cause of that area’s water quality problem before developing a watershed strategy. They systematically monitored the water quality in rivers and streams feeding into the lake, which allowed them to gradually pinpoint the sources of the problem. In contrast, for the Coos Bay-Coquille River project, which is smaller and has less complex problems, a lengthy planning process was not necessary. The community agreed that sediment and riparian (riverbank) destruction in the Coquille watershed were impeding fish spawning and that the salmon fishery was a resource they wanted to save. They quickly established goals for improving the salmon population and measured progress using fish counts. Similarly, implementation strategies varied according to local conditions and preferences. 
The Big Spring Basin project, for example, heavily emphasized demonstration and educational activities, whereas the Big Darby Creek project undertook relatively few demonstration projects, preferring instead to provide funds to support individual farmers’ practices. The Big Darby Creek project also took advantage of a state program that provides low-interest loans to those who implement solutions for nonpoint-source pollution. The Tar-Pamlico River Basin project developed the innovative nutrient credit trading program described previously, which meets the overall goal for reducing the discharge of nutrients by allowing the point-source dischargers to finance the reduction of discharges from nonpoint sources. While all participants agreed on the importance of evaluating a project’s performance, they tailored the rigor of their evaluations to the project’s goals. The Coos Bay-Coquille River project used fish counts to monitor progress, and the West Stanislaus project used sediment assessments, which were easily accomplished by viewing the color of the farms’ agricultural drain water. In contrast, the Big Spring Basin project had over 50 sites to monitor groundwater flow, conductivity, alkalinity, temperature, nitrates, and pesticides. The projects’ participants pointed out, however, that even given rigorous monitoring, demonstrating a link between changes in land use and diminished chemical pollution is difficult, if not impossible, especially within a short time frame. For example, participants in the Lake Champlain, Tar-Pamlico River Basin, and Big Darby Creek projects noted that current science can demonstrate only a tenuous link between land use practices and water quality, and it may take years for their projects to produce chemical improvements in water quality. Similarly, participants in the Big Spring Basin project said that climatic variations, such as droughts followed by years of heavy rainfall, and other factors have made it difficult to establish a link between changes in farming practices and groundwater quality, despite more than 10 years of monitoring and analysis.

All nine watershed projects we reviewed are striving to promote voluntary participation by farmers, but several felt it was also necessary to provide for regulatory enforcement in case cooperation was lacking. Three states—Wisconsin, North Carolina, and Illinois—and two projects have included regulatory components in their watershed management strategies. Wisconsin has enacted statutes that provide for state enforcement actions, such as revoking cost-share agreements, against uncooperative individuals. North Carolina requires the adoption of certain best management practices. Illinois allows public water suppliers to use watershed management strategies to comply with safe drinking water standards, but if compliance is not accomplished within specified time frames, contingency measures must be implemented. In the West Stanislaus project, one water district, which is responsible for managing irrigation canals and maintaining water quality within its jurisdiction, can withhold irrigation water from farmers who refuse to adopt practices that reduce sedimentary runoff from their fields. In the Huichica Creek project, participants voluntarily developed additional restrictions on the use of certain pesticides, which EPA approved for inclusion on the labels of pesticides sold in the Huichica Creek area. At some projects, such regulatory provisions were considered unnecessary and in fact counterproductive.
For example, the environmental members of the Coos Bay-Coquille River and the Lake Champlain projects said voluntary efforts were the most feasible way of reducing nonpoint-source pollution, given their communities’ resistance to regulatory enforcement. We discussed the facts in this report with USDA officials, including the Special Assistant, Strategic Natural Resources Issues Staff, Natural Resources Conservation Service, and with EPA officials, including the Deputy Director, Assessment and Watershed Protection Division, Office of Water. They fully agreed with the information presented, and we have included their comments where appropriate. We performed our review between December 1994 and June 1995 in accordance with generally accepted government auditing standards. To compile an inventory of federal watershed projects, we contacted officials at USDA, EPA, the Tennessee Valley Authority, and the Department of the Interior headquarters and regional offices to obtain their internal inventories of federal watershed projects addressing water quality problems caused by agricultural production. While we reviewed and refined these lists to eliminate duplication and clarify the descriptive information provided, we did not verify the data provided. To obtain information on the lessons learned at innovative or successful watershed projects, we judgmentally selected and reviewed nine projects from a universe of innovative or successful watershed projects identified by USDA, EPA, and state water quality officials. These nine projects were chosen to reflect a variety of project sizes, locations, agricultural sectors, water quality problems, and management and technical approaches. We cannot make generalizations based on our analysis of these projects since they were judgmentally selected and represent only a small portion of the more than 600 projects nationwide that receive federal funds. We visited each site and discussed the project’s activities in detail with federal, state, and local government officials as well as with the project’s participants. We also reviewed project documents, such as management plans, status reports, and the results of water quality monitoring. Appendixes I through IX discuss each project’s location and problem, genesis and management, planning and funding, key approaches and observations, and accomplishments. We are sending copies of this report to interested congressional committees, the Secretaries of Agriculture and the Interior, and the Administrator of the Environmental Protection Agency. We will also make copies available to others upon request. Please call me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix X. The major lessons of the Huichica Creek watershed project were that (1) federal program guidelines and financial assistance need to be more flexible and (2) involving stakeholders in project planning can result in a high level of participation and motivate landowners to voluntarily seek tougher regulatory restrictions to head off an environmental crisis before it occurs. The Huichica Creek watershed represents about 4,500 acres of rolling to steep hills in California’s Napa Valley, as shown in figure I.1. Huichica Creek drains into the Napa River, which eventually empties into San Francisco Bay. The watershed is primarily vineyards and dairy pasture land. 
The Huichica Creek area, historically considered unsuitable for vineyards, was used primarily for dairy operations and pasture lands. Vintners began to recognize the potential for growing grapes in the Huichica Creek watershed as a result of additional viticultural research and the increasing use of new grape varieties. In 1988, staff from the Napa County Resource Conservation District and the U.S. Department of Agriculture (USDA) began to contact the landowners in the Huichica Creek watershed to discuss the need for a long-range resource management plan. Landowners and vineyard managers were very receptive to this concept, and some had already begun efforts along this line. In 1991, agency staff and landowners joined together in a partnership called the Huichica Creek Land Stewardship. Participants describe the stewardship as a “land use ethic” rather than an organization. The Napa County Resource Conservation District acts as a focal point for stakeholder communication and coordination, and the stakeholders hold meetings when they believe it is necessary. The stewardship issued the Huichica Creek Watershed Natural Resource Protection and Enhancement Plan in May 1993, about 2 years after they began the project. The plan emphasizes (1) advice and information on practices that landowners can use to farm in the watershed without negatively affecting water quality and wildlife habitat and (2) low-tech approaches, such as planting “cover crops” between the rows of grape vines to reduce erosion. Participants are also replacing chemical approaches to pest control with biological ones, such as installing housing to attract insect-eating bats or roosts to attract predator birds that keep the rodent population in check. As shown in table I.1, the Environmental Protection Agency (EPA) and the state were the major government funding sources. However, conservation district staff said that landowners had contributed far more in labor, materials, and funds than the federal and state agencies, although they were unable to estimate the community’s total contribution. Agency staff and participating landowners said that federal watershed program guidelines were too restrictive and inflexible. For example, they said that programs such as USDA’s Small Watershed Program, while beginning to move away from a strong tradition of construction and flood control, were still using a pre-selected menu of engineered practices instead of creative solutions developed specifically for each site. They felt this approach was not sufficient to preserve and enhance the diversity of plants and wildlife in a way compatible with agricultural operations. Furthermore, they believed the solutions arrived at through that process were overengineered for their situation. Agency staff and landowners wanted to stabilize stream banks to minimize erosion, but they believed USDA’s solution of lining stream banks with rocks was too complex and expensive. They found that they could reinforce stream banks by planting young saplings low on the exposed bank and interweaving their branches to create a living reinforcement. This approach cost a fraction of the cost of installing rocks. Participants identified several reasons for the high level of participation in the stewardship and for landowners’ quick acceptance of the Huichica Creek implementation plan—they adopted practices suitable for their operations even before the plan was complete. 
First, landowners were heavily involved in developing the plan and were therefore disposed to implement the recommended practices. Second, having 90 percent (63 of 70) of the Huichica Creek landowners involved in the stewardship facilitated communication and fostered a sense of community. Third, some vintners were also motivated by market concerns, such as potential consumer reactions to pesticide use or the endangerment of a protected species. Stakeholders’ involvement and high participation rates were instrumental in the stewardship’s reaching consensus to seek tougher regulations regarding the use of certain pesticides in the Huichica Creek watershed. These additional regulations, sought by the landowners with technical support from county, state, and federal agencies, were approved by EPA in 1992. As a result, Huichica Creek farmers must comply with 12 additional handling and use requirements on certain pesticides that are potentially toxic to the California fresh-water shrimp. Finally, although the stewardship focuses on getting commitment to changing practices rather than achieving a particular goal, it recognizes the need to monitor results. Therefore, the project includes a number of quantifiable measures to monitor the condition of the watershed. These include monitoring soil structure and quality, endangered species habitat, use of irrigation water, water quality, and the stability of stream banks and channels. The Huichica Creek project’s accomplishments include (1) enlisting 63 of the 70 local landowners to participate in the watershed stewardship, (2) restoring and stabilizing 800 feet of stream banks, (3) planting at least 10,000 trees to revegetate stream banks and the upper reaches of the watershed, (4) planting four demonstration sites to show the suitability of different cover crops to various soil-hydrology combinations, and (5) completing a water survey to estimate the average runoff from each watershed section to help landowners and managers stabilize stream flows. The major lessons of this project were that (1) involving local stakeholders is key to getting voluntary participation, (2) financial assistance limits and inflexible requirements hindered efforts to reduce nonpoint-source pollution, and (3) the threat of regulation can help motivate farmers to take action. The West Stanislaus watershed is located about 70 miles southeast of San Francisco, California, as shown in figure II.1. The watershed occupies 134,000 acres, of which approximately 122,000 acres are irrigated farmland, such as row and field crops, orchards, and vineyards. The watershed encompasses about 400 farms. Eight creeks flow across the watershed and drain into the San Joaquin River. During the arid summer months, the water in the creeks is composed entirely of agricultural runoff, primarily from furrow irrigation. This irrigation method usually results in some erosion, but the highly erodible soil in the West Stanislaus watershed exacerbates the problem. The average level of sediment in the irrigation runoff is 1,500 milligrams of soil per liter, although erosion in some areas reaches as high as 9,000 milligrams of soil per liter. USDA officials describe the irrigation runoff as being chocolate brown in color. Such high levels of sediment have a number of impacts on the San Joaquin River. Of particular concern are organochlorine pesticide residues, especially DDT (dichloro-diphenyl-trichloro-ethane) residues, which persist in the soil for decades. 
In addition to having a negative impact on fish and other aquatic life, the sediment increases needed maintenance for the river, drainage ditches, and canals, which have to be periodically dredged to remove built-up sediment. After almost 20 years of study, the farmers in West Stanislaus decided it was in their best interests to solve the sediment problem voluntarily rather than have a regulatory agency dictate a solution. The state of California is considering a water quality strategy that includes three levels of implementation: voluntary implementation of conservation practices; regulatory or institutional encouragement of conservation practices, such as waiving requirements concerning discharges if practices are implemented; and regulation, such as issuing permits that specify the type, amount, and concentration of pollutants that may be discharged. The West Stanislaus Resource Conservation District sponsored the watershed project and worked closely and cooperatively with USDA staff to establish the overall goals and implementation strategy for reducing erosion. An additional 25 federal, state, and local agencies provided financial and technical support, including EPA, the California EPA's Department of Pesticide Regulation, and the Central Valley Water Quality Control Board. USDA staff took the lead in developing a strategy to achieve the chosen goal of reducing sediment to 300 milligrams of soil per liter of drain water, an 80-percent reduction in average erosion. USDA issued the West Stanislaus Sediment Reduction Plan in February 1992, after it had been reviewed and approved by the resource conservation district. The strategy for reaching the project's goal is to (1) develop and conduct a comprehensive information and education program, (2) provide cost-sharing assistance, (3) provide technical assistance, and (4) provide for monitoring and evaluation. The plan describes 17 conservation practices that reduce erosion, outlines each practice's advantages and disadvantages, and estimates the costs and reductions in erosion. It then provides detailed work sheets to help farmers identify the most cost-effective combination of practices, given their soil and crops. According to USDA staff, the plan does not include a detailed water quality monitoring strategy because the funds received under USDA's Hydrologic Unit Area program cannot be used for water quality monitoring. As shown in table II.1, most of the government funding came from USDA, but farmers also contributed a significant amount in labor and materials. Several factors were considered influential in gaining participation in the watershed project. First, USDA staff and the project's participants agreed that the involvement of the members of the Conservation District's Board of Directors, who are well-known and respected farmers, was a key to garnering local support. If outsiders come into the community with solutions, local farmers are skeptical, because they believe each farm is unique in its combination of crops, soils, and management techniques. Second, to discover how to motivate local farmers to participate in the project, a USDA sociologist was brought in to develop enlistment strategies and estimate the farmers' participation rates. Although participants were initially skeptical, many felt that the sociologist was very helpful in understanding the social and economic currents that contributed to the project's success. Third, participants' ability to see reductions in erosion with their own eyes helped increase participation.
USDA staff developed a guide that shows the color of irrigation drain water at three sediment levels: 300 milligrams of soil per liter, 1,000 milligrams of soil per liter, and 9,000 milligrams of soil per liter. Farmers can easily determine whether they are meeting the goal of 300 milligrams, as well as observe whether their neighbors are meeting it. Financial assistance also helped increase participation, but cost-sharing ceilings and inflexible requirements limited its usefulness. USDA staff noted that farming is viewed as a financially risky occupation; thus, farmers are reluctant to adopt new, unproven practices that could threaten their profit margin. They said that USDA’s cost-sharing program helped mitigate the financial impact of new practices, but farmers are limited to $3,500 per year in cost-sharing assistance. Some farmers told us that this hindered the effectiveness of the program because many of the practices recommended by the USDA technical experts, such as installing irrigation piping that can control water flow, require significant financial outlays. After the plan was developed, USDA staff learned of a new technique that could help reduce sediment but is not eligible for cost-sharing funding. Adding a polymer to the irrigation water causes the sediment to settle out much faster, reducing erosion runoff from the fields. The polymer has been used in water treatment plants for years but has not yet been approved for agricultural use. Participants obtained permission to test the polymer, which costs about $10 per acre, for agricultural uses in West Stanislaus County but had to proceed without federal cost sharing because this treatment is not authorized in USDA’s program guidance. Although the project’s participants were motivated in part by a desire to avoid regulation, some felt that encouraging voluntary participation by itself was insufficient to ensure that the project’s goals are achieved. One of the water districts in the watershed decided to require that farmers reduce the sediment in their runoff to 1,000 milligrams per liter to receive water for irrigation, under the threat of halting water deliveries. The water district, which covers 25 percent of the watershed, has never had to cut off water deliveries, and 90 percent of its farmers are in compliance. Structural and managerial best practices have been adopted on about 20 percent of the watershed, or 25,000 acres. USDA estimates that the project has reduced the sediment reaching the San Joaquin River by about 340,000 tons since the project began and thereby reduced the DDT reaching the river by about 620 pounds. Another benefit of changing irrigation techniques is that farmers have reduced the amount of irrigation water they use by 18 percent, saving about 11,000 acre-feet of water. The major lessons of the Otter Lake watershed project were that (1) a holistic approach is important for successful watershed management and (2) federal financial assistance can be instrumental in improving water quality. Otter Lake covers 765 acres in southern Illinois, as shown in figure III.1. Built in 1968, Otter Lake is one of eight lakes contributing to the public water supply of Macoupin County and provides drinking water and recreational uses for about 14,000 people in seven communities. The lake’s water supply is recharged by runoff water drained from the 12,250-acre watershed, 87 percent of which is agricultural land. 
Excess sedimentation, caused by cropland and shoreline erosion, is a primary cause of the declining water quality in Otter Lake. Sediment has increased turbidity (murkiness), which has reduced aquatic vegetation. It may also be impairing the levels of dissolved oxygen, harming fish reproduction and overall health. All of these factors, in turn, may force a shift in the fish species that populate the lake. Pesticide residues and other organic materials in farm runoff are also impairing the water quality in the lake. In 1993, Otter Lake was one of three public water supply lakes in the county found to have atrazine levels exceeding the standard established by EPA under the Federal Safe Drinking Water Act. Atrazine, an herbicide commonly used on corn, is a potential carcinogen for humans; it is water soluble and takes 15 to 20 years to break down. Responding to a 1990 USDA request to identify and prioritize concerns about water resources, the Macoupin County Soil and Water Conservation District board selected five lakes that contribute to the public water supply for priority attention, including Otter Lake because of its high sediment levels. After the Otter Lake Resource Planning Committee was formed in June 1992, preliminary evidence of atrazine problems in the Otter Lake was discovered. Responsibilities for managing the project are shared by three organizations representing community, state, and federal agencies—a commission, a resource planning committee, and a technical advisory committee. The commission, a quasimunicipal corporation that sells water to seven communities in the watershed, has decision-making authority for all matters related to the lake and surrounding property. The resource planning committee, comprising members from the agricultural community such as farmers and agribusiness leaders, provides local input to define resource concerns and leadership during the development and implementation of the watershed plan. The technical advisory committee, comprising representatives from the Illinois Environmental Protection Agency, Department of Conservation, and Department of Agriculture as well as the U.S. Department of Agriculture, advises the planning committee throughout the project. With guidance from the technical advisory committee, the planning committee decided to pursue funding to implement sediment control measures, recognizing that some of those measures could also be used for atrazine control. However, in August 1993 the Illinois EPA placed Otter Lake on its restricted status list for atrazine. This designation generally prohibits further development and requires compliance within a period of time that varies with the compliance strategy chosen. Otter Lake may choose from among the following compliance strategies: (1) apply water treatment technologies (e.g., activated charcoal treatment), (2) locate a new source of drinking water, (3) blend water from the current source with water from alternative sources, and (4) implement watershed management measures. The atrazine finding encouraged the planning committee to shift from single-issue planning to broader watershed planning and, eventually, to comprehensive “ecosystem” planning. In ecosystem planning, an inventory of regional concerns is developed in addition to an inventory of local community concerns. Best management practices recommended for one resource area must be evaluated to confirm that they do not impair others. 
Although a final ecosystem plan has not yet been approved, the project aims to encourage (1) management changes on 75 to 80 percent of all acres in the watershed and (2) widespread installation of structural measures, such as sediment control basins and artificial wetlands. Otter Lake has received most of its project funding to date from USDA, as shown in table III.1. Otter Lake has used federal financial and technical assistance to help farmers design, demonstrate, and implement various structural and management practices aimed at reducing nonpoint-source pollution in the watershed. For example, the project funded the implementation of certain management practices, such as integrated crop management, a practice designed to minimize pesticide use; the planting of pasture and hay land to reduce erosion; and animal waste management to control nutrients. Furthermore, with the help of state and federal staff, a demonstration project to construct 14 water and sediment control basins and 2 permanent wetlands structures was designed to show how such containment structures, combined with plantings, can remove atrazine and other pollutants from the water system. According to project staff, some farmers have expressed interest in implementing similar structures, but it is uncertain how funding will be secured to meet this demand. According to EPA officials, funding for the additional demonstration projects requested by the farmers is unlikely because such structures would duplicate those already in place in the Otter Lake project area. In addition to financial assistance, flexibility in monitoring a project's results is also important. A USDA official said that some water quality monitoring is needed if a watershed project has numerical goals (e.g., 3 parts per billion for atrazine in Otter Lake), but it does not have to be extensive. The official said that information about water quality helps to educate and motivate farmers. In addition, the official believes that water quality is a good indicator of the overall health of the watershed. At Otter Lake, the Illinois EPA and a chemical manufacturer are sampling and analyzing the lake water, and the water commission is sampling and analyzing the tap water. Monitoring results have shown acceptable atrazine levels in Otter Lake for the last three quarterly test periods. According to a USDA official, the behavior of atrazine in natural systems, including the reasons for fluctuations in Otter Lake itself, is not well understood, so these results are inconclusive. Furthermore, after obligating its fiscal year 1995 funds, the project is expected to reach its acreage goal for implementation of management practices. The major lessons of the Big Spring Basin watershed project are that (1) education is key to a project's success and (2) the relationship between changes in land use and chemical improvements in water quality may be difficult to demonstrate. Big Spring Basin covers about 66,000 acres of northeastern Iowa's Clayton County, as shown in figure IV.1. The area is heavily agricultural, primarily cropland planted in corn and alfalfa and livestock operations. About 220 farmers live in the watershed. In Big Spring Basin, groundwater aquifers (natural underground reservoirs) that supply drinking water are close to the land surface and are thus vulnerable to contamination from surface activities, particularly agricultural operations.
The primary concern about water quality is nitrate contamination of the groundwater, which can occur through downward percolation of nitrogen from manure or chemical fertilizers applied to cropland or from surface runoff that enters the groundwater system through sinkholes or other passageways. Some herbicides and insecticides also have been detected in the Big Spring Basin’s groundwater. In Big Spring Basin, data on water quality dating back to 1961 showed strong correlations between the use of nitrogen fertilizers and the concentration of nitrates in the groundwater. During the next 2 decades, the use of nitrogen fertilizer in the watershed more than doubled, while nitrate concentrations in Big Spring tripled. In 1981, the Iowa Geological Survey began extensive, continuous water quality monitoring in Big Spring; by 1983, the survey had documented a steady decline in the quality of the groundwater in the watershed. At the same time, numerous meetings were held, educational activities conducted, and task forces convened to discuss the problem. In late 1983, the Northeast Iowa Conservancy District and the Iowa Cooperative Extension Service formed the Ad Hoc Karst Committee, later renamed the Iowa Consortium on Agriculture and Groundwater Quality, to design a multiagency, tiered research and demonstration project for the Big Spring Basin. The Consortium, with representatives from numerous federal and state agencies, developed the Big Spring Basin watershed proposal, which provided a broad outline for project activities. The project’s day-to-day activities were managed by local project coordinators with the Iowa State University Extension. Educational, technical, and financial assistance is provided by various federal, state, and local agencies, such as USDA and EPA, Iowa’s Departments of Natural Resources and Agriculture, and Iowa State University. The planning process for Big Spring Basin was informal, and participants did not produce a formal watershed management plan. The Consortium targeted key problem areas and developed a nonregulatory model for the Big Spring Basin Demonstration Project. Its objectives were to (1) reduce the potential environmental impacts of agricultural practices and (2) enhance the efficiency and profitability of farm management. These objectives would be met through a 7-year, integrated education, demonstration, research, and monitoring effort focused primarily on nitrogen management. Technical and financial assistance would be provided to participating farmers. When the project officially began in 1986, participants had difficulty securing funding because federal and state funding sources were geared toward protecting surface water, not groundwater. Because of the limited funding, project leaders targeted the 1,005-acre Bugenhagen Subbasin, a microcosm of the larger basin, for the project’s initial effort. Like Big Spring Basin, the subbasin drains to a single outlet—a sinkhole—which provided good conditions for monitoring. Over time, the project received funding from numerous sources, as shown in table IV.1. According to project staff, total financial support for the demonstration project was larger than it was for many watershed projects because of the scope and intensity of the education and monitoring conducted. Farmers and project staff alike gave enormous credit to the project’s coordinators for its success. 
The coordinators selected for the project had a long-standing involvement in the area as county staff providing technical assistance to the agricultural community. As a result, they were regarded as credible and trustworthy, which was a critical factor in encouraging farmers’ participation in the project. The coordinators drew on their familiarity with farming conditions and practices in the watershed to identify solutions that would be compatible with the farmers’ needs and abilities. Demonstration projects and other educational activities were deemed important because the project had no regulatory component and depended on voluntary participation. In the Bugenhagen Subbasin, project staff worked one-on-one with farmers to provide more intensive education and technical assistance. Demonstration projects involved nitrogen management, soil erosion, pest management, weed control, conservation tillage, and energy conservation. Project staff provided education and technical assistance to farmers for specific practices, such as establishing realistic yield goals, soil sampling, and soil nitrate testing. Publicity and outreach activities, ranging from public meetings and field days to publications in newspapers and newsletters, were used to increase the community’s awareness about the project. The project employed an extensive network of over 50 monitoring stations to generate detailed information on the changes in water quality that accompanied improved farm management (e.g., water flow, conductivity, alkalinity, temperature, nitrates, and pesticides). Surveys of farmers’ practices were conducted both in the subbasin and throughout the basin. Showing a link between overall declines in nitrogen use and chemical changes in the groundwater is difficult. According to project staff, the effects of reducing nitrogen levels over 10 years cannot be isolated from the effects of other factors, particularly climatic variations. For example, the resulting changes in water volume caused by drought conditions in 1988 and 1989, followed by exceptionally wet conditions in subsequent years, affected the nitrate concentrations. Other factors complicating an analysis of effects on water quality include changes in application rates and in land use and cropping patterns. Project staff acknowledge that measuring the project’s impact is generally difficult because much is still unknown about the movement and disposition of contaminants in groundwater systems. Recognizing the need to improve the understanding of how changes in land management eventually affect water quality, USDA is applying a computer modeling program called AGNPS (Agricultural Nonpoint Source) to the extensive data collected in Big Spring Basin so the agency can estimate how reductions in the rates of pesticide and fertilizer application eventually affect water quality. Throughout the basin, more than 200 farmers voluntarily decreased their use of nitrogen as a result of the project. The average amount of nitrogen fertilizer used for corn production between 1981 and 1991 decreased by 33 percent, with no loss of yields. By contrast, the county and statewide rates of nitrogen use declined by 20 percent during the same period. Cumulatively, Big Spring Basin farmers reduced nitrogen use by nearly 1-1/2 million pounds from 1981 to 1991, for estimated cost savings of about $266,000. In the subbasin, 9 of 11 farmers, controlling 98 percent of the total acreage, entered into 7-year contracts for soil and water conservation. 
From 1987 through 1991, annual soil savings of 64 percent were recorded for about 900 acres of cropland and permanent pasture. Some form of best management practice for pesticides and nutrients was implemented on all acres in the subbasin. The rates of nitrogen use in the subbasin were reduced by about 10 percent. The major lessons of the Tar-Pamlico River Basin watershed project are that (1) flexible and innovative approaches (in this case, pollutant trading) may offer more cost-effective alternatives for improving water quality and (2) the consensus process is essential for maintaining cohesion between stakeholder groups and keeping them committed to the project's goals. North Carolina's Tar-Pamlico River Basin watershed, comprising about 3.5 million acres, stretches 180 miles southeast from the state's hilly north central portion, through the coastal plain region, to empty into Pamlico Sound, as shown in figure V.1. The watershed, which is relatively undeveloped land, encompasses about 365,000 residents, at least nine threatened or endangered freshwater mussel species, and all or part of three national wildlife refuges. Water quality problems in the Tar-Pamlico River Basin watershed have been known for a number of years. About 22 percent of the fresh-water streams in the watershed are impaired by sediment, acidity, and high fecal coliform bacteria counts; several lakes suffer from excessive nutrients; and more than 50,000 acres near the mouth of the Pamlico River are periodically stricken with algae blooms, fish kills, crab and fish diseases, and closed shellfish waters. Both municipal and industrial point sources, as well as nonpoint sources that include agriculture, contribute nutrients to the Tar and Pamlico Rivers. Point sources, such as waste treatment plants and industrial factories, discharge waste water that contains nutrients such as nitrogen and phosphorous. The primary sources of nutrient pollution from agricultural nonpoint sources are animal manure and chemical fertilizers applied to cropland. In response to a petition by a local citizens' group (the Pamlico-Tar River Foundation) and recommendations made by the state's Division of Environmental Management, North Carolina designated the entire Tar-Pamlico River Basin as "nutrient sensitive waters" in September 1989. This designation required the development of a comprehensive strategy to reduce pollution in the watershed. In developing this strategy, the North Carolina Division of Environmental Management worked with three industry and community groups that represented the various stakeholders: the Tar-Pamlico Basin Association, a point-source dischargers group; the North Carolina Environmental Defense Fund, a nonprofit environmental group; and the Pamlico-Tar River Foundation. The state initially proposed several steps to reduce the nutrients entering the river system. To reduce the municipal and industrial contribution, the state proposed including nutrient limits in the permits it issues to point-source dischargers. To reduce the agricultural contribution, the state planned to rely on North Carolina's Agriculture Cost Share Program to financially assist farmers in voluntarily addressing nonpoint nutrient pollution from cropland and concentrated animal operations. The Tar-Pamlico Basin Association estimated it would cost point-source dischargers $50 million in plant and equipment upgrades to comply with the state's proposed discharge limits.
Instead, the environmental, citizens', and point-source groups proposed an alternative that had two major phases. First, association members (i.e., point-source dischargers) would have engineering studies performed on their facilities and would make the operational changes and minor investments found necessary to optimize the removal of nutrients from their discharges. In connection with this aspect of the agreement, the members as a group would reduce the nutrient content of their discharges by at least 25,000 kilograms each year to reach a group limit of 425,000 kilograms per year by the end of 1994. Second, rather than requiring expensive plant and equipment upgrades in order to achieve their nutrient limits, the North Carolina Environmental Defense Fund proposed that point-source association members be allowed instead to contribute to an innovative nutrient credit "trading" fund. The fund would be used to finance more cost-effective actions to reduce nutrient pollution from agricultural nonpoint sources. Association members agreed to contribute $56 to the fund for every kilogram of nutrients discharged in excess of their group's limits. In December 1989, the state approved and adopted the plan, which is now known as the Tar-Pamlico Nutrient Sensitive Waters Implementation Strategy. The major stakeholders agreed that the details of the two-phase plan would be spelled out in an agreement that would periodically be reviewed and updated. Successful consensus building led to smooth implementation of phase one, resulting in goals for reducing nutrients that were accepted by all parties and an updated phase-one agreement that was signed by all three participating organizations in February 1992. The funding for phase one, which ended in December 1994, is shown in table V.1; the sources include voluntary participants (farmers). The Tar-Pamlico Association's contribution includes a $750,000 grant obtained from EPA under the Clean Water Act. However, problems that emerged during the planning of phase two portend future difficulties for the program. The consensus began to break down during phase two, which began in January 1995 and is scheduled to end in December 2004. A study performed in phase one indicated the need for a goal of a 45-percent reduction in nitrogen levels in phase two. However, because of uncertainties about the accuracy of the model used in the study, the state decided to institute an interim reduction goal for nitrogen of 30 percent (a reduction of 583,000 kilograms per year) and maintain the current discharge limits for phosphorous. Most of the 30-percent reduction goal for nitrogen was allocated to nonpoint sources. Half of the nonpoint allocation is to come from agricultural sources. Also, under phase two, the nutrient credit trading rate that association members would be required to pay was reduced from $56 to $29 for every kilogram over the limit on the basis of the results of a study. The state also agreed to credit the association for the amount of its contribution that had not been spent in phase one. About $450,000 of the association's contribution to the nutrient credit trading fund had not been spent by the end of phase one, so state officials recomputed the amount of the remaining nutrient credit this figure represented on the basis of the new $29 rate. Thus, the association started phase two of the project with a nitrogen credit of over 22,000 kilograms.
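The arithmetic behind the trading mechanism is straightforward, and the following short sketch (in Python) is included only as an illustration. It uses the group discharge limit and the per-kilogram rates described above; the discharge and unspent-balance figures in the example are hypothetical and are not intended to reproduce the credit amounts actually negotiated by the parties.

# Illustrative sketch (hypothetical figures) of the Tar-Pamlico nutrient credit trading arithmetic.
GROUP_LIMIT_KG = 425_000   # association-wide nutrient discharge limit, kilograms per year
PHASE_ONE_RATE = 56        # dollars owed per kilogram discharged over the limit in phase one
PHASE_TWO_RATE = 29        # dollars owed per kilogram over the limit in phase two

def trading_fund_contribution(discharge_kg, rate_per_kg):
    """Dollars a member group would owe the trading fund for one year's discharge."""
    excess_kg = max(0, discharge_kg - GROUP_LIMIT_KG)
    return excess_kg * rate_per_kg

def credit_from_unspent_balance(unspent_dollars, rate_per_kg):
    """Kilograms of nutrient credit represented by an unspent fund balance."""
    return unspent_dollars / rate_per_kg

# Hypothetical year: the group discharges 450,000 kilograms under the phase-one rate.
print(trading_fund_contribution(450_000, PHASE_ONE_RATE))   # 25,000 kg excess -> $1,400,000
# Hypothetical unspent balance of $100,000 converted to credit at the phase-two rate.
print(credit_from_unspent_balance(100_000, PHASE_TWO_RATE))  # about 3,448 kilograms

The credit figure reported above for the association reflects the state's own recomputation and is not derived from this simplified sketch.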
The major stakeholders could not reach a consensus on the overall phase-two reduction goals and the allocation of reductions between the point and nonpoint sources. State officials nevertheless approved phase two because they felt it was a good compromise between the positions of the point sources and of the environmental and citizens’ groups. In addition, state officials said phase two had to be coordinated with the state’s 5-year cycle for reviewing its basinwide watershed management plans and the concurrent basinwide approval of all point-source discharge permits. Although the state’s action may have been expedient, it could have a significant impact on the project’s future. The North Carolina Environmental Defense Fund and the Pamlico-Tar River Foundation refused to sign the phase-two agreement. These two organizations believed that the concessions made to the point-source dischargers undermine the effectiveness of nutrient credit trading and shift too much of the financial burden for reducing pollution to the nonpoint sources. An official from the North Carolina Environmental Defense Fund said that the Fund may file a lawsuit against the state unless the state develops a workable plan for achieving the large reductions in nutrients that are to come from nonpoint sources. The project’s participants recognized that success depended on including local citizens and officials in the planning process. During phase one, the state and the North Carolina League of Municipalities sponsored two public workshops in 1994 to familiarize the public with the plan, solicit comments, and broaden stakeholders’ education and participation. Priorities compiled from these meetings included the need to increase public education and stakeholders’ participation, improve the control of nonpoint-source pollution, identify and target problem areas and resources in the river basin, consider land use planning and property rights, improve data on water quality, improve funding and regulatory enforcement, and consider cost-benefit relationships. The project’s participants said these meetings also helped break down barriers and misperceptions between various competing groups. People began to acknowledge that they all contributed to water quality problems in one way or another and that protecting the watershed was in everyone’s best interests. Also during phase one, the state began a demonstration project in the Chicod Creek Subbasin to reduce agricultural discharges. Farmers in that area were encouraged to participate in a voluntary program to implement various agricultural best management practices in order to reduce nutrient runoff. State officials said that demonstration projects greatly increase voluntary participation in watershed projects because farmers generally stay with practices that are “tried and true.” They tend to wait to see what experience their peers have with a new practice before they adopt it, even if the practice is said to be financially beneficial. However, some participants in the project thought that most farmers want to be good stewards of the land and would make the needed changes if they had the funds and expertise. Thus, the availability of financial and technical assistance is important to the farmer. Participants noted that supplemental funding sources, such as the state cost-sharing program, are important to watershed projects because sufficient federal funding may be hard to obtain. 
They said that USDA’s funding for improvements in water quality at individual farms is severely constrained because actions to improve water quality are considered just one of several competing practices subject to an overall annual cap of $3,500 per farmer for projects funded under the Agricultural Conservation Program. State officials believe that the targeted 50-percent reduction in nitrogen pollution from agricultural sources in the watershed will cost about $8.5 million and have recommended that funding for the state Agriculture Cost Share Program be increased. State officials also said that watershed projects would benefit greatly from increased communication, coordination, and cooperation between the states and all the federal agencies. For instance, USDA staff encourage farmers to plant grasses at the edge of cultivated fields that serve as buffer zones for runoff. However, better coordination between USDA and the U.S. Fish and Wildlife Service would ensure that grasses planted also provide good wildlife habitat. State officials said that accomplishments have not yet been reflected in the results of water quality monitoring for the estuary and may not be measurable for many years. They believed that other indicators, such as a growth in fish populations, may be better short-term indicators of success. These officials feared that if federal agencies measure success on the basis of short-term water quality monitoring data alone, future funding could be in jeopardy. Agency officials and participants both preferred voluntary programs to control nonpoint agricultural discharges over regulation. They believed that people would continue to take the day-to-day actions necessary to improve water quality only if they are truly committed to them. However, some state officials, as well as one farmer we spoke to, believed that there must also be a regulatory enforcement component to encourage early action and to take care of polluters who do not comply. As a result of engineering studies performed early in phase one, point-source dischargers in the Tar-Pamlico Basin Association reduced their nutrient discharges below the state limits through relatively inexpensive equipment upgrades and operational changes. In fact, point-source nutrient discharge levels have been below the state’s limits for nitrogen and phosphorous for every year since the phase-one agreement was signed. By April 1993, the end of the sign-up period for the Chicod Creek demonstration project, 27 of the 32 confined animal operations located in the subbasin had agreed to implement management practices to reduce nonpoint-source pollution. Waste management plans had been written for 6 of the 12 highest-priority operations, and construction of the various containment structures required had begun at 2 of these sites. The major lessons of the Big Darby Creek watershed project are that (1) in the absence of an immediate water quality crisis, financial incentives can be useful in stimulating participation in a project and (2) the link between changes in land use and improvements in water quality may be difficult to demonstrate in a large watershed. The Big Darby Creek watershed covers about 371,000 acres of Ohio’s central lowlands on the eastern edge of the Corn Belt, as shown in table VI.1. The terrain is generally flat land and gently rolling hills. 
About 1,170 farms are located in the watershed, with the steeper upper portion containing small farms of about 60 acres and the flatter lower portion containing larger farms of about 300 acres. More than 80 percent of the land in the watershed is devoted to crops, and there is some livestock pasturing. The two major threats to water quality in the Big Darby Creek watershed are agricultural and urban nonpoint-source pollution. The major agricultural pollutant is sedimentation, caused by the widespread use of conventional tilling practices and stream bank erosion. Increased sediment has impaired stream habitat and the feeding and spawning activities of fish and other aquatic life. Agriculture also has created limited nutrient and pesticide problems in the creek. Experts estimate that although the watershed is now one of the healthiest in the Midwest, up to 25 percent of Big Darby’s aquatic species may be lost in the near future if land management practices are not changed. The Big Darby project was conceived jointly by the Ohio chapter of the Nature Conservancy and USDA. Before the project began, the Conservancy spearheaded the creation of the Darby Partners, an association designed to facilitate closer communication and coordination among all the stakeholders in the watershed. The Partners now comprise more than 40 public and private organizations committed to working together to protect the creek. The Darby Partners review and assist in the implementation of the Big Darby project. For example, the Partners identify practices and funding sources appropriate for farmers, serve as a clearinghouse for funding applications, and use committees and subcommittees to coordinate individual farmers’ projects. In 1990, USDA approved a proposal to include Big Darby Creek in its Hydrologic Unit Area program, which targets areas facing significant threats to water quality from agricultural nonpoint sources. The program provides technical and financial assistance to encourage landowners to voluntarily adopt best management practices. Under this program, the objective is to maintain or improve the unique, high-quality stream and its watershed by using innovative approaches to reduce sedimentation and levels of nutrients and pesticides while maintaining a viable agricultural economy. The specific goals of the project are to (1) reduce sediment in the creek by 40 percent, (2) protect 3,200 acres of riparian corridor, (3) reduce nutrient and pesticide levels, and (4) protect 21 miles of stream banks. Following Big Darby’s selection as a hydrologic unit area, the Nature Conservancy designated Big Darby as one of its 12 “Last Great Places,” which enabled it to begin funding, conducting, and coordinating environmental conservation programs there. The Partners did not create a formal watershed management plan but relied instead on three basic documents to guide their work: (1) the original hydrologic unit area project proposal, (2) a Forest Service watershed inventory, and (3) the Nature Conservancy’s watershed plan, which focused more on urban issues. Project staff we met with acknowledged the importance of planning but stressed that it must lead to action, not to documents that sit on a shelf. 
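Because the project's objectives are stated as quantified targets, progress can be summarized as the share of each target achieved. The short Python sketch below is a hypothetical illustration of that bookkeeping, not a tool used by the project; the targets are those listed above, and the "achieved" values are invented placeholders. The third goal, reducing nutrient and pesticide levels, is not quantified in the proposal and so is omitted.

# Hypothetical illustration of tracking progress against the Big Darby project's quantified goals.
# Targets come from the project goals described above; the achieved values are placeholders.
goals = {
    "sediment reduction (percent)":        {"target": 40,    "achieved": 23},
    "riparian corridor protected (acres)": {"target": 3_200, "achieved": 1_100},
    "stream banks protected (miles)":      {"target": 21,    "achieved": 8},
}
for name, g in goals.items():
    percent_of_goal = 100 * g["achieved"] / g["target"]
    print(f"{name}: {percent_of_goal:.0f} percent of goal")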
Collectively, the Soil and Water Conservation Districts in the six counties in the watershed identified the following problem areas: (1) soil erosion and sedimentation from croplands, (2) the widespread lack of management of nutrients and pesticides, (3) poor management of animal wastes, and (4) livestock's access to streams. USDA identified a general list of best management practices to be implemented by the project and estimated the level of funding needed for these practices: about $9 million over 3 years. Almost half of Big Darby's funding has come from USDA, as shown in table VI.1; other sources listed in the table include the Kellogg Corporation, through a grant to the Operation: Future Association. The Big Darby project provides financial and technical assistance and educational opportunities to encourage and facilitate farmers' implementation of best management practices in all six counties in the watershed. Since Big Darby is not facing an immediate crisis in water quality, the project's leaders recognized that some farmers need incentives to participate in the watershed project. According to project staff, financial assistance has helped make nonpoint-source pollution more of a priority for some people. One approach to providing financial assistance is Ohio's recently established low-interest loan program for qualifying individuals and private organizations that want to implement projects to control nonpoint-source pollution. Under this program, an applicant who has received a certification of qualification from a conservation district can take the certification to a participating bank. If the bank approves the loan, the interest rate will be discounted, usually by 3 percent, from the normal lending rate. Regarding technical assistance, project staff noted that it is important to understand farmers' needs and find practices that are compatible with those needs. Different programs have different purposes and requirements, and farmers need flexibility to choose among a program's tools or even expand the toolbox. An assortment of educational and outreach activities, such as farm tours, workshops, canoe trips, expositions, videos, the use of mass media, and school events, is being directed to the general public and to landowners to increase their awareness of water quality and encourage interest in the Big Darby watershed. Although some farm demonstration projects have been conducted, project staff prefer to use funding for farmers' specific projects, estimating that the cost of 1 demonstration could pay for about 10 farmers' projects. Assessments of the biological, physical, and chemical aspects of water quality are being conducted in Big Darby Creek. According to state officials, biological monitoring may be the best method for assessing problems with nonpoint-source pollution. They stressed that such monitoring should be a separate element and in place before individual watershed projects are begun. However, a project's performance can be validly assessed using other indicators, such as fish counts, best management practices adopted, and farmers' attitudes, according to project staff. Water quality monitoring in Big Darby is not tied directly to farmers' implementation activities. According to project staff, farmers are sometimes frustrated by data limitations and the project's inability to show results. They acknowledged that a better link between monitoring and day-to-day project activities could help show participants that their activities are having a positive effect.
They cautioned, however, that current science can demonstrate only a tenuous link between land use practices and the chemical aspects of water quality. Some sociological data also has been collected. Focus groups showed that farmers were generally enthusiastic about collaborating with agencies to achieve a greater goal and that their primary concerns were the protection of stream corridors and control of suburban encroachment. During the last 3 years, the biological integrity of the watershed’s streams has remained constant, while sediment, pesticide, and nutrient levels have fluctuated. The Big Darby project has reached 57 percent of its goal of reducing sediment by 50,000 tons per year, and 98 producers have installed one or more structural enhancements or implemented management practices in 1994 to reduce nonpoint-source pollution. The major lesson of the Coos Bay and Coquille River watershed projects is that involving local stakeholders in planning and implementing a project can help overcome a community’s suspicion of government-sponsored initiatives and result in a cooperative partnership of community interests and government agencies. Coos Bay and the Coquille River are adjacent watersheds covering about 2,000 acres along the southern Oregon coast, as shown in figure VII.1. The terrain is composed of steep, heavily timbered hills interspersed with pasture land leading to pastures on drained wetlands along various rivers and creeks. The local economy depends heavily on timber, commercial fishing, and agriculture. The Coos Bay-Coquille River area is an important spawning and winter rearing habitat for salmon and other anadromous fish. The fish spawn in the gravel-covered stream beds near the headwaters of the creeks, and the juvenile fish linger in the cool, heavily shaded areas downstream until they are mature enough to head out to sea. The salmon population has severely declined for several reasons, including the impact that timber and agricultural activities have had on the spawning and rearing habitat. Sediment from timber runoff and eroding banks in pasture land has silted over the gravel spawning grounds and decreased the amount of dissolved oxygen available to the fish. Destruction of habitat—by, for example, straightening streams—causes juvenile fish to be swept out to sea before they are mature enough to survive ocean conditions. Finally, temperatures in parts of the Coquille River reach 80 degrees, much warmer than the 50 to 60 degree temperature suitable for fish. The initial effort was a 1991 demonstration project on Larson and Palouse Creeks, tributaries of Coos Bay, that was funded through EPA’s Near Coastal Waters Program. The effort was prompted by a complaint from owners of one of the Coos Bay oyster beds, which had been closed because of fecal contamination. In addition, the state identified these creeks as dangerous for recreation because of the high fecal coliform count. The project’s goal was to reduce the coliform count from 16,000 per 100 milliliters to 200 per 100 milliliters. The government agencies involved in the project called a community meeting to elicit citizens’ concerns about water quality. The potential listing of the coho salmon as an endangered species was a major concern for landowners along the creeks. Attendees also identified drinking water quality, access to creeks in order to water livestock, land loss due to erosion, and suitability for recreational use as their primary concerns. 
The community organized two watershed associations, one focusing on the Coos Bay watershed and the other on the adjacent Coquille River watershed. Each association has an executive council that sets the overall policy and direction for the project. Watershed members include timber companies; private landowners; federal land management agencies, such as the Bureau of Land Management and the Forest Service; state agencies with water and habitat responsibilities, such as the Oregon Department of Environmental Quality; and other interested parties, such as local seaport operators and environmental groups. Both associations issued an action plan in 1994 after spending less than a year planning and developing their overall approach. The plans include quantifiable goals and a monitoring strategy. For example, the Coquille River watershed plan identified three goals: (1) meeting the Clean Water Act's standards, (2) enhancing fish survival and production, and (3) creating understanding and acceptance in the community of the need for sustainable economic activities that are compatible with long-term resource conservation. The evaluation strategy includes monitoring a variety of parameters, such as stream temperature, stream flow, and fish spawning and juvenile populations. Both plans emphasize voluntary participation and community education, and both advocate simple, low-technology approaches like (1) installing fencing to minimize damage to streambanks caused by livestock and thus reduce erosion, (2) planting shade trees along the creeks to reduce the water temperature, and (3) building small pools, called off-channel ponds, alongside the creek to provide a rearing habitat for juvenile fish. EPA staff told us that they could not help fund the planning activities for the Coos Bay-Coquille River projects because, at that time, funds received under section 319 of the Clean Water Act could only be used for implementation, not planning. The Coos Bay project is funded almost equally by federal and state agencies, whereas the Coquille River project is funded primarily by federal agencies, as shown in tables VII.1 and VII.2 (the landowners' contributions shown in the tables are estimated). The projects' participants emphasized that getting the local community to agree that a water quality problem existed and needed to be addressed was critical in making the project viable. The local community is suspicious of government regulation and very protective of private property rights. The community is particularly resistant to projects with an environmental slant, because many blame federal and state efforts to protect the spotted owl and salmon population for high unemployment in the timber and fishing industries. Participants believed that public education and outreach were a major factor in overcoming this resistance. Because many in the community were suspicious of the projects, participants spent a great deal of time making formal and informal contact with members of the community to explain the scope and approach of the projects and reassure the public about the projects' intent. Members of the Coquille River Watershed Association are also developing a high school curriculum to improve students' understanding of the watershed they live in and how their activities affect water quality. Emphasizing stakeholders' involvement capitalized on the fact that many landowners really wanted to help their neighbors by improving water quality and revitalizing the salmon population.
Participants said involving stakeholders helped ensure that all economic interests were represented and considered when defining the problem and developing a solution. Representatives of the timber, fishery, and agricultural sectors explained their operations and needs, and these were taken into consideration in developing the projects’ strategy. Participants emphasized that the projects could not progress until stakeholders move beyond blaming each other for the current problem and begin concentrating on the solution. Involving stakeholders also helped the government agencies to move beyond focusing on their own missions to focusing on the overall condition of the watershed. Historically, government personnel had seldom communicated with each other. For example, one agency official noted that the state was doing studies and building in-stream structures, such as inserting logs, old trees, and other woody debris to slow the stream flow; the U.S. Army Corps of Engineers was dredging canals and sloughs to improve drainage; the Bureau of Land Management was undertaking projects on federal land, such as reengineering access roads to minimize erosion; and USDA was working with private landowners to reduce erosion and runoff from animal wastes. However, the agencies were not looking at how these efforts related to each other. The watershed associations have given the agencies a forum for sharing information, and coordination among them has greatly improved. Coos Bay and Coquille River project staff, participants, and state government officials voiced concerns about inflexible federal processes. For example, project staff and state government officials said it took 9 months to obtain a permit to build an off-channel pond and spread the few cubic yards of earth removed across a pasture. A permit had to be obtained from the Army Corps of Engineers because it has authority over disposal of dredge and fill materials into U.S. waters and wetlands. Participants in the project said they could not understand why it would take 9 months to issue a permit for such a simple project. The Coos Bay project on Larson Creek has reached its goal of lowering the fecal coliform count to 200 bacteria per 100 milliliters, allowing the oyster beds that had been closed for 13 years because of fecal contamination to be reopened. In addition, the number of adult fish returning to spawn in the tributaries of Coos Bay has doubled over the previous year. However, project staff noted that factors other than the project, such as overall ocean conditions, can also affect the number of fish returning to spawn. About 20 of the 2,500 landowners along the Coquille River are participating in the program. Participants estimate that the Coquille River project has fenced and replanted about 45 miles of streambanks and built five off-channel ponds. The major lessons of the Lake Champlain Basin Watershed Project are that (1) project management must be flexible enough to span multiple jurisdictions, (2) watershed efforts must be driven by stakeholders’ concerns and supported by local participation, and (3) diversified funding sources and good communication with state and provincial legislatures are essential to sustained success. The Lake Champlain basin spans about 5.3 million acres of mostly rural land, of which about 56 percent is located in Vermont, 37 percent in New York, and 7 percent in Québec, Canada, as shown in figure VIII.1. The watershed is home to a population of over 600,000 people. 
Lake Champlain and its basin abound with historic and Native American cultural artifacts and were designated part of a biosphere reserve by the United Nations in 1989. Overall, Lake Champlain is considered healthy from the standpoint of water quality. However, various sections of the lake are experiencing problems caused by excessive nutrients, particularly phosphorous. Phosphorous acts as a fertilizer, causing algae and plants to grow more rapidly. When excessive weeds and algae die and decompose, they use up the dissolved oxygen in the water required by fish and other species. Data indicate that the phosphorous levels in the lake need to be reduced by 200 metric tons per year to address the problems associated with accelerated plant growth and that 68 percent of this reduction should come from nonpoint sources of pollution, such as agricultural land. Other problems include (1) annual beach closings in both New York and Vermont because of high counts of fecal coliform bacteria and the presence of pathogens; (2) the presence of toxins, such as mercury and PCBs (polychlorinated biphenyls); (3) nuisance aquatic plants, such as water chestnuts, that discourage recreational use; and (4) nonnative species, such as zebra mussels and sea lamprey, that threaten native mussel and fish species. Water quality problems in Lake Champlain were recognized as far back as 1905 by the U.S. Geological Survey. However, attempts to establish a long-lived institution for the management of Lake Champlain and its watershed have been unsuccessful. The most recent effort, the Lake Champlain Special Designation Act of 1990, elevated Lake Champlain to a protection category shared by only a few national lakes and estuaries. Under the act, EPA was required to establish a management conference tasked to develop, within 5 years, a comprehensive pollution prevention, control, and restoration plan for Lake Champlain and its watershed. The Lake Champlain Basin Program was established to coordinate the activities envisioned under the Lake Champlain Special Designation Act. The Basin Program, jointly administered by EPA, the states of New York and Vermont, and the New England Interstate Water Pollution Control Commission, serves as an umbrella for the numerous cooperating agencies, organizations, and individuals working to develop the plan. Altogether, some 227 regional, state, provincial, or federal entities are involved in the planning effort for Lake Champlain and its watershed. The Lake Champlain Management Conference is the Basin Program's primary decision-making body. It is a 31-member board representing a broad spectrum of stakeholders' interests within the watershed from both New York and Vermont, including local residents; environmentalists; farmers; marina owners; fishery specialists; scientists; industry and business representatives; and local, state, and federal government officials. Although it has an independent function, the Joint New York-Vermont-Québec Lake Champlain Steering Committee also participates in the planning process, which involves both regional policies and cooperation with Québec. The Management Conference allotted considerable funding to priority research for the first 2 years. Other funds were spent on data management efforts, demonstration projects, education and outreach efforts, and administration of the Lake Champlain Basin Program.
Projects were also funded in four major areas: water quality; living natural resources, such as threatened and endangered fish and wildlife; human activities, such as recreation and cultural, economic, and health concerns; and support studies, such as data gathering and monitoring. All projects require a 25-percent minimum in matching funds from anyone undertaking the work. The Lake Champlain Management Conference is scheduled to "sunset" (i.e., terminate under its authorizing legislation) in March 1996, upon completion of the final management plan, which must be approved by the governors of New York and Vermont and the Administrator of EPA. Under the Lake Champlain Special Designation Act of 1990, the Congress tied financial support for Lake Champlain to a clear timetable: up to $5 million per year for 5 years. As shown in table VIII.1, funding for Lake Champlain has come from several sources, including voluntary participants (farmers); the amounts shown in the table are for funding through September 30, 1994. Recognizing that a regulatory program would likely polarize stakeholder groups, the project's participants agreed that the project should adopt a voluntary approach. Lake Champlain Basin Program officials believe that public education, support, and participation are crucial to getting voluntary action. In this regard, public meetings and other forums were used to break down the barriers between stakeholder groups so that constructive dialogue could take place. Basin program officials also believe that building on existing community organizations results in more effective, less costly, and more creative solutions than would result from an inflexible, prescriptive approach. Because many of the farms in New York and Vermont are marginal, family-owned dairy operations, the farmers and USDA officials we spoke to said that financial assistance is essential to the project's success. The financial assistance available to farmers has often been insufficient, and many farmers could not afford the cost share required of them. Furthermore, the $3,500 annual cap placed on all farm practices that fall under the Agricultural Conservation Program serves to deter farmers from implementing pollution-mitigating structures and practices. While more research is still needed, studies and monitoring efforts undertaken throughout the watershed have provided valuable information about water quality issues, such as phosphorous pollution in the lake, that helped set the framework for the management plan. Data from monitoring have not shown a noticeable water quality improvement, however, and additional information needs to be developed on how pollutants such as phosphorous are introduced into, travel through, and dissipate from the lake. From the agriculture/water quality standpoint, a few of the most significant accomplishments include the following: The states of New York and Vermont and the province of Québec signed a water quality agreement in 1993, which endorsed uniform interim goals for phosphorous management for Lake Champlain. A Lake Champlain Agricultural Advisory Council was established to help address agricultural issues relating to water quality throughout the watershed and ensure that farmers' needs for information are met. Two demonstration projects dealing with manure management were undertaken, and 70 farmers participated in manure management workshops designed to reduce the nutrient runoff entering Lake Champlain and its tributaries.
The major lessons of the Black Earth Creek watershed project are that (1) broad involvement by stakeholders is critical to a project’s success and (2) education is needed to promote long-term stewardship of a watershed.

The Black Earth Creek watershed covers about 64,000 acres, primarily in Dane County, Wisconsin, as shown in figure IX.1. Black Earth Creek and its tributaries support one of the state’s top recreational trout fisheries. Approximately 56 percent of the watershed is agricultural land, and most of the watershed’s approximately 380 farmers operate dairy farms. Other agricultural businesses in the watershed include hog and beef cattle operations and farms devoted to cash crops such as soybeans.

In response to anecdotal evidence of deteriorating stream conditions, the U.S. Geological Survey began a study of Black Earth Creek in 1984 to assess the hydrology, aquatic life, and water quality of the creek and its tributaries. The Survey collected data from sites along two of the creek’s tributaries and found problems with animal waste runoff, high sediment levels, and low dissolved oxygen levels. The extent of these problems was greater than originally anticipated. On the basis of local support and interest from Dane County and the local chapter of Trout Unlimited, a national sportsmen’s organization, the Wisconsin Department of Natural Resources designated Black Earth Creek a priority watershed in 1985. The Black Earth Creek project formally began in 1986.

The project is being implemented under the Wisconsin Nonpoint Source Water Pollution Abatement Program, also known as the Priority Watershed Program. The program provides state matching funds to encourage farmers to implement best management practices to reduce nonpoint-source pollution. The program targets critical landowners in each watershed, and participation is voluntary, although the state retains some enforcement authority. The project staff is drawn from a number of agencies and organizations, including Wisconsin’s Department of Natural Resources, the University of Wisconsin Extension, the Dane County Land Conservation Department, and USDA’s Natural Resources Conservation Service. Another active participant in the project has been the Black Earth Creek Watershed Association, a citizens’ group formed when the project began to provide a mechanism for local input. The watershed association’s charter is to “advocate the stewardship and sound management of land and water resources in the watershed and to serve as an information clearinghouse” for interested parties.

The watershed plan for Black Earth Creek was prepared jointly by the Department of Natural Resources; the county conservation department; and representatives from other federal, state, and local community organizations. The project focuses on surface water issues and covers both rural and urban sources of nonpoint pollution. On the rural side, the plan’s goals include (1) a 50-percent reduction in sediment and manure runoff and (2) habitat restoration in selected stream segments. To accomplish these goals, cropland management practices are needed on about 11,500 critical acres, barnyard runoff controls are needed at 65 of the livestock operations, and intensive stream bank work is needed on two segments of Black Earth Creek. On the urban side, the plan requires that a management plan for storm water be developed for one portion of the watershed.
The Black Earth Creek watershed project has received funding from a variety of state, local, and other sources, as shown in table IX.1, but no direct federal funding. Farmers have also used other state and federal funds (i.e., funds not tied specifically to the project) to implement conservation and other environmental practices in the watershed.

According to the project’s participants, early involvement by all watershed stakeholders was very important in facilitating understanding and consensus. To this end, the watershed association played a critical role. The association provided a forum for discussion, and its perceived neutrality was key to cutting through intransigence and bureaucracy and achieving consensus on issues to be addressed and actions to be taken. The watershed association also helped to alleviate farmers’ concerns that they were the only ones being “targeted” in the watershed. County staff also emphasized the importance of starting simple and building trust with the local community. Despite prior experience working with county staff on conservation planning, the farmers did not feel comfortable with the project until after repeated visits from county staff and word-of-mouth communication.

The Black Earth Creek project provides financial and technical assistance to participating farmers who sign long-term agreements to install and maintain certain practices. In general, project staff have emphasized management solutions over structural ones. Where structural solutions are necessary, the staff have encouraged simple, less expensive structures. For example, a $10,000 to $40,000 barnyard structure that catches solid waste runoff and drains liquids into a grassy filter strip is preferable to a 100-percent containment structure that would require more planning and cost tens of thousands of dollars more.

Project staff said that education is also an important component of the Black Earth Creek project and will be critical to its long-term success. An early challenge faced by the watershed association and project staff was to boost community interest in the creek. A variety of mechanisms, including audiovisual programs, printed materials, exhibits, media events, tours, demonstration activities, signs, workshops, meetings, youth education, recreational clinics, and fund raisers, have been used to boost the public’s awareness and provide information about the watershed.

Two farm demonstration projects have been implemented in Black Earth Creek. Dane County project staff said that while it takes 2 or 3 years before farmers will implement demonstrated practices in their own operations, such projects can be good vehicles for generating cooperation, especially if they are relatively simple and successful. One such demonstration project combined techniques to simultaneously protect fish habitat and stabilize a stream bank. The project used a USDA-approved practice, called rip-rap (the positioning of rocks to stabilize and shape the stream bank), to reduce erosion. However, since rip-rap alone would have destroyed the cave-like spaces in which certain fish hide and spawn, wooden boxes called “lunkers” were built into the stream bank to imitate the natural habitat. According to county staff, this demonstration project catalyzed partnerships among agencies and between agencies and farmers. They attributed all the stream bank work undertaken so far in the watershed to the success of this one demonstration project.
Although water quality monitoring in Black Earth Creek has been more intensive than in other watershed projects in the state because of the state’s priorities, Black Earth Creek staff said that extensive monitoring is not always necessary. Project staff said that decisionmakers should adjust their expectations and look to indicators of success other than chemical changes in water quality when evaluating watershed projects. Other measures, such as the level of farmers’ participation, the level of community support, and the monitoring of plants and aquatic life forms, are also valid indicators of a project’s success.

Project staff favor a voluntary approach to watershed management but acknowledged that regulation to establish minimum standards for farm management may be needed to deal with egregious behavior. The Wisconsin Department of Natural Resources retains certain enforcement authorities that it can use against participants who violate their cost-sharing agreements or against other uncooperative individuals. For example, if landowners violate the terms of their cost-sharing arrangement, the state may revoke its offer of cost sharing and substitute a low-interest loan. For critical sites in a watershed project, the state can issue compliance orders.

Preliminary monitoring data (collected up to 1992) show significant decreases in nitrates and sediment in one subwatershed of Black Earth Creek. In addition, the fish population has increased at the stream bank restoration demonstration site, but Department of Natural Resources officials could not attribute this improvement solely to the project’s activities. Thus far, 103 Black Earth Creek landowners have signed county cost-sharing agreements to implement environmentally friendly management practices. County staff said that only about two dozen farmers have taken no action, and of those, only a few have serious problems.

Major contributors to this report were Keith W. Oleson, Assistant Director; Jonda R. Van Pelt, Project Leader; Linda Chu, Staff Evaluator; Jonathan M. Silverman, Communications Analyst; and William D. Prentiss, Graphics Adviser.

Pursuant to a congressional request, GAO examined the effects of agricultural production on water pollution, focusing on lessons learned from nine innovative, successful watershed projects that reduced such pollution. GAO found that: (1) the watershed projects ranged from 5 acres to 150 million acres and involved both surface water and groundwater resources; (2) the watershed projects have received an estimated $514 million in federal funds since 1995; (3) all nine watershed participants stressed the need for flexibility in the kinds of financial and technical assistance provided by federal agencies; (4) the watershed participants were able to adopt local approaches to watershed management, since the watersheds had different characteristics; and (5) project participants at the local level emphasized that the keys to reducing agricultural pollution are to build citizen cooperation through education, involve stakeholders in developing project goals, and tailor project strategies.